CN104915981A - Three-dimensional hairstyle design method based on somatosensory sensor - Google Patents

Three-dimensional hairstyle design method based on somatosensory sensor

Info

Publication number
CN104915981A
CN104915981A
Authority
CN
China
Prior art keywords
dimensional
model
dimensional model
head
personage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510233589.4A
Other languages
Chinese (zh)
Inventor
寇懿
寇修君
孙英华
刘燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201510233589.4A priority Critical patent/CN104915981A/en
Publication of CN104915981A publication Critical patent/CN104915981A/en
Pending legal-status Critical Current

Links

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a three-dimensional hairstyle design method based on a somatosensory sensor, comprising the following steps: collecting depth image information of a person's face from the front, the left side, the right side and a look-up view; establishing a three-dimensional model of the person's face; combining the three-dimensional face model with a standard head model; increasing the mesh count and precision of the model by an "interpolation" method; fusing color image information with a standard skin template to generate a model texture map; superposing a three-dimensional hairstyle model onto the three-dimensional head model; and displaying the resulting three-dimensional hairstyle design model. Compared with full 360° scanning, the method is simpler and more flexible to operate, and avoids burrs, dislocations and similar artifacts during model reconstruction. At the same time, the mesh count and precision of the model are greatly increased, raising the quality of the three-dimensional model to photo level and delivering a new standard of appearance and user experience.

Description

Three-dimensional hairstyle design method based on a somatosensory sensor
Technical field
The present invention relates to the technical field of hairstyle design, and in particular to a three-dimensional hairstyle design method based on a somatosensory sensor.
Background art
Hairdressing is an indispensable part of people's daily life, adding color and brightness to it. When visiting a hair salon, we usually communicate with the stylist through verbal descriptions or pictures of models to design the hairstyle we want. If customers could view a three-dimensional model of themselves after the haircut from multiple angles and directly perceive the effect of the hairstyle design, both the customer experience and the efficiency of hairdressing would improve.
Patent publication No. CN103489219A discloses a 3D hairstyle simulation method based on depth image analysis. The method obtains information such as the size, position and orientation of the face through a Kinect somatosensory sensor, transforms a hairstyle model according to this information, and overlays it onto the camera video stream. Although the hairstyle model is three-dimensional, the face shown in the video remains two-dimensional, so the realism and fit are poor. Moreover, the hairstyle design result generated by this method is not a full three-dimensional model and cannot be shown on 3D devices such as 3D displays, so its functionality is rather limited.
Patent publication No. CN103854303A discloses a three-dimensional stereoscopic hairstyle design system and method based on a somatosensory sensor, characterized in that the somatosensory sensor collects 360° three-dimensional data of a person on a rotating mechanism, generates a three-dimensional model of the human head, and then superposes this model with a three-dimensional hairstyle model to generate a fully three-dimensional hairstyle design simulation. However, limited by the resolution of existing sensors and affected by body shake, the head models generated by this kind of method suffer from unevenness, burrs, dislocations and similar artifacts, degrading the final experience and appearance.
Summary of the invention
In view of the above shortcomings of the prior art, the present invention provides an improved three-dimensional hairstyle design method based on a somatosensory sensor.
To achieve the above object, the present invention is realized through the following technical solutions.
A three-dimensional hairstyle design method based on a somatosensory sensor comprises the following steps:
Step S1: use the somatosensory sensor to collect depth image information of the person's face from the front, the left side, the right side and a look-up view;
Step S2: establish a three-dimensional stereoscopic model of the person's face from the depth image information obtained in step S1;
Step S3: combine the three-dimensional face model with a standard head model to generate a three-dimensional model of the person's head;
Step S4: use an interpolation method to increase the mesh count and precision of the head model;
Step S5: use the somatosensory sensor to collect color image information of the person's face from the front, the left side, the right side and a look-up view; using an image fusion method, fuse the color image information with a standard skin template to generate a real-time texture map matching the head model, and apply the texture map to the head model to generate the complete three-dimensional head model; the texture map gives the model its surface texture and color, i.e. it "paints" the three-dimensional head model;
Step S6: superpose a three-dimensional hairstyle model from the hairstyle library onto the complete head model generated in step S5 to generate the person's three-dimensional hairstyle design model.
Preferably, the interpolation method is specifically: add one vertex on each edge of the mesh of the head model, and form multiple new mesh cells together with the original vertices.
Preferably, the image fusion method is specifically: mutually fuse the facial detail information extracted from the current color image with the standard skin template to generate a real-time texture map matching the head model; and determine the UV coordinates of the texture map using the correspondence between the color image and the depth image.
Preferably, the mutual fusion process comprises: segmenting the color image, Gaussian-blurring the edges, adjusting color levels, saturation and brightness, and splicing the result onto the standard skin template image. The color image is segmented by applying Otsu's method to the Cr channel of the YCrCb color space; a Gaussian blur is then applied to the segmented image borders to fade the contour edges; next, according to the difference between the skin color in the color image and that in the standard skin template, the color levels, saturation and brightness are adjusted so that the two skin colors match more closely; finally, the segmented color image is spliced onto the standard skin template image.
Preferably, the somatosensory sensor used integrates a depth image sensor and a color image sensor.
Preferably, in step S2, from the depth image information of the person's face collected in step S1 from the front, the left side, the right side and the look-up view, the positions and three-dimensional contours of the eyes, nose, mouth and face shape that embody the person's facial features are identified first, and the three-dimensional face model is then reconstructed from these positions and contours.
Preferably, the standard head model is an averaged human head model established in advance. After scaling, rotation and translation transformations, the three-dimensional data of the standard head model is aligned and matched with the three-dimensional face model; the face portion of the standard head model is then refreshed to generate a three-dimensional head model bearing the person's facial features.
Preferably, the method further comprises:
Step S7: display the person's three-dimensional hairstyle design model through 360°.
The 360° display of the three-dimensional hairstyle design model can be shown on any one of the following terminals:
- an ordinary LCD television;
- a monitor;
- a 3D television;
- a 3D polarized projection device;
- a 3D holographic projector;
- a 3D head-mounted display.
Preferably, when displaying on a 3D terminal, parallax images corresponding to the left and right eyes are generated from the three-dimensional hairstyle design model and then output to the corresponding 3D terminal.
Compared with the prior art, the present invention has the following beneficial effects:
1. The invention collects only the depth image information of the person's face from the front, the left side, the right side and a look-up view; compared with 360° scanning, the operation is simpler and more flexible.
2. The invention first establishes a three-dimensional face model and then combines it with a standard head model; compared with 360° scanning, this avoids model dislocations, burrs and similar artifacts.
3. The "interpolation" method greatly increases the mesh count and precision of the model.
4. The invention raises the reconstructed head model to photo-level quality, giving the three-dimensional hairstyle design model an unprecedented look and feel, while also opening up room for multiple application extensions such as display on 3D devices and the production of hairstyle design albums.
Description of the drawings
Other features, objects and advantages of the present invention will become more apparent by reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is a flow chart of the method of the present invention;
Fig. 2 is a schematic diagram of the interpolation method of the present invention (taking a triangle as an example);
Fig. 3 is a schematic diagram of an implementation of the present invention.
In the figures: 1 is the target person, 2 is the three-dimensional model display unit, 3 is the somatosensory sensor unit, 4 is the light source mechanism, and 5 is the three-dimensional model construction unit.
Detailed description of the embodiments
The embodiments of the present invention are described in detail below. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation mode and specific operating procedures are given. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the present invention, all of which fall within the protection scope of the present invention.
Embodiment
This embodiment provides a three-dimensional hairstyle design method based on a somatosensory sensor, comprising the following steps:
Step S1: use the somatosensory sensor to collect depth image information of the person's face from the front, the left side, the right side and a look-up view; the somatosensory sensor involved is a sensor unit integrating a depth image sensor and a color image sensor.
Step S2: generate a three-dimensional model of the person's face. From the depth image information collected in step S1, the positions and three-dimensional contours of the main facial features, such as the eyes, nose, mouth and face shape, are identified first, and the three-dimensional face model is then reconstructed from this information.
Step S3: combine the three-dimensional face model with a standard head model to generate a three-dimensional model of the person's head. The standard head model is an averaged human head model established in advance; after transformations such as scaling, rotation and translation, the standard head model data is aligned and matched with the face model, and the face portion of the standard head model is then refreshed, achieving a "face-swap" effect on the standard head model.
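As an illustrative sketch (not part of the patent text), the alignment of a standard head model to the captured face model can be approximated by a similarity transform estimated from matched landmark pairs such as the eyes, nose tip and mouth corners. The function and landmark names below are hypothetical, and rotation estimation is omitted for brevity; a full solution would estimate rotation as well, e.g. with the Kabsch/Procrustes algorithm:

```python
# Hypothetical sketch: scale-and-translate alignment of a standard head
# model to a captured face model, estimated from matched 3D landmarks.

def centroid(pts):
    # Mean position of a list of (x, y, z) points.
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

def rms_spread(pts, c):
    # Root-mean-square distance of the points from centroid c.
    return (sum(sum((p[i] - c[i]) ** 2 for i in range(3))
                for p in pts) / len(pts)) ** 0.5

def align_to_face(head_vertices, head_landmarks, face_landmarks):
    """Scale and translate head_vertices so that head_landmarks match
    face_landmarks in centroid and spread (rotation not estimated)."""
    ch, cf = centroid(head_landmarks), centroid(face_landmarks)
    s = rms_spread(face_landmarks, cf) / rms_spread(head_landmarks, ch)
    return [tuple(s * (v[i] - ch[i]) + cf[i] for i in range(3))
            for v in head_vertices]
```

After alignment, the face region of the standard head mesh would be replaced ("refreshed") with the reconstructed face vertices.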
Step S4: apply the "interpolation" method to increase the mesh count and precision of the head model. The so-called "interpolation" method adds one vertex on each edge of the mesh of the head model, forming multiple new mesh cells together with the original vertices. As shown in Fig. 2, taking a triangle as an example, one vertex (P12, P13, P23) is added on each of the three edges of a mesh triangle, forming four new mesh triangles (T1, T2, T3, T4) together with the original vertices. Iterating this method several times increases the mesh count and precision of the model. When the head model uses quadrilateral or polygonal meshes, one vertex is added on each edge of the quadrilateral or polygon, forming multiple new quadrilaterals or polygons together with the original vertices.
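A minimal sketch of the midpoint subdivision of Fig. 2, assuming triangles stored as tuples of (x, y, z) vertex coordinates; a production mesh would use indexed vertices and deduplicate the shared midpoints:

```python
# Illustrative sketch of the "interpolation" (midpoint subdivision) step:
# each triangle edge gains a midpoint vertex (P12, P13, P23) and the
# triangle is replaced by four smaller ones (T1..T4).

def midpoint(a, b):
    return tuple((a[i] + b[i]) / 2 for i in range(3))

def subdivide_triangle(p1, p2, p3):
    p12, p13, p23 = midpoint(p1, p2), midpoint(p1, p3), midpoint(p2, p3)
    return [(p1, p12, p13),   # T1
            (p12, p2, p23),   # T2
            (p13, p23, p3),   # T3
            (p12, p23, p13)]  # T4

def subdivide_mesh(triangles, iterations=1):
    # Each iteration multiplies the triangle count by 4.
    for _ in range(iterations):
        triangles = [t for tri in triangles for t in subdivide_triangle(*tri)]
    return triangles
```

Because the midpoints lie on the original surface, repeated iteration densifies the mesh without distorting the reconstructed shape.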
Step S5: using an image fusion method, generate from the color image information and the standard skin template a real-time texture map matching the head model, apply the texture map to the head model, and generate the complete three-dimensional head model. The UV coordinates of the texture map are determined using the correspondence between the color image and the depth image. The fusion process comprises: segmenting the color image, Gaussian-blurring the edges, adjusting color levels, saturation and brightness, and splicing the result onto the standard skin template image. The color image is segmented by applying Otsu's method to the Cr channel of the YCrCb color space; a Gaussian blur is then applied to the segmented image borders to fade the contour edges; next, according to the difference between the skin colors in the color image and the standard skin template, the color levels, saturation and brightness are adjusted so that they match more closely; finally, the segmented color image is spliced onto the standard skin template image.
Step S6: superpose a three-dimensional hairstyle model from the hairstyle library onto the complete head model generated in step S5. The hairstyle models in the library can be rendered with NVIDIA's HairWorks rendering technology, or with AMD's TressFX rendering technology.
Step S7: display the person's three-dimensional hairstyle design model through 360°. The final complete three-dimensional hairstyle design model can be shown not only on ordinary LCD televisions and monitors, but also on 3D devices such as 3D televisions, polarized projectors and holographic projectors. For display on a 3D device, it suffices to generate parallax images corresponding to the left and right eyes from the model and output them to the device. In addition, the final model can be made into print media such as photographs and albums for people to appreciate and keep.
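Generating the left- and right-eye parallax images amounts to projecting the model twice from horizontally offset camera positions. A toy sketch under pinhole-camera, parallel-rig assumptions (the 0.065 m eye separation is a common default, not a figure from the patent):

```python
# Illustrative sketch: a stereo pair is produced by projecting each 3D
# point from two cameras displaced horizontally by the eye separation.

def project(point, cam_x, focal=1.0):
    # Pinhole projection; camera at (cam_x, 0, 0) looking down +z, z > 0.
    x, y, z = point
    return ((x - cam_x) * focal / z, y * focal / z)

def stereo_pair(points, eye_separation=0.065):
    half = eye_separation / 2
    left = [project(p, -half) for p in points]
    right = [project(p, +half) for p in points]
    return left, right
```

Nearer points receive larger horizontal disparity between the two views, which is what produces the depth sensation on a 3D display.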
The invention is further described below with reference to the accompanying drawings.
As shown in Fig. 3, the improved three-dimensional hairstyle design method based on a somatosensory sensor uses the following devices: a three-dimensional model display terminal 2, a somatosensory sensor 3, a light source mechanism 4 and a three-dimensional model construction unit 5. The somatosensory sensor can be the Kinect sensor developed by Microsoft, or a somatosensory sensor developed by ASUS or another company; it is a sensor unit integrating a depth image sensor and a color image sensor. The light source mechanism is a tungsten-halogen lamp or an LED light source. The three-dimensional model display unit can be an ordinary LCD television or monitor, or a 3D display device such as a 3D television, polarized projector or holographic projector.
When the method provided in this embodiment is used, the target person 1 first stands in front of the somatosensory sensor 3. After the sensor detects the person, he or she faces the sensor as prompted, then slowly turns the head to the left, to the right, and upward; during this process the sensor collects depth image information of the face from the front, the left side, the right side and the look-up view. Before data collection, a rubber cap can be worn on the person's head to constrain and cover the original hair.
The three-dimensional model construction unit 5 identifies, from the collected depth image information, the positions and three-dimensional contours of the main facial features, such as the eyes, nose, mouth and face shape, and then reconstructs the three-dimensional face model from this information.
The three-dimensional model construction unit 5 combines the generated face model with the standard head model, and applies the "interpolation" method to increase the mesh count and precision of the head model.
The three-dimensional model construction unit 5 generates, from the color image information and the standard skin template, a real-time texture map matching the head model.
The three-dimensional model construction unit 5 superposes a three-dimensional hairstyle model from the hairstyle library onto the head model, and sends the final hairstyle design result to the three-dimensional model display terminal. The hairstyle models can be rendered with NVIDIA's HairWorks rendering technology, or with AMD's TressFX rendering technology.
Through the somatosensory sensor 3, the target person can use gestures to control and switch the three-dimensional models on the display unit: for example, raising the left hand switches to the next hairstyle, and raising the right hand rotates the currently displayed model.
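The gesture control described above can be sketched as a small state machine; the gesture labels below are hypothetical placeholders for whatever events the somatosensory SDK actually reports:

```python
# Illustrative sketch of the gesture control loop: left hand up advances
# to the next hairstyle in the library; right hand up toggles rotation
# of the displayed model. Gesture names are assumptions for this sketch.

class HairstyleViewer:
    def __init__(self, styles):
        self.styles = styles
        self.index = 0
        self.rotating = False

    def on_gesture(self, gesture):
        if gesture == "left_hand_up":
            self.index = (self.index + 1) % len(self.styles)
        elif gesture == "right_hand_up":
            self.rotating = not self.rotating

    @property
    def current(self):
        return self.styles[self.index]
```

In a real system, `on_gesture` would be registered as a callback on the sensor's skeleton-tracking event stream.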
In the present embodiment:
Step S1: use the somatosensory sensor to collect depth image information of the person's face from the front, the left side, the right side and a look-up view;
Step S2: generate a three-dimensional model of the person's face;
Step S3: combine the face model with the standard three-dimensional head model to generate the person's head model;
Step S4: apply the "interpolation" method to increase the mesh count and precision of the head model;
Step S5: using an image fusion method, generate from the color image information and the standard skin template a real-time texture map matching the head model, apply it to the head model, and generate the complete three-dimensional head model;
Step S6: superpose a three-dimensional hairstyle model from the hairstyle library onto the complete head model;
Step S7: display the person's three-dimensional hairstyle design model through 360°.
The somatosensory sensor involved is characterized by a sensor unit integrating a depth image sensor and a color image sensor.
Depth image information is collected for the person's face only, rather than the whole head. After the face model is generated, the standard three-dimensional head model is refreshed with the face data, achieving a "face-swap" effect on the standard head model.
The "interpolation" method is applied to increase the mesh count and precision of the head model: one vertex is added on each of the three edges of a mesh triangle, forming multiple new mesh triangles together with the original vertices. Iterating this method several times increases the mesh count and precision of the model.
Using the image fusion method, the facial detail information extracted from the current color image is fused with the standard skin template image to generate a real-time texture map matching the head model. The UV coordinates of the texture map are determined using the correspondence between the color image and the depth image. The fusion process comprises: segmenting the color image, Gaussian-blurring the edges, adjusting color levels, saturation and brightness, and splicing the result onto the standard skin template image.
The final complete three-dimensional hairstyle design model can be shown not only on ordinary LCD televisions and monitors, but also on 3D devices such as 3D televisions, polarized projectors and holographic projectors. For display on a 3D device, it suffices to generate parallax images corresponding to the left and right eyes from the model and output them to the device.
The final complete model can also be made into print media such as photographs and albums for people to appreciate and keep.
In the three-dimensional hairstyle design method based on a somatosensory sensor provided by this embodiment: 1) the somatosensory sensor collects depth image information of the person from the front, the left side, the right side and a look-up view, and a three-dimensional face model is generated; 2) the face model is combined with a standard three-dimensional head model to generate the person's head model; 3) to address insufficient model precision, the "interpolation" method is applied to increase the mesh count and precision; 4) using an image fusion method, a real-time texture map matching the head model is generated from the color image information and the standard skin template; 5) the final complete three-dimensional hairstyle design model can be shown not only on ordinary LCD televisions and monitors, but also on 3D devices such as 3D televisions and holographic projectors. Compared with the prior art, this embodiment is simpler and more flexible to operate, and avoids burrs, dislocations and similar artifacts during model reconstruction. At the same time, the mesh count and precision of the model are greatly increased, raising the quality of the three-dimensional model to photo level and giving it a new look and feel.
Specific embodiments of the present invention have been described above. It is to be understood that the present invention is not limited to the above particular implementations; those skilled in the art can make various variations or modifications within the scope of the claims, and this does not affect the substance of the present invention.

Claims (10)

1. A three-dimensional hairstyle design method based on a somatosensory sensor, characterized by comprising the following steps:
Step S1: using the somatosensory sensor to collect depth image information of a person's face from the front, the left side, the right side and a look-up view;
Step S2: establishing a three-dimensional stereoscopic model of the person's face from the depth image information obtained in step S1;
Step S3: combining the three-dimensional face model with a standard head model to generate a three-dimensional model of the person's head;
Step S4: using an interpolation method to increase the mesh count and precision of the head model;
Step S5: using the somatosensory sensor to collect color image information of the person's face from the front, the left side, the right side and a look-up view; using an image fusion method, fusing the color image information with a standard skin template to generate a real-time texture map matching the head model, and applying the texture map to the head model to generate the complete three-dimensional head model;
Step S6: superposing a three-dimensional hairstyle model from a hairstyle library onto the complete head model generated in step S5 to generate the person's three-dimensional hairstyle design model.
2. The three-dimensional hairstyle design method based on a somatosensory sensor according to claim 1, characterized in that the interpolation method is specifically: adding one vertex on each edge of the mesh of the head model, and forming multiple new mesh cells together with the original vertices.
3. The three-dimensional hairstyle design method based on a somatosensory sensor according to claim 1, characterized in that the image fusion method is specifically: mutually fusing the facial detail information extracted from the current color image with the standard skin template to generate a real-time texture map matching the head model; and determining the UV coordinates of the texture map using the correspondence between the color image and the depth image.
4. The three-dimensional hairstyle design method based on a somatosensory sensor according to claim 3, characterized in that the mutual fusion process comprises: segmenting the color image, Gaussian-blurring the edges, adjusting color levels, saturation and brightness, and splicing the result onto the standard skin template image; wherein the color image is segmented by applying Otsu's method to the Cr channel of the YCrCb color space; a Gaussian blur is then applied to the segmented image borders to fade the contour edges; next, according to the difference between the skin color in the color image and that in the standard skin template, the color levels, saturation and brightness are adjusted so that the two skin colors match more closely; finally, the segmented color image is spliced onto the standard skin template image.
5. The three-dimensional hairstyle design method based on a somatosensory sensor according to claim 1, characterized in that the somatosensory sensor used integrates a depth image sensor and a color image sensor.
6. The three-dimensional hairstyle design method based on a somatosensory sensor according to claim 1, characterized in that in step S2, from the depth image information collected in step S1 from the front, the left side, the right side and the look-up view, the positions and three-dimensional contours of the eyes, nose, mouth and face shape that embody the person's facial features are identified first, and the three-dimensional face model is then reconstructed from these positions and contours.
7. The three-dimensional hairstyle design method based on a somatosensory sensor according to claim 1, characterized in that the standard head model is an averaged human head model established in advance; after scaling, rotation and translation transformations, the three-dimensional data of the standard head model is aligned and matched with the three-dimensional face model, and the face portion of the standard head model is then refreshed to generate a three-dimensional head model bearing the person's facial features.
8. The three-dimensional hairstyle design method based on a somatosensory sensor according to claim 1, characterized by further comprising:
Step S7: displaying the person's three-dimensional hairstyle design model through 360°.
9. The three-dimensional hairstyle design method based on a somatosensory sensor according to claim 8, characterized in that the 360° display of the person's three-dimensional hairstyle design model is shown on any one of the following terminals:
- an ordinary LCD television;
- a monitor;
- a 3D television;
- a 3D polarized projection device;
- a 3D holographic projector;
- a 3D head-mounted display.
10. The three-dimensional hairstyle design method based on a somatosensory sensor according to claim 9, characterized in that when displaying on a 3D terminal, parallax images corresponding to the left and right eyes are generated from the three-dimensional hairstyle design model and then output to the corresponding 3D terminal.
CN201510233589.4A 2015-05-08 2015-05-08 Three-dimensional hairstyle design method based on somatosensory sensor Pending CN104915981A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510233589.4A CN104915981A (en) 2015-05-08 2015-05-08 Three-dimensional hairstyle design method based on somatosensory sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510233589.4A CN104915981A (en) 2015-05-08 2015-05-08 Three-dimensional hairstyle design method based on somatosensory sensor

Publications (1)

Publication Number Publication Date
CN104915981A true CN104915981A (en) 2015-09-16

Family

ID=54085017

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510233589.4A Pending CN104915981A (en) 2015-05-08 2015-05-08 Three-dimensional hairstyle design method based on somatosensory sensor

Country Status (1)

Country Link
CN (1) CN104915981A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013024847A1 * 2011-08-18 2013-02-21 Sharp Corporation Stereoscopic image generating device, stereoscopic image display device, stereoscopic image generating method, and program
CN103854303A (en) * 2014-03-06 2014-06-11 寇懿 Three-dimensional hair style design system and method based on somatosensory sensor
CN104574306A (en) * 2014-12-24 2015-04-29 掌赢信息科技(上海)有限公司 Face beautifying method for real-time video and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Feng et al.: "Construction of a Three-Dimensional Head Model Based on Orthogonal Images", Journal of Southeast University (Natural Science Edition) *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105513013A (en) * 2016-01-18 2016-04-20 王雨轩 Method for compounding hair styles in mobile phone pictures
CN105513013B (en) * 2016-01-18 2018-05-18 王雨轩 A kind of picture of mobile telephone hair style synthetic method
CN106127861B (en) * 2016-06-29 2019-02-19 黄丽英 Wearable hair style simulator and its analog control system
CN106127861A (en) * 2016-06-29 2016-11-16 黄丽英 Wearable hair style analog and analog control system thereof
CN106108349A (en) * 2016-06-29 2016-11-16 黄丽英 3 D stereo hair style analog and analog control system thereof
CN106136559A (en) * 2016-06-29 2016-11-23 黄丽英 Self-service hair style analogue means and analog control system thereof
CN106136559B (en) * 2016-06-29 2019-07-16 南宁远卓新能源科技有限公司 Self-service hair style simulator and its analog control system
CN106108349B (en) * 2016-06-29 2019-06-14 黄丽英 3 D stereo hair style simulator and its analog control system
CN106529502A (en) * 2016-08-01 2017-03-22 深圳奥比中光科技有限公司 Lip language identification method and apparatus
CN107045385A (en) * 2016-08-01 2017-08-15 深圳奥比中光科技有限公司 Lip reading exchange method and lip reading interactive device based on depth image
CN107194981A (en) * 2017-04-18 2017-09-22 武汉市爱米诺网络科技有限公司 Hair style virtual display system and its method
CN107560663A (en) * 2017-07-21 2018-01-09 深圳市易成自动驾驶技术有限公司 Ambient parameter detection method and system, storage medium
CN107343151A (en) * 2017-07-31 2017-11-10 广东欧珀移动通信有限公司 image processing method, device and terminal
CN107343151B (en) * 2017-07-31 2019-07-19 Oppo广东移动通信有限公司 Image processing method, device and terminal
US10789784B2 (en) 2018-05-23 2020-09-29 Asustek Computer Inc. Image display method, electronic device, and non-transitory computer readable recording medium for quickly providing simulated two-dimensional head portrait as reference after plastic operation
US11120624B2 (en) 2018-05-23 2021-09-14 Asustek Computer Inc. Three-dimensional head portrait generating method and electronic device
CN110910487B (en) * 2018-09-18 2023-07-25 Oppo广东移动通信有限公司 Construction method, construction device, electronic device, and computer-readable storage medium
CN110910487A (en) * 2018-09-18 2020-03-24 Oppo广东移动通信有限公司 Construction method, construction apparatus, electronic apparatus, and computer-readable storage medium
CN111899159A (en) * 2020-07-31 2020-11-06 北京百度网讯科技有限公司 Method, device, apparatus and storage medium for changing hairstyle
CN111899159B (en) * 2020-07-31 2023-12-22 北京百度网讯科技有限公司 Method, device, apparatus and storage medium for changing hairstyle
CN112991523A (en) * 2021-04-02 2021-06-18 福建天晴在线互动科技有限公司 Efficient and automatic hair matching head type generation method and generation device thereof
CN112991523B (en) * 2021-04-02 2023-06-30 福建天晴在线互动科技有限公司 Efficient and automatic hair matching head shape generation method and generation device thereof
CN113269888B (en) * 2021-05-25 2022-08-19 山东大学 Hairstyle three-dimensional modeling method, character three-dimensional modeling method and system
CN113269888A (en) * 2021-05-25 2021-08-17 山东大学 Hairstyle three-dimensional modeling method, character three-dimensional modeling method and system
CN113450899A (en) * 2021-06-22 2021-09-28 上海市第一人民医院 Intelligent diagnosis guiding method based on artificial intelligence cardiopulmonary examination images

Similar Documents

Publication Publication Date Title
CN104915981A (en) Three-dimensional hairstyle design method based on somatosensory sensor
US10684467B2 (en) Image processing for head mounted display devices
CN105404392B (en) Virtual method of wearing and system based on monocular cam
US10540817B2 (en) System and method for creating a full head 3D morphable model
CN106157359B (en) Design method of virtual scene experience system
CN104008569B (en) A kind of 3D scene generating method based on deep video
CN103854303A (en) Three-dimensional hair style design system and method based on somatosensory sensor
KR20180108709A (en) How to virtually dress a user's realistic body model
CN105913416A (en) Method for automatically segmenting three-dimensional human face model area
CN104599317B (en) A kind of mobile terminal and method for realizing 3D scanning modeling functions
CN110246209B (en) Image processing method and device
CN101968892A (en) Method for automatically adjusting three-dimensional face model according to one face picture
CN109255841A (en) AR image presentation method, device, terminal and storage medium
CN109147037A (en) Effect processing method, device and electronic equipment based on threedimensional model
CN104091366B (en) Three-dimensional intelligent digitalization generation method and system based on two-dimensional shadow information
CN113362422B (en) Shadow robust makeup transfer system and method based on decoupling representation
CN107194981A (en) Hair style virtual display system and its method
CN110197532A (en) System, method, apparatus and the computer storage medium of augmented reality meeting-place arrangement
CN114821675A (en) Object processing method and system and processor
Wang et al. Wuju opera cultural creative products and research on visual image under VR technology
CN104680574A (en) Method for automatically generating 3D face according to photo
CN110730303B (en) Image hair dyeing processing method, device, terminal and storage medium
CN111652022B (en) Image data display method, image data live broadcast device, electronic equipment and storage medium
WO2020119518A1 (en) Control method and device based on spatial awareness of artificial retina
CN104091318B (en) A kind of synthetic method of Chinese Sign Language video transition frame

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150916
