CN104376599A - Handy three-dimensional head model generation system - Google Patents

Handy three-dimensional head model generation system

Info

Publication number
CN104376599A
Authority
CN
China
Prior art keywords
dimensional head
head model
dimensional
model
initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410752439.XA
Other languages
Chinese (zh)
Inventor
许奇明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SUZHOU LIDUO NETWORK TECHNOLOGY Co Ltd
Original Assignee
SUZHOU LIDUO NETWORK TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SUZHOU LIDUO NETWORK TECHNOLOGY Co Ltd filed Critical SUZHOU LIDUO NETWORK TECHNOLOGY Co Ltd
Priority to CN201410752439.XA
Publication of CN104376599A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a handy three-dimensional head model generation system for use in common indoor environments and for applications such as virtual reality and personalized animation. The system comprises a hardware portion and a software portion. The hardware portion comprises a depth sensor equipped with a color camera. The software portion performs the following steps: (1) reconstructing an initial three-dimensional head model from a frontal face image acquired by the color camera; (2) extracting facial feature points from the frontal face image; (3) estimating the coefficients of a radial basis function from the initial three-dimensional head model, the facial feature points, and the depth image acquired by the depth sensor; and (4) optimizing and reconstructing the initial three-dimensional head model based on the radial basis function.

Description

A handy three-dimensional head model generation system
Technical field
The invention belongs to the technical field of image processing and 3D data acquisition, and relates in particular to a handy three-dimensional head model generation system.
Background art
In recent years, with the development of hardware technology and the improvement of computing power, the demand for three-dimensional data has grown continuously. In particular, acquiring three-dimensional head model data quickly and conveniently with low-cost equipment has become a popular research and application direction.
Methods for acquiring three-dimensional head model data can be roughly divided into two classes according to the sensor used. The first class relies on active range sensors, which obtain the range information of a scene by emitting electromagnetic waves or other energy into the environment and analyzing the reflected signal. Laser scanners are the mainstream devices of this class, for example the Cyberware PX. Although a laser scanner can produce very high-precision reconstructions, such equipment is bulky and extremely expensive (on the order of one million yuan), and it can only acquire three-dimensional structure, not head texture data, which limits its widespread use. White-light structured-light methods can produce high-quality textured three-dimensional reconstructions, for example the Artec EVA. However, such devices emit bright visible light during scanning, which is uncomfortable to look at directly, so users are generally asked to close their eyes while being scanned, degrading the user experience. These devices also place rather strict requirements on the surface material of the scanned object; dark regions such as hair cannot be reconstructed accurately and generally require manual post-processing in software. In addition, such equipment is also relatively expensive (over one hundred thousand yuan) and is not suitable for ordinary consumers. After the start of the 21st century, time-of-flight (ToF) cameras, which measure the flight time of light to obtain the three-dimensional structure of a scene, began to appear; a representative product is the SR4000 from Mesa Imaging. These products acquire fairly reliable three-dimensional data at a relatively low price (tens of thousands of yuan), but their drawback is very low spatial resolution: a single 3D frame generally contains only about 20,000 pixels, which is far from sufficient for three-dimensional head model acquisition. In recent years, miniaturized infrared structured-light projection technology, represented by the Microsoft Kinect, has matured. Such devices can acquire three-dimensional data of higher spatial resolution (about 300,000 pixels) at a lower price (about 2,000 yuan), but their spatial resolution still cannot meet the demands of high-precision three-dimensional face model acquisition. To alleviate the low-resolution problem, Newcombe et al. proposed scanning around the face with a hand-held Kinect device (Newcombe R, Izadi S, Hilliges O, Molyneaux D, Kim D, Davison A, Kohli P, Shotton J, Hodges S, Fitzgibbon A. KinectFusion: Real-Time Dense Surface Mapping and Tracking. IEEE ISMAR, 2011.), but this method requires the scanned subject to remain still for about 20 seconds, which is impractical for people with poor self-control such as children or the elderly.
The other class of three-dimensional model reconstruction methods relies on passive range sensors, such as digital cameras. Digital camera technology is now very mature: a camera costing around one thousand yuan can reach a spatial resolution of tens of millions of pixels, far higher than the active range sensors discussed above. From the standpoint of spatial resolution, stereo vision based on two digital cameras is therefore well suited to low-cost, high-precision three-dimensional face model acquisition. However, stereo vision systems require the scene to be relatively rich in texture, and the 3D data obtained in weakly textured regions is unstable, which hinders their practical application to three-dimensional face model acquisition.
Summary of the invention
The technical problem solved by the present invention is as follows: almost all current three-dimensional head model acquisition techniques based on active range sensors suffer from high cost, poor user experience, or insufficient spatial resolution, while techniques based on passive range sensors suffer from low stability. The present invention uses the depth image and color image data of a low-cost depth sensor device to realize a three-dimensional head model acquisition system that is low-cost, captures instantaneously, and is highly reliable.
To achieve the above object, the technical solution adopted by the present invention is as follows. The system comprises the following hardware: a depth sensor device. The depth sensor device includes a synchronized color camera and can provide the correspondence between depth image pixels and color image pixels; specifically, a Microsoft Kinect sensor or an Intel RealSense sensor may be used. One depth data frame and one corresponding color image frame are captured synchronously by the computer. The data is then processed by a software algorithm whose main steps are as follows: reconstruct an initial three-dimensional head model from the frontal face image captured by the color camera using a 3D Morphable Model; define facial feature points using the semantic information of the vertices of the initial three-dimensional head model, and obtain the coordinates of these feature points in the color face image using the Active Shape Model (ASM) technique; use the correspondence between depth image and color image pixels to obtain the three-dimensional coordinates of the feature points, and compute the linear combination coefficients of a radial basis function from these coordinates and the three-dimensional coordinates of the corresponding feature points in the initial model; finally, use the computed coefficients to adjust the vertex positions of the initial three-dimensional head model via the radial basis function, yielding the final three-dimensional head model.
Compared with the prior art, the present invention has the following advantages and effects: low overall equipment cost (about 2,000 yuan); high acquisition speed (capture is instantaneous) and no visible-light emission, giving a good user experience; and strong stability (normal indoor lighting suffices, no special lighting needs to be set up, and the generated model surface is smooth).
Brief description of the drawings
Fig. 1 is a flow chart of the software system of the present invention.
Embodiment
This handy three-dimensional face acquisition system comprises the following steps: (1) hardware setup; (2) software algorithm processing and synthesis.
Preferably, step (1) comprises: selecting one depth camera (such as a Microsoft Kinect), fixing it on the upper edge of a computer monitor, and connecting it to the computer.
Preferably, step (2) comprises the algorithm modules shown in Fig. 1: (2.1) initial three-dimensional head model reconstruction; (2.2) facial feature point localization; (2.3) radial basis function coefficient estimation; (2.4) three-dimensional head model optimization and reconstruction.
Preferably, the goal of step (2.1) is to recover, from the frontal face image captured by the color camera, a smooth approximate three-dimensional head model that satisfies the requirements of animation driving. Specifically, a three-dimensional head database containing more than 100 subjects is first built; the three-dimensional information can be obtained by laser or structured-light scanning, and each sample in the database contains both three-dimensional structure and color information. Each sample face in the database is then aligned using an optical-flow method, so that the facial shape and texture information of each sample can be represented by the following vectors, respectively:
F = (X_1, Y_1, Z_1, ..., X_n, Y_n, Z_n)^T
T = (R_1, G_1, B_1, ..., R_n, G_n, B_n)^T
Suppose the database contains m samples in total. A face to be reconstructed can then be expressed as

F_new = Σ_{i=1}^{m} a_i · F_i,    T_new = Σ_{i=1}^{m} b_i · T_i,

where Σ_{i=1}^{m} a_i = 1 and Σ_{i=1}^{m} b_i = 1. The linear combination coefficients can be estimated with a 3D Morphable Model; for details see V. Blanz, T. Vetter. A Morphable Model For The Synthesis Of 3D Faces. SIGGRAPH, 1999. This yields the initial three-dimensional head model, represented as a three-dimensional mesh comprising the vertices, the connectivity between vertices, and the texture value at each vertex; the vertex connectivity of this mesh satisfies the requirements of animation driving.
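As an illustration of step (2.1), the following minimal NumPy sketch assembles a new face as a linear combination of database samples once the combination coefficients have been estimated; the arrays shape_db and texture_db and the coefficient vectors a and b are hypothetical inputs, and the coefficient estimation itself (the 3D Morphable Model fit of Blanz and Vetter) is not shown.

```python
import numpy as np

def combine_3dmm(shape_db, texture_db, a, b):
    """Linearly combine database faces into a new face.

    shape_db   : (m, 3n) array, each row is (X_1, Y_1, Z_1, ..., X_n, Y_n, Z_n)
    texture_db : (m, 3n) array, each row is (R_1, G_1, B_1, ..., R_n, G_n, B_n)
    a, b       : (m,) coefficient vectors, each required to sum to 1
    Returns the shape vector F_new and texture vector T_new of the new face.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # Normalize so the combination coefficients sum to 1, as required above.
    a = a / a.sum()
    b = b / b.sum()
    new_shape = a @ shape_db      # (3n,) vector F_new
    new_texture = b @ texture_db  # (3n,) vector T_new
    return new_shape, new_texture
```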
Preferably, step (2.2) determines l facial feature points from the mesh vertices of the initial three-dimensional model obtained in step (2.1), using their semantic information together with their projections onto the image plane. These feature points are generally located at corners and edges where the brightness of the frontal face changes sharply, such as the eye contours, pupil centers, nostril wings, and lip contours. The positions of these l feature points in the corresponding face region of the color image are then extracted using the Active Shape Model (ASM) technique (for details see T. Cootes, C. Taylor, D. Cooper, J. Graham. Active shape models - their training and application. Computer Vision and Image Understanding, 1995, 61(1): 38-59.).
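Before the coefficients in step (2.3) can be estimated, each feature point detected in the color image must be lifted to a three-dimensional coordinate using the depth image registered to the color image. The following sketch shows the standard pinhole back-projection, assuming such a registered depth map and the color-camera intrinsics fx, fy, cx, cy are available; these names are illustrative and not prescribed by the patent.

```python
import numpy as np

def backproject_landmarks(landmarks_2d, depth_aligned, fx, fy, cx, cy):
    """Lift 2D feature points (u, v) in the color image to 3D camera coordinates.

    landmarks_2d  : (l, 2) array of pixel coordinates in the color image
    depth_aligned : (H, W) depth map registered to the color image, in meters
    fx, fy, cx, cy: pinhole intrinsics of the color camera
    Returns an (l, 3) array of measured 3D points p_i'.
    """
    points = []
    for u, v in landmarks_2d:
        z = float(depth_aligned[int(round(v)), int(round(u))])  # depth at the landmark pixel
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points.append((x, y, z))
    return np.asarray(points)
```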
Preferably, step (2.3) first uses the feature points obtained in step (2.2) and the correspondence between depth image and color image pixels to compute, for each feature point i, the corresponding three-dimensional coordinate p_i'. Let p_i be the three-dimensional coordinate of the corresponding vertex of feature point i in the initial head model. The goal of this step is to construct a radial function s(x) such that s(p_i) = p_i' for every p_i. Assuming there are l facial feature points, we define:

s(x) = Σ_{i=1}^{l} c_i · Φ(‖x − p_i‖) + M·x + t     (1)

where ‖x − p_i‖ is the Euclidean distance from point x to p_i, Φ(·) is a Gaussian basis function (of the form Φ(r) = exp(−r^2/σ^2) for some width parameter σ), c_i is the coefficient of the corresponding basis function, and M·x + t is the affine part, in which M is a 3 × 3 matrix and t is a 3 × 1 vector. The goal is to compute c_i, M and t, which is achieved by substituting the l facial feature points into equation (1) and solving the resulting system of linear equations.
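As an illustrative sketch of step (2.3), the code below fits the coefficients c_i, M and t of equation (1) with NumPy. The interpolation conditions s(p_i) = p_i' alone leave the affine part underdetermined, so the sketch adds the customary radial-basis-function side conditions (the c_i sum to zero and are orthogonal to the feature-point coordinates) to obtain a square linear system; this choice, and the Gaussian width sigma, are assumptions for illustration rather than details fixed by the patent.

```python
import numpy as np

def gaussian_rbf(r, sigma):
    """Gaussian basis Phi(r) = exp(-r^2 / sigma^2)."""
    return np.exp(-(r / sigma) ** 2)

def solve_rbf(p, p_prime, sigma=0.05):
    """Fit s(x) = sum_i c_i Phi(||x - p_i||) + M x + t so that s(p_i) = p_i'.

    p       : (l, 3) feature-point coordinates on the initial model
    p_prime : (l, 3) measured 3D coordinates from the depth sensor
    Returns (C, M, t) with C of shape (l, 3), M of shape (3, 3), t of shape (3,).
    """
    l = p.shape[0]
    dists = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)  # (l, l) pairwise distances
    Phi = gaussian_rbf(dists, sigma)
    P = np.hstack([p, np.ones((l, 1))])                             # (l, 4) affine monomials [x, y, z, 1]
    # Square system with side conditions sum_i c_i = 0 and sum_i c_i p_i^T = 0.
    A = np.zeros((l + 4, l + 4))
    A[:l, :l] = Phi
    A[:l, l:] = P
    A[l:, :l] = P.T
    rhs = np.zeros((l + 4, 3))
    rhs[:l] = p_prime
    sol = np.linalg.solve(A, rhs)
    C = sol[:l]                     # basis-function coefficients c_i (one row per feature point)
    M = sol[l:l + 3].T              # 3 x 3 affine matrix
    t = sol[l + 3]                  # translation vector
    return C, M, t
```

With l feature points this is an (l + 4) × (l + 4) system, small enough to solve directly in one call.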
Preferably, step (2.4) evaluates equation (1), with the coefficients computed in step (2.3), at each of the n vertices p_k of the initial head three-dimensional model to obtain s(p_k), and readjusts the three-dimensional coordinates of the n vertices accordingly, thereby obtaining the final three-dimensional head model.
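Continuing the illustrative sketch, step (2.4) then simply evaluates the fitted function s(x) at every vertex of the initial mesh; the coefficients C, M, t are assumed to come from the fitting sketch above, and the same Gaussian width sigma must be reused.

```python
import numpy as np

def apply_rbf(vertices, p, C, M, t, sigma=0.05):
    """Evaluate s(x) from equation (1) at every vertex of the initial model.

    vertices : (n, 3) vertices of the initial head model
    p        : (l, 3) feature points used to fit the RBF (same as in solve_rbf)
    C, M, t  : coefficients returned by the fitting step (2.3)
    Returns the (n, 3) adjusted vertex positions of the final head model.
    """
    dists = np.linalg.norm(vertices[:, None, :] - p[None, :, :], axis=-1)  # (n, l)
    radial = np.exp(-(dists / sigma) ** 2) @ C   # sum_i c_i * Phi(||x - p_i||)
    affine = vertices @ M.T + t                  # affine part M x + t
    return radial + affine
```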
The above is only a preferred embodiment of the present invention and does not limit the present invention in any form. Any simple modification, equivalent variation, or alteration made to the above embodiment in accordance with the technical spirit of the present invention still falls within the protection scope of the technical solution of the present invention.

Claims (8)

1. A handy three-dimensional head model generation system, characterized in that it can conveniently reconstruct a three-dimensional head model in a common indoor environment without strict requirements on the lighting conditions, and the obtained three-dimensional model can be used for applications such as virtual reality and personalized animation.
2. The handy three-dimensional head model generation system according to claim 1, characterized in that the system uses a depth sensor device, wherein:
(1) the depth sensor device comprises a synchronized color camera;
(2) the depth sensor device provides the correspondence between depth image pixels and color image pixels.
3. The handy three-dimensional head model generation system according to claim 1, characterized in that data acquisition is completed instantaneously, similar to taking a photograph.
4. The depth sensor device according to claim 2, characterized in that a Microsoft Kinect sensor or an Intel RealSense sensor may be used.
5. The handy three-dimensional head model generation system according to claim 1, characterized in that the software portion of the system comprises the following steps:
(1) reconstructing an initial three-dimensional head model from the frontal face image captured by the color camera;
(2) extracting facial feature points from the frontal face image;
(3) estimating the coefficients of a radial basis function from the initial three-dimensional head model, the facial feature points, and the depth image obtained by the depth sensor;
(4) optimizing and reconstructing the initial three-dimensional head model based on the radial basis function.
6. The step of reconstructing an initial three-dimensional head model from the frontal face image captured by the color camera according to claim 5, characterized in that the initial three-dimensional head model is obtained using a deformation-model-based technique (3D Morphable Model).
7. The step of extracting facial feature points from the frontal face image according to claim 5, characterized in that the positions of the facial feature points in the color image are extracted using the Active Shape Model (ASM) technique.
8. The software algorithm portion according to claim 5, characterized in that the initial three-dimensional head model is optimized and adjusted using the depth information of the depth map data to obtain a more realistic three-dimensional head model.
CN201410752439.XA 2014-12-11 2014-12-11 Handy three-dimensional head model generation system Pending CN104376599A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410752439.XA CN104376599A (en) 2014-12-11 2014-12-11 Handy three-dimensional head model generation system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410752439.XA CN104376599A (en) 2014-12-11 2014-12-11 Handy three-dimensional head model generation system

Publications (1)

Publication Number Publication Date
CN104376599A true CN104376599A (en) 2015-02-25

Family

ID=52555488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410752439.XA Pending CN104376599A (en) 2014-12-11 2014-12-11 Handy three-dimensional head model generation system

Country Status (1)

Country Link
CN (1) CN104376599A (en)


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107256375A (en) * 2017-01-11 2017-10-17 西南科技大学 Human body sitting posture monitoring method before a kind of computer
CN107507269A (en) * 2017-07-31 2017-12-22 广东欧珀移动通信有限公司 Personalized three-dimensional model generating method, device and terminal device
CN108628448A (en) * 2018-04-12 2018-10-09 Oppo广东移动通信有限公司 Bright screen method, apparatus, mobile terminal and storage medium
US11537696B2 (en) 2018-04-12 2022-12-27 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and apparatus for turning on screen, mobile terminal and storage medium
CN108648280A (en) * 2018-04-25 2018-10-12 深圳市商汤科技有限公司 virtual role driving method and device, electronic equipment and storage medium
CN108648280B (en) * 2018-04-25 2023-03-31 深圳市商汤科技有限公司 Virtual character driving method and device, electronic device and storage medium
US10789784B2 (en) 2018-05-23 2020-09-29 Asustek Computer Inc. Image display method, electronic device, and non-transitory computer readable recording medium for quickly providing simulated two-dimensional head portrait as reference after plastic operation
US11120624B2 (en) 2018-05-23 2021-09-14 Asustek Computer Inc. Three-dimensional head portrait generating method and electronic device
CN109064551B (en) * 2018-08-17 2022-03-25 联想(北京)有限公司 Information processing method and device for electronic equipment
CN109064551A (en) * 2018-08-17 2018-12-21 联想(北京)有限公司 The information processing method and device of electronic equipment
CN109949412A (en) * 2019-03-26 2019-06-28 腾讯科技(深圳)有限公司 A kind of three dimensional object method for reconstructing and device
CN112561784A (en) * 2020-12-17 2021-03-26 咪咕文化科技有限公司 Image synthesis method, image synthesis device, electronic equipment and storage medium
CN112561784B (en) * 2020-12-17 2024-04-09 咪咕文化科技有限公司 Image synthesis method, device, electronic equipment and storage medium
CN115294301A (en) * 2022-08-11 2022-11-04 广州沃佳科技有限公司 Head model construction method, device, equipment and medium based on face image

Similar Documents

Publication Publication Date Title
CN108154550B (en) RGBD camera-based real-time three-dimensional face reconstruction method
CN104376599A (en) Handy three-dimensional head model generation system
US10846903B2 (en) Single shot capture to animated VR avatar
CN106803267B (en) Kinect-based indoor scene three-dimensional reconstruction method
CN105427385B (en) A kind of high-fidelity face three-dimensional rebuilding method based on multilayer deformation model
CN104992441B (en) A kind of real human body three-dimensional modeling method towards individualized virtual fitting
ES2693028T3 (en) System and method for deriving accurate body size measurements from a sequence of 2D images
US9361723B2 (en) Method for real-time face animation based on single video camera
CN105210093B (en) Apparatus, system and method for capturing and displaying appearance
CN104376596B (en) A kind of three-dimensional scene structure modeling and register method based on single image
US20150243035A1 (en) Method and device for determining a transformation between an image coordinate system and an object coordinate system associated with an object of interest
CN109816784B (en) Method and system for three-dimensional reconstruction of human body and medium
US11790610B2 (en) Systems and methods for selective image compositing
WO2019219014A1 (en) Three-dimensional geometry and eigencomponent reconstruction method and device based on light and shadow optimization
WO2021078179A1 (en) Image display method and device
CN109903377A (en) A kind of three-dimensional face modeling method and system without phase unwrapping
Khilar et al. 3D image reconstruction: Techniques, applications and challenges
CN108629828B (en) Scene rendering transition method in the moving process of three-dimensional large scene
Remondino et al. Human figure reconstruction and modeling from single image or monocular video sequence
Azevedo et al. An augmented reality virtual glasses try-on system
CN112365589B (en) Virtual three-dimensional scene display method, device and system
CN109218706A (en) A method of 3 D visual image is generated by single image
Ogawa et al. Occlusion handling in outdoor augmented reality using a combination of map data and instance segmentation
Straka et al. Rapid skin: estimating the 3D human pose and shape in real-time
KR20160039447A (en) Spatial analysis system using stereo camera.

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20150225)