CN107945267A - Method and apparatus for texture fusion of a three-dimensional face model - Google Patents

Method and apparatus for texture fusion of a three-dimensional face model

Info

Publication number
CN107945267A
CN107945267A (application CN201711328399.6A)
Authority
CN
China
Prior art keywords
texture
triangular patch
face
camera
three-dimensional model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711328399.6A
Other languages
Chinese (zh)
Other versions
CN107945267B (en)
Inventor
傅可人
荆海龙
熊伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Chuanda Zhisheng Software Co Ltd
Wisesoft Co Ltd
Original Assignee
Sichuan Chuanda Zhisheng Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Chuanda Zhisheng Software Co Ltd filed Critical Sichuan Chuanda Zhisheng Software Co Ltd
Priority to CN201711328399.6A priority Critical patent/CN107945267B/en
Publication of CN107945267A publication Critical patent/CN107945267A/en
Application granted granted Critical
Publication of CN107945267B publication Critical patent/CN107945267B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a method and apparatus for texture fusion of a three-dimensional face model, which makes the fused result transition more naturally and avoids the uneven, unnatural transitions that tend to appear on both sides of the nose during texture mapping. The method includes: inputting three-dimensional face model data; inputting face texture images captured from several different viewing angles; performing color correction on the face texture images; computing the visibility of each triangular patch in each texture camera; computing the texture weight of each triangular patch with respect to each texture camera; correcting the texture weights, with respect to the frontal texture camera, of the triangular patches lying in a predetermined region of the three-dimensional face model; smoothing and normalizing the texture weights with respect to each texture camera; and performing texture fusion according to the texture weight of each triangular patch with respect to each texture camera to obtain the three-dimensional face texture.

Description

Method and apparatus for texture fusion of a three-dimensional face model
Technical field
The present invention relates to the technical field of computer vision, and more particularly to a method and apparatus for texture fusion of a three-dimensional face model.
Background technology
Three-dimensional modeling has important research significance and application value in industrial design, art design, architectural design, spatial measurement, and other fields. Three-dimensional modeling of the human face, in particular, has wide applications in the biometrics field, including building three-dimensional face databases and three-dimensional face recognition. In general, three-dimensional modeling of an object consists of two parts: modeling the three-dimensional surface and texture mapping. The former has received extensive attention in the computer vision community, while the latter, texture mapping, is an important but relatively neglected direction. Texture mapping concerns how to compute a good texture image and map it onto the surface of the three-dimensional model. Multi-view texture fusion and mapping are essential to three-dimensional modeling and rendering, and the quality of the result directly affects the realism of the three-dimensional model.
The essence of the texture fusion problem is how to merge texture fragments. When the three-dimensional model and the texture camera parameters are known, there is a one-to-one correspondence between points on the model surface and coordinates on each texture camera's image; in other words, texture fragments on a texture image can be back-projected onto the three-dimensional model. With multiple texture cameras, texture fragments from different cameras map to the same part of the surface, producing overlapping information, and these fragments may differ in illumination, shading, reflection, and other characteristics, so the texture fragments must be fused.
Related methods and patents on texture fusion and mapping are reviewed below. Lempitsky et al. proposed "Seamless Mosaicing of Image-Based Texture Maps" in 2007, which performs texture fusion by "optimal seam" mosaicing. It uses image segmentation to partition the object surface into regions and then removes the texture seams introduced by the optimal seams with a Markov Random Field (MRF), smoothing the surface color of the model. However, geometric objects with complex topology rarely yield the desired smooth fusion parameters, so a small number of fine cracks remain on the textured model surface. Gal et al. proposed "Seamless Montage for Texturing Models" in 2010, which uses an MRF formulation that accounts for inaccurate texture-triangle mappings. Although it tolerates deviations in the texture fragment mapping, the MRF optimization is quite time-consuming, and the method is not practical when the texture camera mapping is already accurate. In addition, neither of these existing methods is specifically designed for face texture fusion under uneven ambient illumination.
Chinese patent application No. 201511025408.5 discloses "a high-fidelity full-face texture fusion method for a three-dimensional full-face camera". Although it can achieve high-fidelity texture fusion under ideal conditions, the texture weights it computes are not suited to real natural environments in which the surface may be occluded, and it always uses linear weighting when fusing the texture of all regions. When the texture cameras are unevenly illuminated (for example, when a flash makes the face brighter while the background is dark), artificial shadows are easily introduced on the sides of the face, so the fusion result transitions unevenly and unnaturally.
Summary of the invention
One object of the present invention is, at least, to overcome the above problems of the prior art by providing a method and apparatus for texture fusion of a three-dimensional face model, so that the fused result transitions more naturally and the uneven, unnatural transitions that tend to appear on both sides of the nose during texture mapping are avoided.
To achieve these goals, the technical solution adopted by the present invention includes the following aspects.
A method for texture fusion of a three-dimensional face model, comprising:
inputting three-dimensional face model data and obtaining the three-dimensional point cloud index data of each triangular patch on the model surface; inputting face texture images captured from several different viewing angles, and obtaining the mapping data of the three-dimensional face model on the face texture images according to the corresponding texture camera parameters; performing color correction on the face texture images; computing the visibility of each triangular patch in each texture camera; computing the texture weight of each triangular patch with respect to each texture camera; correcting the texture weights, with respect to the frontal texture camera, of the triangular patches lying in a predetermined region of the three-dimensional face model; smoothing and normalizing the texture weights with respect to each texture camera; and performing texture fusion according to the texture weight of each triangular patch with respect to each texture camera to obtain the three-dimensional face texture.
Preferably, the three-dimensional face model data include the three-dimensional point cloud data of the face model; the method further includes reconstructing the three-dimensional point cloud according to the three-dimensional coordinates of the three vertices of each triangular patch on the model surface, thereby obtaining the three-dimensional point cloud index data of each triangular patch.
Preferably, the texture camera parameters include the three-dimensional coordinates of each texture camera's optical center relative to the three-dimensional face model.
Preferably, the color correction includes applying white balance and brightness normalization to each face texture image, so that the color difference between the texture images captured by the cameras at different viewing angles is below a predetermined threshold and their brightness is consistent.
Preferably, determining the texture weights includes: traversing each triangular patch, computing its normal vector, and computing the angle θ_j^i between this normal vector and the line connecting the patch center to each texture camera's optical center, where the subscript j denotes the j-th triangular patch, i denotes the corresponding texture camera, and the number of texture cameras is at least three;
for the j-th triangular patch, combining the angle θ_j^i with the corresponding visibility to obtain the texture weight w_j^i for each texture camera i.
Preferably, the predetermined region is: the region on either side of the plane determined by the midline of the three-dimensional face model and the optical axis of the frontal texture camera, within a distance of at most r, with r = 50 mm;
the correction includes: setting the texture weight of each triangular patch in the predetermined region with respect to the frontal texture camera to 1, and setting that patch's weights with respect to the other texture cameras to 0, where the subscript j denotes the corresponding index of a triangular patch located in the predetermined region.
Preferably, the smoothing includes: traversing each triangular patch, determining the set M of its neighboring triangular patches on the three-dimensional face model, and updating the texture weight w_j^i by averaging the weights over the neighborhood M, where |M| is the number of neighboring triangular patches;
the normalization includes: traversing each triangular patch and normalizing its texture weights so that, for each triangular patch, the weights of the corresponding texture cameras sum to 1.
Preferably, the texture fusion includes: traversing each triangular patch, applying an affine transformation to the patch under each texture camera's viewing angle, determining the texture triangle T_j^i from the face texture image captured by the corresponding texture camera, computing the fused texture triangle T_j by the weighted summation of the texture triangles T_j^i, and mapping the fused texture triangle T_j onto the corresponding triangular patch of the three-dimensional face model to obtain the three-dimensional face texture.
Preferably, computing the visibility of each triangular patch in each texture camera includes: constructing a two-dimensional record matrix and initializing every element to infinity; computing the two-dimensional projected triangle of each triangular patch in each texture camera; computing the distance from the center of each triangular patch to the optical center of each texture camera; updating the values of the record matrix elements according to these distances and the projected triangles; and determining the visibility of each triangular patch in each texture camera from the values of the record matrix elements.
An apparatus for texture fusion of a three-dimensional face model, comprising at least one processor and a memory communicatively connected to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform any of the foregoing methods.
In conclusion by adopting the above-described technical solution, the present invention at least has the advantages that:
It is weight computing with robust, smooth by image preprocessing, color is carried out to various visual angles texture camera image and is rectified Just, with reference to the observability of threedimensional model surface tri patch, the texture of optimal texture camera is taken to carry out textures so that grain table The more natural transition of result afterwards, and specially treated is carried out to the face intermediate region at the position such as including nose, directly take just The texture image of face texture camera carries out textures, and the transition that wing of nose both sides easily occur when avoiding texture mapping is uneven unnatural Phenomenon.
Brief description of the drawings
Fig. 1 is a flow diagram of the method for texture fusion of a three-dimensional face model according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a three-dimensional face model according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of multi-view texture cameras and the corresponding texture images according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of the angle between the normal vector of a triangular patch and the line connecting the patch center to a texture camera's optical center according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of the predetermined region on the three-dimensional face model according to an embodiment of the present invention.
Fig. 6 is an example of the fused three-dimensional face texture according to an embodiment of the present invention.
Fig. 7 is a flow diagram of computing the visibility of each triangular patch in each texture camera according to an embodiment of the present invention.
Fig. 8 is a structural diagram of the apparatus for texture fusion of a three-dimensional face model according to an embodiment of the present invention.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings and embodiments, so that its purpose, technical solution, and advantages are more clearly understood. It should be appreciated that the specific embodiments described herein are intended only to explain the present invention, not to limit it.
The method provided by the embodiments of the present invention performs realistic three-dimensional face texture mapping after a three-dimensional face model has been obtained by image-based three-dimensional reconstruction (including binocular vision, depth measurement, and the like). The method applies color correction to the multi-view texture camera images and, combining the visibility of the triangular patches on the model surface with the angle between each patch's normal vector and the line from the patch to each texture camera's optical center, determines the texture weights required for fusion. The texture weights of the central, frontal texture camera are then corrected, and the texture weights are smoothed over the three-dimensional surface to eliminate the transition traces of texture stitching.
Fig. 1 shows a flow diagram of the method for texture fusion of a three-dimensional face model according to an embodiment of the present invention. The method comprises the following steps:
Step 101: input the three-dimensional face model data and obtain the three-dimensional point cloud index data of each triangular patch on the model surface.
The three-dimensional face model (as shown in Fig. 2) can be obtained by reading a pre-stored model or by separately running a face three-dimensional surface modeling process. The input data include the three-dimensional point cloud data of the face model, such as the three-dimensional coordinates of each point and the total number of points in the cloud. The three-dimensional point cloud can further be reconstructed according to the three-dimensional coordinates of the three vertices of each triangular patch on the model surface, yielding the three-dimensional point cloud index data of each triangular patch.
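As a concrete illustration of the point-cloud index data, the following sketch (an assumption for illustration, not code from the patent) stores the model as an N×3 vertex array plus an M×3 array of vertex indices, one row per triangular patch, and derives each patch's center and normal vector from those indices, which the later steps rely on.

```python
import numpy as np

def patch_centers_and_normals(verts, tris):
    """verts: (N, 3) point cloud; tris: (M, 3) integer indices into verts,
    one row of three vertex indices per triangular patch."""
    v0, v1, v2 = verts[tris[:, 0]], verts[tris[:, 1]], verts[tris[:, 2]]
    centers = (v0 + v1 + v2) / 3.0                      # centroid of each patch
    normals = np.cross(v1 - v0, v2 - v0)                # face normals
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12
    return centers, normals

# Toy example with a single triangular patch
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tris = np.array([[0, 1, 2]])
centers, normals = patch_centers_and_normals(verts, tris)   # normal is [0, 0, 1]
```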
Step 102: input the face texture images captured from several different viewing angles, and obtain the mapping data of the three-dimensional face model on the face texture images according to the corresponding texture camera parameters.
For example, I_0, I_{+1}, I_{-1}, ... may denote the input face texture images, captured by the corresponding texture cameras, where the subscript "0" denotes the frontal texture camera facing the face, "+1" denotes the first texture camera to its right, "-1" denotes the first camera to its left, and "+2", "-2", and so on denote additional face texture images and texture cameras. Fig. 3 shows a schematic diagram in which three texture cameras capture face texture images from three different viewing angles; the embodiments below are described in detail on this basis. The texture camera parameters include the three-dimensional coordinates of each texture camera's optical center O_{-1}, O_0, O_{+1} relative to the three-dimensional face model. From the texture camera parameters and the three-dimensional point cloud, the two-dimensional image coordinates to which each three-dimensional point on the model maps in each texture camera can be obtained, and the texture pixel values at those coordinates can further be read from the face texture images I_0, I_{+1}, I_{-1}. The step of reading the texture pixel values may also be performed after the following steps.
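The disclosure does not spell out the camera model behind this 3D-to-2D mapping; the sketch below assumes a standard pinhole model with an intrinsic matrix K and extrinsics R, t per texture camera, which is one common way to obtain the image coordinates of the model's three-dimensional points.

```python
import numpy as np

def project_points(points, K, R, t):
    """Project (N, 3) model points into one texture camera (assumed pinhole model).

    K: (3, 3) intrinsic matrix; R: (3, 3) rotation and t: (3,) translation taking
    model coordinates to camera coordinates. Returns (N, 2) pixel coordinates
    and the (N,) depths along the optical axis.
    """
    cam = points @ R.T + t                # model frame -> camera frame
    uvw = cam @ K.T                       # apply intrinsics
    uv = uvw[:, :2] / uvw[:, 2:3]         # perspective division
    return uv, cam[:, 2]

# Toy usage: identity pose, unit focal length
uv, depth = project_points(np.array([[0.1, 0.2, 2.0]]), np.eye(3), np.eye(3), np.zeros(3))
```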
Step 103: perform color correction on the face texture images.
Specifically, white balance and brightness normalization can be applied to each of the face texture images I_0, I_{+1}, I_{-1}, so that the color difference between the texture images captured at different viewing angles is below a predetermined threshold (for example, 0.041) and their brightness is consistent (for example, differs from the mean brightness by less than 2).
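The patent names white balance and brightness normalization but no specific algorithm; purely as an assumed illustration, the sketch below applies a gray-world white balance to each image and rescales all images to a common mean brightness.

```python
import numpy as np

def color_correct(images, target_brightness=None):
    """Gray-world white balance + mean-brightness normalization (illustrative only).

    images: list of float arrays of shape (H, W, 3) with values in [0, 255].
    """
    corrected = []
    for img in images:
        # white balance: scale each channel so its mean matches the gray mean
        channel_means = img.reshape(-1, 3).mean(axis=0)
        corrected.append(img * (channel_means.mean() / (channel_means + 1e-6)))
    # brightness normalization: bring every image to the same mean brightness
    means = [im.mean() for im in corrected]
    target = target_brightness if target_brightness is not None else float(np.mean(means))
    return [np.clip(im * (target / m), 0, 255) for im, m in zip(corrected, means)]
```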
Step 104: compute the visibility of each triangular patch in each texture camera.
The visibility v_j^i of each triangular patch in each texture camera can be obtained from the distance between each patch center and each texture camera's optical center together with the patch's two-dimensional projected triangle in that camera, where the subscript j denotes the j-th triangular patch and i denotes the corresponding texture camera (i ∈ {0, -1, 1, -2, 2, ...}). For example, v_j^i can be assigned so that v_j^i = 1 means the patch is visible and v_j^i = 0 means it is invisible. The calculation of v_j^i is described in detail below with reference to the embodiment of Fig. 7.
Step 105: compute the texture weight of each triangular patch with respect to each texture camera.
The texture weights can be determined from the angle between each triangular patch's normal vector and the line connecting the patch center to each texture camera's optical center, together with the corresponding visibility. Specifically, each triangular patch is traversed first, its normal vector is computed, and the angle θ_j^i between this normal vector and the line from the patch center to each texture camera's optical center is computed (as shown in Fig. 4), where the subscript j denotes the j-th triangular patch and i denotes the corresponding texture camera (i ∈ {0, -1, 1, -2, 2, ...}). Further, for the j-th triangular patch, the angle θ_j^i is combined with the corresponding visibility to obtain the texture weight w_j^i for each texture camera i.
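The exact weighting expression is not reproduced in the text (the original formula appears only as an image in the patent); one natural reading, used as an assumption in the sketch below, is to weight each patch by the cosine of the angle θ_j^i between its normal and the direction to the camera's optical center, gated by the visibility v_j^i.

```python
import numpy as np

def texture_weights(centers, normals, cam_centers, visibility):
    """Angle-based weights (assumed form): w[j, i] = v[j, i] * max(0, cos(theta[j, i])).

    centers, normals: (M, 3) patch centers and unit normals;
    cam_centers: (C, 3) optical centers; visibility: (M, C) in {0, 1}.
    """
    # unit vectors from each patch center to each camera optical center: (M, C, 3)
    to_cam = cam_centers[None, :, :] - centers[:, None, :]
    to_cam /= np.linalg.norm(to_cam, axis=2, keepdims=True) + 1e-12
    cos_theta = np.einsum('mk,mck->mc', normals, to_cam)   # cos of theta[j, i]
    return visibility * np.clip(cos_theta, 0.0, None)
```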
Step 106: correct the texture weights, with respect to the frontal texture camera, of the triangular patches in the predetermined region of the three-dimensional face model.
The predetermined region is: the region on either side of the plane determined by the midline of the three-dimensional face model and the optical axis of the frontal texture camera, within a distance of at most r (for example, r = 50 mm); the region is illustrated in Fig. 5. The correction specifically includes: setting the texture weight of each triangular patch in this region with respect to the frontal texture camera to 1, e.g. w_j^0 = 1, and setting that patch's weights with respect to the other texture cameras to 0, e.g. w_j^{+1} = w_j^{-1} = 0, where the subscript j denotes the corresponding index of a triangular patch located in the predetermined region. In particular, in a rotation-corrected coordinate system as shown in Fig. 5, where the z-axis is the optical axis direction of the middle texture camera, the x-axis is the horizontal direction, and the y-axis is orthogonal to the z-axis and the x-axis, the midline of the three-dimensional face model along the texture camera's optical axis can be obtained as the mean x coordinate of the model's three-dimensional point cloud. Through the above steps, the central part of the three-dimensional face texture (the nose region) is taken from the texture camera facing the face.
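Read in the rotation-corrected coordinate system of Fig. 5, the correction overrides the weights of every patch whose center lies within r of the midline plane. The sketch below follows that reading; the assumption that the frontal camera is stored at index 0 mirrors the I_0 notation above and is not mandated by the patent.

```python
import numpy as np

def correct_frontal_region(centers, weights, r=50.0, frontal_idx=0):
    """Force the frontal camera's texture inside the central face strip.

    centers: (M, 3) patch centers in the rotation-corrected frame (x horizontal,
    z along the frontal camera's optical axis); weights: (M, C) texture weights.
    """
    midline_x = centers[:, 0].mean()                 # mean x, standing in for the point-cloud mean
    in_region = np.abs(centers[:, 0] - midline_x) <= r
    weights = weights.copy()
    weights[in_region, :] = 0.0                      # zero out the side cameras
    weights[in_region, frontal_idx] = 1.0            # frontal camera weight = 1
    return weights
```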
Step 107: smooth and normalize the texture weights with respect to each texture camera.
The smoothing includes: traversing each triangular patch, determining the set M of its neighboring triangular patches on the three-dimensional face model, and updating the texture weight w_j^i by averaging the weights over M. For example, for a given patch, the set M of its 500 nearest neighboring triangular patches on the three-dimensional model can be found, so that in this embodiment the cardinality |M| = 500, and the texture weights are smoothed accordingly.
The normalization includes: traversing each triangular patch and normalizing its texture weights so that, for each triangular patch, the weights of the corresponding texture cameras sum to 1, i.e. each weight is divided by the sum of that patch's weights over all texture cameras. Smoothing and normalizing the texture weights effectively eliminates texture stitching traces.
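The smoothing and normalization formulas are likewise not reproduced in the text; the sketch below averages each patch's weights over a precomputed neighbor list (standing in for the 500-nearest-patch sets M of this embodiment) and then rescales each patch's weights to sum to 1, which matches the stated goal of the normalization.

```python
import numpy as np

def smooth_and_normalize(weights, neighbors):
    """Neighborhood smoothing followed by per-patch normalization (assumed forms).

    weights: (M, C); neighbors: list of index arrays, neighbors[j] = patches adjacent
    to patch j (e.g. its |M| nearest patches on the mesh, 500 in this embodiment).
    """
    smoothed = np.empty_like(weights)
    for j, nbrs in enumerate(neighbors):
        smoothed[j] = weights[nbrs].mean(axis=0) if len(nbrs) else weights[j]
    sums = smoothed.sum(axis=1, keepdims=True)
    return smoothed / np.where(sums > 0, sums, 1.0)   # each patch's weights now sum to 1
```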
Step 108: perform texture fusion according to the texture weight of each triangular patch with respect to each texture camera, and obtain the three-dimensional face texture.
Specifically, the texture fusion includes: traversing each triangular patch, applying an affine transformation to the patch under each texture camera's viewing angle, determining the texture triangle T_j^i from the face texture image captured by the corresponding texture camera, computing the fused texture triangle T_j by the weighted summation of the texture triangles T_j^i with the weights w_j^i, and mapping the fused texture triangle T_j onto the corresponding triangular patch of the three-dimensional face model to obtain the three-dimensional face texture; the result is shown in Fig. 6.
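Because the per-camera texture triangles are affine-rectified onto the patch before blending, the fusion itself reduces to a weighted sum of aligned samples, T_j = Σ_i w_j^i T_j^i. The sketch below assumes the rectified samples are already available on a common per-patch pixel grid; that preprocessing is not shown.

```python
import numpy as np

def fuse_patch_textures(patch_samples, patch_weights):
    """Blend one patch's texture triangles: T_j = sum_i w_j^i * T_j^i.

    patch_samples: (C, P, 3) pixel samples of the same triangle, already warped
    (affine-rectified) from each of the C texture images onto a common P-pixel grid;
    patch_weights: (C,) smoothed, normalized weights of this patch.
    """
    return np.einsum('c,cpk->pk', patch_weights, patch_samples)

# Toy usage: two cameras, three samples, equal weights -> simple average
samples = np.stack([np.full((3, 3), 100.0), np.full((3, 3), 200.0)])
fused = fuse_patch_textures(samples, np.array([0.5, 0.5]))   # all values 150.0
```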
In the above embodiment, color correction of the texture images keeps the color of the images from the multiple texture cameras unbiased and their brightness consistent, which improves the uniformity of the subsequent texture fusion. The visibility of the triangular patches on the model surface is combined with the normal-vector angles to determine the texture weights required for fusion, the texture weights of the central frontal texture camera are corrected, and the texture weights are further smoothed over the three-dimensional surface to eliminate texture stitching traces. Because the central region of the face around the nose is treated specially, i.e. the texture image of the frontal texture camera is used directly, the uneven and unnatural transitions that tend to appear on both sides of the nose during texture mapping are avoided, and the resulting three-dimensional face texture is mapped naturally and realistically.
Fig. 7 shows the detailed steps of computing the visibility of each triangular patch in each texture camera according to an embodiment of the present invention. These steps may be performed in a particular order, partly repeated, or performed in parallel:
Step 201: construct a two-dimensional record matrix and initialize the value of every element to infinity.
Specifically, each texture camera i is traversed in turn, and a two-dimensional record matrix R_i of the same size as the corresponding texture image is constructed, with all of its elements initialized to +∞. Here "the same size" means that the number of elements n of the record matrix equals the number of pixels n of the texture image, with a one-to-one correspondence, each element having the same two-dimensional coordinates as the corresponding pixel.
Step 202: compute the two-dimensional projected triangle of each triangular patch in each texture camera.
Each triangular patch and each texture camera are traversed in turn, and the two-dimensional projected triangle of the j-th triangular patch in texture camera i is computed; it can be defined by three two-dimensional coordinate points.
Step 203: compute the distance from the center of each triangular patch to the optical center of each texture camera.
Each triangular patch and each texture camera are traversed in turn, and the distance d_j^i from the center of the j-th triangular patch to the optical center O of texture camera i is computed.
Step 204: update the values of the record matrix elements according to the distances from the triangular patch centers to the texture camera optical centers and the two-dimensional projected triangles.
The record matrix elements are updated as follows: each element of the record matrix, each triangular patch, and each texture camera are traversed in turn; if the two-dimensional coordinate l_n of the n-th element of the record matrix R_i lies inside the two-dimensional projected triangle of the j-th patch and the distance d_j^i from the patch center to the texture camera's optical center is smaller than the current value R_n of that element, then R_n is set to d_j^i.
Step 205: determine the visibility of each triangular patch in each texture camera from the values of the record matrix elements.
Specifically, each element of the record matrix, each triangular patch, and each texture camera are traversed in turn. If the two-dimensional coordinate l_n of the n-th element of the record matrix R_i lies inside the two-dimensional projected triangle of the j-th patch and the distance d_j^i from the patch center to the texture camera's optical center is greater than the recorded value, the triangular patch is invisible in that texture camera, i.e. its visibility v_j^i = 0; if the distance is less than or equal to the recorded value, the patch is visible, i.e. v_j^i = 1. The visibility v_j^i of each triangular patch in each texture camera is thereby obtained.
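Steps 201 to 205 amount to a per-camera depth test over a record matrix the size of the texture image. The sketch below is one illustrative reading: it rasterizes each projected triangle at the integer pixel coordinates inside it, keeps the smallest patch-center distance per pixel, and then marks a patch visible only if its own distance does not exceed the recorded value at all of its covered pixels (the patent's per-element wording could also be read per pixel).

```python
import numpy as np

def _inside(px, py, tri2d):
    """Barycentric point-in-triangle test for arrays of pixel coordinates."""
    (x0, y0), (x1, y1), (x2, y2) = tri2d
    d = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
    a = ((y1 - y2) * (px - x2) + (x2 - x1) * (py - y2)) / d
    b = ((y2 - y0) * (px - x2) + (x0 - x2) * (py - y2)) / d
    return (a >= 0) & (b >= 0) & (1.0 - a - b >= 0)

def patch_visibility(tri2d_per_patch, dist_per_patch, image_shape):
    """Visibility of every patch in one texture camera via a record matrix.

    tri2d_per_patch: (M, 3, 2) projected triangles; dist_per_patch: (M,) distances
    from patch centers to the camera's optical center; image_shape: (H, W).
    """
    H, W = image_shape
    record = np.full((H, W), np.inf)                      # step 201
    covered = []
    for tri2d, d in zip(tri2d_per_patch, dist_per_patch): # steps 202-204
        xmin, ymin = np.floor(tri2d.min(axis=0)).astype(int)
        xmax, ymax = np.ceil(tri2d.max(axis=0)).astype(int)
        xs, ys = np.meshgrid(np.arange(max(xmin, 0), min(xmax + 1, W)),
                             np.arange(max(ymin, 0), min(ymax + 1, H)))
        m = _inside(xs, ys, tri2d)
        covered.append((ys[m], xs[m]))
        np.minimum.at(record, (ys[m], xs[m]), d)          # keep the nearest distance
    visible = np.zeros(len(dist_per_patch), dtype=int)    # step 205
    for j, ((yy, xx), d) in enumerate(zip(covered, dist_per_patch)):
        visible[j] = int(yy.size > 0 and np.all(d <= record[yy, xx] + 1e-9))
    return visible
```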
Fig. 8 shows an apparatus for texture fusion of a three-dimensional face model according to an embodiment of the present invention, namely an electronic device 310 (for example, a computer server with program execution capability). It includes at least one processor 311, a power supply 314, and a memory 312 and an input/output interface 313 communicatively connected to the at least one processor 311. The memory 312 stores instructions executable by the at least one processor 311, and the instructions are executed by the at least one processor 311 so that the at least one processor 311 can perform the method disclosed in any of the foregoing embodiments. The input/output interface 313 can include a display, a keyboard, a mouse, and a USB interface for inputting the three-dimensional face model data; the power supply 314 supplies electric power to the electronic device 310.
The above detailed description gives only specific embodiments of the present invention and does not limit the present invention. Various substitutions, modifications, and improvements made by those skilled in the relevant art without departing from the principle and scope of the present invention shall all be included in the protection scope of the present invention.

Claims (10)

  1. A method for texture fusion of a three-dimensional face model, characterized in that the method comprises:
    inputting three-dimensional face model data and obtaining the three-dimensional point cloud index data of each triangular patch on the model surface; inputting face texture images captured from several different viewing angles, and obtaining the mapping data of the three-dimensional face model on the face texture images according to the corresponding texture camera parameters; performing color correction on the face texture images; computing the visibility of each triangular patch in each texture camera; computing the texture weight of each triangular patch with respect to each texture camera; correcting the texture weights, with respect to the frontal texture camera, of the triangular patches lying in a predetermined region of the three-dimensional face model; smoothing and normalizing the texture weights with respect to each texture camera; and performing texture fusion according to the texture weight of each triangular patch with respect to each texture camera to obtain the three-dimensional face texture.
  2. The method according to claim 1, characterized in that the three-dimensional face model data include the three-dimensional point cloud data of the face model; the method further includes reconstructing the three-dimensional point cloud according to the three-dimensional coordinates of the three vertices of each triangular patch on the model surface to obtain the three-dimensional point cloud index data of each triangular patch.
  3. The method according to claim 1, characterized in that the texture camera parameters include the three-dimensional coordinates of each texture camera's optical center relative to the three-dimensional face model.
  4. The method according to claim 1, characterized in that the color correction includes applying white balance and brightness normalization to each face texture image, so that the color difference between the texture images captured at different viewing angles is below a predetermined threshold and their brightness is consistent.
  5. The method according to claim 1, characterized in that determining the texture weights includes: traversing each triangular patch, computing its normal vector, and computing the angle θ_j^i between this normal vector and the line connecting the patch center to each texture camera's optical center, where the subscript j denotes the j-th triangular patch, i denotes the corresponding texture camera, and the number of texture cameras is at least three;
    for the j-th triangular patch, combining the angle θ_j^i with the corresponding visibility to obtain the texture weight w_j^i for each texture camera i.
  6. The method according to claim 5, characterized in that the predetermined region is: the region on either side of the plane determined by the midline of the three-dimensional face model and the optical axis of the frontal texture camera, within a distance of at most r, with r = 50 mm;
    the correction includes: setting the texture weight of each triangular patch in the predetermined region with respect to the frontal texture camera to 1, and setting that patch's weights with respect to the other texture cameras to 0, where the subscript j denotes the corresponding index of a triangular patch located in the predetermined region.
  7. The method according to claim 5, characterized in that the smoothing includes: traversing each triangular patch, determining the set M of its neighboring triangular patches on the three-dimensional face model, and updating the texture weight w_j^i by averaging the weights over the neighborhood M, where |M| is the number of neighboring triangular patches;
    the normalization includes: traversing each triangular patch and normalizing its texture weights so that the weights of the corresponding texture cameras for each triangular patch sum to 1.
  8. The method according to claim 5, characterized in that the texture fusion includes: traversing each triangular patch, applying an affine transformation to the patch under each texture camera's viewing angle, determining the texture triangle T_j^i from the face texture image captured by the corresponding texture camera, computing the fused texture triangle T_j by the weighted summation of the texture triangles T_j^i, and mapping the fused texture triangle T_j onto the corresponding triangular patch of the three-dimensional face model to obtain the three-dimensional face texture.
  9. The method according to claim 1, characterized in that computing the visibility of each triangular patch in each texture camera includes: constructing a two-dimensional record matrix and initializing the value of every element to infinity; computing the two-dimensional projected triangle of each triangular patch in each texture camera; computing the distance from the center of each triangular patch to the optical center of each texture camera; updating the values of the record matrix elements according to these distances and the projected triangles; and determining the visibility of each triangular patch in each texture camera from the values of the record matrix elements.
  10. An apparatus for texture fusion of a three-dimensional face model, characterized in that the apparatus comprises at least one processor and a memory communicatively connected to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the method according to any one of claims 1 to 9.
CN201711328399.6A 2017-12-13 2017-12-13 Method and equipment for fusing textures of three-dimensional model of human face Active CN107945267B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711328399.6A CN107945267B (en) 2017-12-13 2017-12-13 Method and equipment for fusing textures of three-dimensional model of human face

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711328399.6A CN107945267B (en) 2017-12-13 2017-12-13 Method and equipment for fusing textures of three-dimensional model of human face

Publications (2)

Publication Number Publication Date
CN107945267A true CN107945267A (en) 2018-04-20
CN107945267B CN107945267B (en) 2021-02-26

Family

ID=61942882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711328399.6A Active CN107945267B (en) 2017-12-13 2017-12-13 Method and equipment for fusing textures of three-dimensional model of human face

Country Status (1)

Country Link
CN (1) CN107945267B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080018407A (en) * 2006-08-24 2008-02-28 한국문화콘텐츠진흥원 Computer-readable recording medium for recording of 3d character deformation program
CN101958008A (en) * 2010-10-12 2011-01-26 上海交通大学 Automatic texture mapping method in three-dimensional reconstruction of sequence image
CN105550992A (en) * 2015-12-30 2016-05-04 四川川大智胜软件股份有限公司 High fidelity full face texture fusing method of three-dimensional full face camera
CN107346425A (en) * 2017-07-04 2017-11-14 四川大学 A kind of three-D grain photographic system, scaling method and imaging method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
宗智勇: ""多视角三维人脸建模的关键技术研究"", 《中国优秀硕士学位论文全文数据库信息科技辑》 *
魏衍君等: ""一种改进的人脸纹理映射方法"", 《计算机仿真》 *

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458932A (en) * 2018-05-07 2019-11-15 阿里巴巴集团控股有限公司 Image processing method, device, system, storage medium and image scanning apparatus
CN110458932B (en) * 2018-05-07 2023-08-22 阿里巴巴集团控股有限公司 Image processing method, device, system, storage medium and image scanning apparatus
CN108682030A (en) * 2018-05-21 2018-10-19 北京微播视界科技有限公司 Face replacement method, device and computer equipment
CN109118578A (en) * 2018-08-01 2019-01-01 浙江大学 A kind of multiview three-dimensional reconstruction texture mapping method of stratification
CN109271923A (en) * 2018-09-14 2019-01-25 曜科智能科技(上海)有限公司 Human face posture detection method, system, electric terminal and storage medium
US11625896B2 (en) 2018-09-26 2023-04-11 Beijing Kuangshi Technology Co., Ltd. Face modeling method and apparatus, electronic device and computer-readable medium
WO2020063139A1 (en) * 2018-09-26 2020-04-02 北京旷视科技有限公司 Face modeling method and apparatus, electronic device and computer-readable medium
CN111369651A (en) * 2018-12-25 2020-07-03 浙江舜宇智能光学技术有限公司 Three-dimensional expression animation generation method and system
CN110153582A (en) * 2019-05-09 2019-08-23 清华大学 Welding scheme generation method, device and welding system
CN110153582B (en) * 2019-05-09 2020-05-19 清华大学 Welding scheme generation method and device and welding system
CN110232730B (en) * 2019-06-03 2024-01-19 深圳市三维人工智能科技有限公司 Three-dimensional face model mapping fusion method and computer processing equipment
CN110232730A (en) * 2019-06-03 2019-09-13 深圳市三维人工智能科技有限公司 A kind of three-dimensional face model textures fusion method and computer-processing equipment
CN110569768A (en) * 2019-08-29 2019-12-13 四川大学 construction method of face model, face recognition method, device and equipment
CN110569768B (en) * 2019-08-29 2022-09-02 四川大学 Construction method of face model, face recognition method, device and equipment
CN111179210A (en) * 2019-12-27 2020-05-19 浙江工业大学之江学院 Method and system for generating texture map of face and electronic equipment
CN111179210B (en) * 2019-12-27 2023-10-20 浙江工业大学之江学院 Face texture map generation method and system and electronic equipment
CN111192223A (en) * 2020-01-07 2020-05-22 腾讯科技(深圳)有限公司 Method, device and equipment for processing face texture image and storage medium
CN111192223B (en) * 2020-01-07 2022-09-30 腾讯科技(深圳)有限公司 Method, device and equipment for processing face texture image and storage medium
CN111340959A (en) * 2020-02-17 2020-06-26 天目爱视(北京)科技有限公司 Three-dimensional model seamless texture mapping method based on histogram matching
CN111340959B (en) * 2020-02-17 2021-09-14 天目爱视(北京)科技有限公司 Three-dimensional model seamless texture mapping method based on histogram matching
CN111710036A (en) * 2020-07-16 2020-09-25 腾讯科技(深圳)有限公司 Method, device and equipment for constructing three-dimensional face model and storage medium
CN111710036B (en) * 2020-07-16 2023-10-17 腾讯科技(深圳)有限公司 Method, device, equipment and storage medium for constructing three-dimensional face model
CN112257657A (en) * 2020-11-11 2021-01-22 网易(杭州)网络有限公司 Face image fusion method and device, storage medium and electronic equipment
CN112257657B (en) * 2020-11-11 2024-02-27 网易(杭州)网络有限公司 Face image fusion method and device, storage medium and electronic equipment
CN112402973A (en) * 2020-11-18 2021-02-26 芯勍(上海)智能化科技股份有限公司 Model detail judgment method, terminal device and computer readable storage medium
CN113824879A (en) * 2021-08-23 2021-12-21 成都中鱼互动科技有限公司 Scanning device and normal map generation method
CN113838176B (en) * 2021-09-16 2023-09-15 网易(杭州)网络有限公司 Model training method, three-dimensional face image generation method and three-dimensional face image generation equipment
CN113838176A (en) * 2021-09-16 2021-12-24 网易(杭州)网络有限公司 Model training method, three-dimensional face image generation method and equipment
CN114742956B (en) * 2022-06-09 2022-09-13 腾讯科技(深圳)有限公司 Model processing method, device, equipment and computer readable storage medium
CN114742956A (en) * 2022-06-09 2022-07-12 腾讯科技(深圳)有限公司 Model processing method, device, equipment and computer readable storage medium
WO2024082440A1 (en) * 2022-10-20 2024-04-25 中铁第四勘察设计院集团有限公司 Three-dimensional model generation method and apparatus, and electronic device

Also Published As

Publication number Publication date
CN107945267B (en) 2021-02-26

Similar Documents

Publication Publication Date Title
CN107945267A (en) A kind of method and apparatus for human face three-dimensional model grain table
Bernardini et al. Building a digital model of Michelangelo's Florentine Pieta
CN104778694B (en) A kind of parametrization automatic geometric correction method shown towards multi-projection system
Zhou et al. Canny-vo: Visual odometry with rgb-d cameras based on geometric 3-d–2-d edge alignment
CN104463899B (en) A kind of destination object detection, monitoring method and its device
Johnson et al. Shape estimation in natural illumination
CN104330074B (en) Intelligent surveying and mapping platform and realizing method thereof
Beraldin et al. Virtualizing a Byzantine crypt by combining high-resolution textures with laser scanner 3D data
JP4785880B2 (en) System and method for 3D object recognition
CN110570503B (en) Method for acquiring normal vector, geometry and material of three-dimensional object based on neural network
CN108765548A (en) Three-dimensional scenic real-time reconstruction method based on depth camera
CN107358648A (en) Real-time full-automatic high quality three-dimensional facial reconstruction method based on individual facial image
CN105574812B (en) Multi-angle three-dimensional data method for registering and device
CN106709947A (en) RGBD camera-based three-dimensional human body rapid modeling system
CN106875444A (en) A kind of object localization method and device
CN106683068A (en) Three-dimensional digital image acquisition method and equipment thereof
JP2004127239A (en) Method and system for calibrating multiple cameras using calibration object
CN110490917A (en) Three-dimensional rebuilding method and device
CN106204701B (en) A kind of rendering method calculating indirect reference bloom based on light probe interpolation dynamic
CN110473221A (en) A kind of target object automatic scanning system and method
CN106485207A (en) A kind of Fingertip Detection based on binocular vision image and system
CN106500626A (en) A kind of mobile phone stereoscopic imaging method and three-dimensional imaging mobile phone
WO2017106562A1 (en) Devices, systems, and methods for measuring and reconstructing the shapes of specular objects by multiview capture
Beeler et al. Improved reconstruction of deforming surfaces by cancelling ambient occlusion
Arikan et al. Large-scale point-cloud visualization through localized textured surface reconstruction

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant