CN109214350A - Method, apparatus, device and storage medium for determining illumination parameters - Google Patents

Method, apparatus, device and storage medium for determining illumination parameters

Info

Publication number
CN109214350A (application number CN201811108374.XA)
Authority
CN
China
Prior art keywords
face, sampling point, illumination, illumination parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811108374.XA
Other languages
Chinese (zh)
Other versions
CN109214350B
Inventor
董维山
梁兴仑
Current Assignee
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201811108374.XA
Publication of CN109214350A
Application granted
Publication of CN109214350B
Legal status: Active

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V10/40 Extraction of image or video features
                    • G06V10/60 Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
                • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
                    • G06V40/161 Detection; Localisation; Normalisation
                    • G06V40/168 Feature extraction; Face representation

Abstract

Embodiments of the present invention disclose a method, apparatus, device and storage medium for determining illumination parameters. The method comprises: identifying at least one face sampling point in a face region of a target picture; determining a reflection coefficient set matching the face sampling points by a geometric method and/or a training-set method; calculating an illumination tensor set corresponding to the face sampling points according to the pixel information corresponding to the face sampling points and the reflection coefficient set; and determining the illumination parameters corresponding to the target picture according to the illumination tensor set corresponding to the face sampling points. The technical solution of the embodiments places low demands on equipment, can fully exploit the illumination parameters of a picture, and satisfies the requirements for simulating complex illumination, thereby improving the accuracy and versatility of illumination parameter estimation.

Description

Method, apparatus, device and storage medium for determining illumination parameters
Technical field
Embodiments of the present invention relate to the field of image processing technology, and in particular to a method, apparatus, device and storage medium for determining illumination parameters.
Background technique
Illumination parameter estimation (hereinafter, illumination estimation) is the task of obtaining lighting information from a picture; this information can be applied in technical fields such as face recognition and augmented reality. When the illumination parameters are known, a virtual object can be lit, rendering the light-and-shade variation on its surface and its cast shadow. Accurate illumination parameter estimation is therefore crucial for quantitatively describing the illumination characteristics of a video or picture scene.
Existing illumination estimation techniques fall roughly into two classes: methods using non-face probes and methods using face probes. Methods using non-face probes can be further divided as follows. 1) Methods requiring a probe object with known geometry in the scene, such as a regular cube or a mirror-reflective sphere. 2) Methods requiring manual calibration of a special reference region as a probe, such as a pixel region along an object boundary whose normal lies in the image plane, or, in an image containing shadows, the position of a vertical object and its shadow line. 3) Methods using panoramic sample pictures for illumination estimation: panoramic pictures annotated with illumination (the position of the sun) serve as training samples for a deep convolutional neural network that predicts parameters such as sun position, atmospheric turbidity and camera position. 4) Methods that directly acquire light source information, such as using an additional fisheye lens to directly capture the positions of the light sources inside a film studio. 5) Methods using an RGB-D camera to simultaneously capture depth information: while performing simultaneous localization and mapping, specular reflection regions are detected to estimate discrete light source positions and intensities. 6) Simple estimation of the mean ambient light intensity, using the average pixel luminance as the estimate. Methods using face probes mainly include the following two classes: 1) Methods based on frontal face pictures: sampled pixels of frontal face pictures with ground-truth illumination annotations serve as training samples to train an illumination parameter model described by linear equations. 2) Methods based on face depth information: 3D face model data is acquired with a phone's front depth camera; in applications with face tracking, simple estimation of the principal light source and ambient light parameters can be supported.
In the course of implementing the present invention, the inventors found that the prior art has the following defects. Schemes requiring a probe object with known geometry demand that such an object be present in the picture and place special requirements on its shape and surface reflectance characteristics, which is difficult to satisfy in practice. Schemes requiring manual calibration of a special reference region as a probe involve a manual calibration step in the computation, and no algorithm currently automates this step, making the overall scheme hard to automate and apply at scale. Schemes using panoramic sample pictures for illumination estimation apply only to outdoor scenes, have high dataset acquisition costs, and offer low precision. Methods that directly acquire light source information are inapplicable to already-shot video, place high demands on acquisition equipment, and require hardware modification. Schemes requiring an RGB-D camera to capture depth information place high demands on equipment and are inapplicable to ordinary video and non-depth cameras lacking depth information. Schemes that simply estimate the mean ambient light intensity cannot satisfy the requirements for simulating complex illumination; lacking any description of the light sources, they cannot even simulate and reconstruct simple shadows. Methods based on frontal face pictures restrict the usage scenario to pictures containing a frontal face, and the model fails when the face is rotated by a large angle. Methods based on face depth information place high demands on equipment and are inapplicable to ordinary video and non-depth cameras lacking depth information.
Summary of the invention
Embodiments of the present invention provide a method, apparatus, device and storage medium for determining illumination parameters, improving the accuracy and versatility of illumination parameter estimation.
In a first aspect, an embodiment of the present invention provides a method for determining illumination parameters, comprising:
identifying at least one face sampling point in a face region of a target picture;
determining a reflection coefficient set matching the face sampling points by a geometric method and/or a training-set method;
calculating an illumination tensor set corresponding to the face sampling points according to the pixel information corresponding to the face sampling points and the reflection coefficient set; and
determining the illumination parameters corresponding to the target picture according to the illumination tensor set corresponding to the face sampling points.
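The four claimed steps can be sketched as a minimal pipeline. Everything below is illustrative only: the function names, the grid-based sample-point stub, the uniform skin albedo and the Lambertian-style intensity model are assumptions for demonstration, not the patent's actual geometric or training-set methods.

```python
import numpy as np

def identify_face_sample_points(picture, num_points=16):
    """Stub: the patent derives sample points from detected face key
    points; here we just sample a fixed grid inside a hypothetical
    central face region for illustration."""
    h, w = picture.shape[:2]
    ys = np.linspace(h * 0.3, h * 0.7, 4).astype(int)
    xs = np.linspace(w * 0.3, w * 0.7, 4).astype(int)
    return [(y, x) for y in ys for x in xs][:num_points]

def reflection_coefficients(points):
    """Stub for the 'geometric and/or training-set method': assume a
    uniform skin albedo at every sample point."""
    return np.full(len(points), 0.6)

def illumination_vectors(picture, points, albedos):
    """Per-point illumination magnitude under a Lambertian-style
    assumption: observed intensity = albedo * illumination."""
    intensities = np.array([picture[y, x] for y, x in points], dtype=float)
    return intensities / albedos

def illumination_parameter(illum):
    """Aggregate the per-point illumination values (here: the mean)."""
    return float(np.mean(illum))

# Synthetic grayscale "target picture" lit uniformly at intensity 0.48
picture = np.full((100, 100), 0.48)
points = identify_face_sample_points(picture)
albedos = reflection_coefficients(points)
illum = illumination_vectors(picture, points, albedos)
print(round(illumination_parameter(illum), 3))  # 0.8
```

With a uniform 0.48 intensity and a 0.6 albedo, every sample point recovers an illumination of 0.8, so the aggregate is 0.8 as well; real pictures would of course yield varying per-point values.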
In a second aspect, an embodiment of the present invention further provides an apparatus for determining illumination parameters, comprising:
a sampling point identification module, configured to identify at least one face sampling point in a face region of a target picture;
a reflection coefficient set determination module, configured to determine a reflection coefficient set matching the face sampling points by a geometric method and/or a training-set method;
an illumination tensor set calculation module, configured to calculate an illumination tensor set corresponding to the face sampling points according to the pixel information corresponding to the face sampling points and the reflection coefficient set; and
an illumination parameter determination module, configured to determine the illumination parameters corresponding to the target picture according to the illumination tensor set corresponding to the face sampling points.
In a third aspect, an embodiment of the present invention further provides a computer device, comprising:
one or more processors; and
a storage device for storing one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method for determining illumination parameters provided by any embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method for determining illumination parameters provided by any embodiment of the present invention.
Embodiments of the present invention identify at least one face sampling point in the face region of a target picture, determine a reflection coefficient set matching the face sampling points by a geometric method and/or a training-set method, calculate an illumination tensor set corresponding to the face sampling points according to the pixel information corresponding to the face sampling points and the reflection coefficient set, and finally determine the illumination parameters corresponding to the target picture according to that illumination tensor set. This solves the problems of existing illumination parameter estimation methods, which struggle to satisfy illumination simulation requirements and are costly: the illumination parameters of a picture are fully exploited, the demands on equipment are low, and the simulation of complex illumination is supported, thereby improving the accuracy and versatility of illumination parameter estimation.
Detailed description of the invention
Fig. 1 is a flowchart of a method for determining illumination parameters provided by Embodiment 1 of the present invention;
Fig. 2a is a flowchart of a method for determining illumination parameters provided by Embodiment 2 of the present invention;
Fig. 2b is a schematic diagram of the effect of obtaining face sampling points from key points, provided by Embodiment 2 of the present invention;
Fig. 2c is a schematic diagram of the effect of obtaining face sampling points from a standard 3D face model, provided by Embodiment 2 of the present invention;
Fig. 2d is a schematic diagram of the effect of determining illumination parameters from 3D face sampling points, provided by Embodiment 2 of the present invention;
Fig. 2e is a schematic diagram of the effect of lighting a virtual billboard in a picture based on the determined illumination parameters, provided by Embodiment 2 of the present invention;
Fig. 3 is a schematic diagram of an apparatus for determining illumination parameters provided by Embodiment 3 of the present invention;
Fig. 4 is a schematic structural diagram of a computer device provided by Embodiment 4 of the present invention.
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it.
It should also be noted that, for ease of description, the accompanying drawings show only the parts related to the present invention rather than the entire content. It should be mentioned that, before the exemplary embodiments are discussed in greater detail, some of them are described as processing or methods depicted as flowcharts. Although a flowchart describes the operations (or steps) as sequential processing, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be rearranged. The processing may be terminated when its operations are completed, but there may also be additional steps not included in the drawings. The processing may correspond to a method, function, procedure, subroutine, subprogram, and so on.
Embodiment one
Fig. 1 is a flowchart of a method for determining illumination parameters provided by Embodiment 1 of the present invention. This embodiment is applicable to cases where the illumination parameters of a picture need to be determined accurately. The method can be executed by an apparatus for determining illumination parameters, which can be implemented in software and/or hardware and can generally be integrated in a computer device. Accordingly, as shown in Fig. 1, the method includes the following operations:
S110. Identify at least one face sampling point in a face region of a target picture.
Here, the target picture is a picture whose illumination parameters need to be determined.
It will be appreciated that faces are one of the key elements of numerous videos (especially short videos, the new content form of the internet). In videos of all kinds, such as beautification, virtual makeup try-on, selfies, livestreams and film and television works, faces appear with relatively high clarity and can serve as ideal illumination estimation probes. Although the geometry of a face is non-convex and the 3D characteristics of different faces differ, it can be assumed that, to a certain extent, the variance of overall 3D face characteristics is small. For given illumination parameters, the light-and-shade shading follows certain universal laws, and these laws can be described by modeling. Compared with requiring the video or scene to contain a known object of some other particular form (such as a standard sphere, or a cube of known color, dimensions and specular surface characteristics), requiring the presence of a face is far easier to satisfy.
Consequently, in embodiments of the present invention, the target picture must contain a face on which the color shading and shadows produced by light source illumination can be observed (in general, a face contained in a picture satisfies these shading and shadow requirements; this embodiment does not detect these conditions and only guarantees that the target picture contains a face), so that the illumination parameters can be determined from the target picture containing the face. When the illumination parameters of the target picture need to be determined, they can be determined through at least one face sampling point in the face region of the target picture.
In an optional embodiment of the present invention, the target picture may be a frame of a target video file.
Here, the target video file may be a video file whose illumination parameters are to be determined. Accordingly, the target picture may be one frame of the target video file.
In embodiments of the present invention, when the target picture comes from a target video file, determining the illumination parameters of the target picture also determines the illumination parameters of the target video file. The illumination parameter determination method provided by the embodiments of the present invention can therefore fully exploit the illumination characteristics of pictures and videos, making them an important feature dimension for describing picture and video content, with key value for numerous applications such as advertisement insertion, video classification, video scene understanding and video recommendation.
S120. Determine a reflection coefficient set matching the face sampling points by a geometric method and/or a training-set method.
Here, the geometric method may determine the reflection coefficient set using physical geometric information, for example using parameters such as the incident light direction of the light source and the surface normal to determine the reflection coefficient set matching the face sampling points. The training-set method may build a model from face sampling points so that the illumination characteristics are described by the model. The reflection coefficient set may be a correlation function, relational expression or the like used to solve the reflection function.
In embodiments of the present invention, after at least one face sampling point in the target picture is obtained, a matching reflection coefficient set can be determined for each face sampling point by the geometric method and/or the training-set method.
S130. Calculate an illumination tensor set corresponding to the face sampling points according to the pixel information corresponding to the face sampling points and the reflection coefficient set.
Here, the pixel information may be the pixel values corresponding to the face sampling points in the target picture, such as the pixel colors of the face sampling points; the pixel information can serve as the observed target lighting effect. The illumination tensor set may be a correlation function, relational expression or the like that reflects the actual lighting information at the face sampling points.
Accordingly, in embodiments of the present invention, the pixel information observed at the face sampling points and the determined reflection coefficient set can be treated as two known quantities, and the illumination tensor set corresponding to each face sampling point can be calculated from these two known quantities.
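One concrete (and hedged) reading of "two known quantities": if each sampling point also has a known surface normal, a shadow-free Lambertian model makes the per-point pixel intensity linear in a directional illumination vector, which can then be recovered by least squares. The normals, the albedo value and the Lambertian assumption below are illustrative, not taken from the patent text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed per-sample-point knowns (not specified by the patent):
# surface normals n_i (e.g. from an aligned 3D face model) and
# reflection coefficients (albedos) rho_i.
normals = rng.normal(size=(50, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
albedos = np.full(50, 0.6)

# Ground-truth directional light used to synthesize pixel intensities
# under a shadow-free Lambertian model: I_i = rho_i * (n_i . l)
true_light = np.array([0.3, 0.5, 0.8])
pixels = albedos * (normals @ true_light)

# With pixels and albedos known, each sample point constrains the
# illumination vector l; stacking the constraints gives the linear
# least-squares problem  (rho_i * n_i) l = I_i.
A = albedos[:, None] * normals
l_est, *_ = np.linalg.lstsq(A, pixels, rcond=None)
print(np.allclose(l_est, true_light))  # True
```

On real data, points in attached shadow (negative n·l) would break the linearity and should be excluded before solving; the clamp-free synthesis above sidesteps that for clarity.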
S140. Determine the illumination parameters corresponding to the target picture according to the illumination tensor set corresponding to the face sampling points.
In embodiments of the present invention, the illumination parameter information corresponding to the target picture can finally be determined from the illumination tensor set corresponding to each face sampling point. Since the illumination tensor sets corresponding to the individual face sampling points may not be fully consistent, data processing (such as averaging) needs to be applied to the illumination tensor sets to obtain the final illumination parameters.
The illumination determination scheme provided by the embodiments of the present invention does not require complicated equipment support; the demands on equipment are therefore low, automation and large-scale application are easy, versatility is strong, and complex illumination can be estimated accurately, satisfying the requirements for simulating complex illumination.
In an optional embodiment of the present invention, after the illumination parameters corresponding to the target picture are determined according to the illumination tensor set corresponding to the face sampling points, the method may further comprise: obtaining the illumination parameters determined separately for at least two frames of the target video file; and processing the illumination parameters with a set data processing technique to obtain the illumination parameters matching the target video file; wherein the illumination parameters include the illumination incident direction and the illumination intensity.
Here, the set data processing technique may be a method for further processing and transforming the obtained data. For example, it may take the mean of the data or filter the data; the embodiments of the present invention do not limit the concrete form of the set data processing technique.
In embodiments of the present invention, if the illumination parameter determination method is applied to video illumination feature extraction, multiple frames of the target video file can be analyzed for their corresponding illumination parameters in order to improve the accuracy of video illumination feature extraction. It will be appreciated that, under normal conditions, and in particular for short videos, the illumination parameters in a video (including the incident direction and the illumination intensity) do not change. Therefore, the corresponding illumination parameters can be determined for multiple frames of the target video file, and the resulting multiple sets of illumination parameters can be processed with the set data processing technique to obtain the final illumination parameters matching the target video file. Determining the final illumination parameters of a target video file from the illumination parameters of multiple frames can weaken or eliminate the errors of single-frame illumination parameters, further improving the accuracy of the illumination parameters of the target video file.
In an optional embodiment of the present invention, processing the illumination parameters with a set data processing technique may include at least one of the following:
voting over the illumination parameters, taking the mean or median of each parameter as the illumination parameters matching the target video file;
taking the moving average of the illumination parameters with a temporal sliding window as the illumination parameters matching the target video file; and
filtering the illumination parameters with a Kalman filter to obtain the illumination parameters matching the target video file.
In embodiments of the present invention, the set data processing techniques include, but are not limited to, voting, moving averaging and filtering. Specifically, the voting approach may directly take the mean or median of the illumination parameters determined for the multiple frames as the illumination parameters matching the target video file. The moving-average approach may use a temporal sliding window to take the average of the illumination parameters over a certain time period, which can improve the accuracy of the illumination parameters for video files whose illumination parameters change over time. The filtering approach may use a suitable filter, such as a Kalman filter, to filter the illumination parameters and thus suppress noise, yielding a more robust illumination parameter estimate; the final filtered result serves as the illumination parameters matching the target video file.
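The three aggregation options can be sketched on a synthetic sequence of per-frame intensity estimates. The frame values, window size and fixed filter gain are invented for illustration; a real Kalman filter would also track an error covariance rather than use a constant gain.

```python
import numpy as np

# Hypothetical per-frame illumination intensity estimates for a clip:
# a true value near 0.8, noise, and one outlier frame (1.60).
frames = np.array([0.82, 0.79, 0.81, 1.60, 0.80, 0.78, 0.83])

# Voting / robust aggregation: the median resists the outlier frame.
median_est = float(np.median(frames))

# Temporal sliding window: moving average over 3-frame windows, for
# videos whose illumination may drift over time.
kernel = np.ones(3) / 3
moving_avg = np.convolve(frames, kernel, mode="valid")

# A minimal 1-D Kalman-style filter (constant-state model with a
# fixed gain) standing in for the Kalman filtering mentioned above.
estimate, gain = frames[0], 0.3
for z in frames[1:]:
    estimate = estimate + gain * (z - estimate)

print(round(median_est, 2))  # 0.81
```

Note how the median (0.81) shrugs off the outlier frame, while the fixed-gain filter is pulled toward it and only gradually recovers; this is exactly the single-frame error suppression the multi-frame scheme is after.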
Embodiments of the present invention identify at least one face sampling point in the face region of a target picture, determine a reflection coefficient set matching the face sampling points by a geometric method and/or a training-set method, calculate an illumination tensor set corresponding to the face sampling points according to the pixel information corresponding to the face sampling points and the reflection coefficient set, and finally determine the illumination parameters corresponding to the target picture according to that illumination tensor set. This solves the problems of existing illumination parameter estimation methods, which struggle to satisfy illumination simulation requirements and are costly: the illumination parameters of a picture are fully exploited, the demands on equipment are low, and the simulation of complex illumination is supported, thereby improving the accuracy and versatility of illumination parameter estimation.
Embodiment two
Fig. 2a is a flowchart of a method for determining illumination parameters provided by Embodiment 2 of the present invention. This embodiment is elaborated on the basis of the above embodiment. It gives concrete implementations of identifying at least one face sampling point in the face region of the target picture, determining the reflection coefficient set matching the face sampling points, and calculating the illumination tensor set corresponding to the face sampling points according to the pixel information corresponding to the face sampling points and the reflection coefficient set. Accordingly, as shown in Fig. 2a, the method of this embodiment may include:
S210. Identify at least one face sampling point in a face region of a target picture.
Accordingly, S210 may specifically include the following operations:
S211. Input the target picture to a face detector, and obtain at least one key detection point in the face region of the target picture marked by the face detector.
Here, the face detector may be any face detection technique for detecting a face region, such as the Viola-Jones face detector. Any technique usable for face detection can serve as the face detector; the embodiments of the present invention do not limit this. The key detection points may be key points for identifying a face, such as salient feature points on edge parts of the face like the eyes, eyebrows and mouth.
It should be noted that the embodiments of the present invention do not use the key detection points of the detected face region as face sampling points. Because key detection points are usually located at the face contour, eyes, eyebrows and mouth, the illumination parameters at these positions are usually affected by various factors (for example, shadow occlusion lowers the illumination intensity) and cannot truly reflect the illumination parameters of the actual environment. Therefore, in embodiments of the present invention, when obtaining face sampling points, at least one key detection point in the face region of the target picture can be obtained first, and face sampling points that can reflect the true illumination parameters can then be obtained at other positions of the face according to the obtained key detection points.
S212. Triangulate adjacent groups of three key detection points, and generate multiple feature points inside the resulting triangles as the face sampling points.
Here, a feature point may be a distinctive point inside a triangle, such as its center or centroid.
Fig. 2b is a schematic diagram of the effect of obtaining face sampling points from key points, provided by Embodiment 2 of the present invention. As shown in Fig. 2b, after the key detection points of the face region are obtained, adjacent groups of three key detection points can be connected in turn (with lines that may physically exist or be merely virtual) to form triangles, realizing the triangulation of the key detection points. The feature points inside each triangle are then obtained from the resulting triangles as face sampling points. This way of obtaining face sampling points effectively avoids using key detection points, whose illumination parameters differ considerably, as face sampling points; instead, pixels of the face that can actually reflect the illumination parameters serve as face sampling points, guaranteeing the accuracy of the illumination parameters.
In an optional embodiment of the present invention, generating multiple feature points inside the resulting triangles as the face sampling points may include: determining the centroid of a triangle, connecting the centroid to each vertex of the triangle to obtain multiple new triangles, and using the centroids of the new triangles together with the centroid of the original triangle as the face sampling points.
Further, when obtaining the face sampling points, in addition to using the centroid of each triangle formed by the key detection points as a face sampling point, the number of face sampling points can be increased by connecting the centroid inside each such triangle to the vertices of that triangle to form multiple (three) new triangles, and then also using the centroids of the new triangles as face sampling points.
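Under the construction just described, each triangle of key detection points yields four sample points: its centroid, plus the centroids of the three sub-triangles obtained by joining the centroid to the vertices. A minimal sketch follows; the coordinates are arbitrary 2D image points chosen for illustration.

```python
import numpy as np

def sample_points_from_triangle(v0, v1, v2):
    """Given three adjacent key detection points (triangle vertices),
    return four candidate face sample points: the triangle's centroid
    plus the centroids of the three sub-triangles formed by joining
    the centroid to the vertices."""
    v0, v1, v2 = (np.asarray(v, dtype=float) for v in (v0, v1, v2))
    g = (v0 + v1 + v2) / 3.0               # centroid of the triangle
    subs = [(g + v0 + v1) / 3.0,           # centroids of the three new
            (g + v1 + v2) / 3.0,           # triangles that share g
            (g + v2 + v0) / 3.0]
    return [g] + subs

pts = sample_points_from_triangle((0, 0), (6, 0), (0, 6))
print(pts[0])  # [2. 2.]
```

All four points lie strictly inside the triangle, away from the key detection points themselves, which is the property the scheme relies on.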
S220. Determine a reflection coefficient set matching the face sampling points by a geometric method and/or a training-set method.
Accordingly, when the reflection coefficient set matching the face sampling points is determined by the geometric method, S220 may specifically include the following operations:
S221a. Obtain a 3D face alignment model, align the 3D face alignment model with the face region in the target picture, and determine face pose information corresponding to the face region.
Here, the 3D face alignment model may be a pre-established model against which the face in the target picture is compared. The face pose information includes, but is not limited to, pitch, tilt and rotation, and can be used to help determine the deflection angle after the 3D face alignment model is aligned with the face region.
In embodiments of the present invention, a 3D face alignment model can be aligned with the detected face region. Since the normal information and self-occlusion information of the undeflected 3D face alignment model are known, once the model is aligned with the face region in the target picture, the normal and self-occlusion information of each sampling point can be obtained from the face pose information corresponding to the face region combined with the 3D face alignment model.
In an alternate embodiment of the present invention where, the 3D face alignment model may include: the standard pre-established 3D faceform, alternatively, using three-dimensional reconstruction, the 3D face mould that is reconstructed according to the human face region in the Target Photo Type.
Fig. 2 c is a kind of effect that face sampled point is obtained according to standard 3D faceform provided by Embodiment 2 of the present invention Schematic diagram.Wherein, the picture of label (1) is standard 3D faceform in Fig. 2 c, and the picture of label (2) is the visualization of model normal Schematic diagram, the picture of label (3) is the schematic diagram sampled to face projection print, and the picture of label (4) is using adopting The position and normal information that sample obtains carry out the illustraton of model that mesh reconstruction obtains.In embodiments of the present invention, as shown in Figure 2 c, 3D Face alignment model can be using the standard 3D faceform pre-established.It is had differences in view of the geometry of different faces, Specific face, using three-dimensional reconstruction, the 3D face mould reconstructed according to the human face region in Target Photo can also be directed to Type is as 3D face alignment model.
S222a, the multiple incident directions for generating incident ray.
Correspondingly, after 3D face alignment model is aligned with the human face region in Target Photo, it can be in Target Photo Human face region (while being also 3D face alignment model) gives the incident direction of multiple and different incident rays at random, true to simulate The light source information of human face region in scene.
S223a: according to the formula R(x, ω_i) = ρ · V(x, ω_i) · max(dot(n_x, ω_i), 0), separately calculating, for each incident direction ω_i, the reflectance function R(x, ω_i) corresponding to face sampling point x.
Here i ∈ [1, m], where m is the total number of incident directions; ω_i is the i-th incident direction; n_x is the normal direction at point x determined from the face pose information; dot(·) is the vector dot-product operation; max(·) is the maximization operation; V(x, ω_i) is the visibility of point x along direction ω_i; and ρ is the surface albedo of the face.
In the embodiment of the present invention, the face is assumed to be a diffusely reflecting (Lambertian) material. For a given incident direction ω_i, the reflectance function R(x, ω_i) of face sampling point x can therefore be calculated according to the formula R(x, ω_i) = ρ · V(x, ω_i) · max(dot(n_x, ω_i), 0), and the reflectance function can then be used to calculate the reflection coefficient set.
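Under the stated Lambertian assumption, the reflectance function can be sketched as follows; the normal, direction and visibility values are hypothetical, and visibility is passed in directly rather than obtained by ray tracing against the 3D model:

```python
import numpy as np

def reflectance(n_x, omega_i, visible, rho=1.0):
    """Lambertian reflectance R(x, w_i) = rho * V(x, w_i) * max(dot(n_x, w_i), 0).
    n_x: unit surface normal at sampling point x; omega_i: unit incident direction;
    visible: V(x, w_i) in {0, 1}, whether x is unoccluded along w_i; rho: albedo."""
    return rho * visible * max(np.dot(n_x, omega_i), 0.0)

n = np.array([0.0, 0.0, 1.0])          # normal pointing toward the camera
w = np.array([0.0, 0.0, 1.0])          # light arriving head-on
print(reflectance(n, w, visible=1))    # 1.0: full diffuse response
print(reflectance(n, -w, visible=1))   # 0.0: light from behind is clamped to zero
```

The `max(·, 0)` clamp is what makes light arriving from behind the surface contribute nothing, and setting `visible=0` models self-occlusion by the face geometry.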
S224a: according to the formula R_j = ∫_S R(x, ω_i) Y_j(ω_i) dω_i, calculating the reflection coefficient R_j of each order matching point x; these coefficients constitute the reflection coefficient set.
Here j ∈ [1, n], where n is the preset total order of the expansion; S is the hemisphere at point x determined by the normal of x; and Y_j(ω_i) is the spherical harmonic basis function of order j in direction ω_i.
Further, after the reflectance function R(x, ω_i) of face sampling point x has been obtained, the integral R_j = ∫_S R(x, ω_i) Y_j(ω_i) dω_i can be estimated by the Monte Carlo method, so as to solve the reflection coefficient R_j of each order matching point x. Note that once ω_i is determined, the spherical harmonic basis value Y_j(ω_i) in the formula is a known quantity. After the reflection coefficients of every face sampling point have been obtained, the reflection coefficients of all face sampling points together constitute the reflection coefficient set.
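The Monte Carlo estimate of R_j can be illustrated for the first four real spherical harmonics (bands 0 and 1); this is a toy sketch assuming full visibility V = 1 (no self-occlusion), ρ = 1 and uniform hemisphere sampling, not the patent's implementation:

```python
import numpy as np

def sh_basis(w):
    """First 4 real spherical harmonics (bands 0-1) at unit direction w = (x, y, z)."""
    x, y, z = w
    return np.array([0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x])

def sample_hemisphere(n, rng):
    """Uniform random unit direction on the hemisphere around normal n."""
    while True:
        v = rng.normal(size=3)
        v /= np.linalg.norm(v)
        if np.dot(v, n) > 0:
            return v

def mc_reflection_coeffs(n, rho=1.0, m=20000, seed=0):
    """Monte Carlo estimate of R_j = integral over hemisphere S of
    R(x, w) * Y_j(w) dw, with R(x, w) = rho * max(dot(n, w), 0)."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(4)
    for _ in range(m):
        w = sample_hemisphere(n, rng)
        acc += rho * max(np.dot(n, w), 0.0) * sh_basis(w)
    return (2.0 * np.pi / m) * acc   # hemisphere area divided by sample count

R = mc_reflection_coeffs(np.array([0.0, 0.0, 1.0]))
# R[0] ≈ pi * 0.282095 ≈ 0.886, R[2] ≈ (2*pi/3) * 0.488603 ≈ 1.023,
# R[1] ≈ R[3] ≈ 0 by symmetry (analytic values for an unoccluded cosine lobe)
```

For this unoccluded cosine lobe the integrals have closed forms, which gives a useful sanity check on the estimator; in the actual method the visibility term makes the integral irregular, which is why Monte Carlo estimation is used.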
It should be noted that, for a given face sampling point x, the parameter ρ in the reflectance function is a constant. During the calculation this constant only affects the light intensity, not the direction of the light. If one is mainly concerned with the direction of the light when determining the illumination parameters, this constant can be ignored; if an accurate estimate of the radiance is required, ρ must be solved by calibration or by an error-minimization method.
As a simple example, ρ is first set to some value, the illumination parameters of a picture are calculated based on that ρ, and the picture is relit in simulation using those illumination parameters. The simulated light intensity of the relit result is then compared with the original light intensity of the picture: if the simulated intensity is greater than the original intensity, ρ can be decreased dynamically, and if it is smaller, ρ can be increased dynamically, so as to arrive at an accurate estimate of ρ.
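The dynamic adjustment of ρ described above can be illustrated with a toy halving-step search; the linear relighting model `simulate` is an invented stand-in for the actual relighting pipeline, and the observed intensity is hypothetical:

```python
def calibrate_rho(observed_intensity, simulate, rho=0.5, step=0.25, iters=20):
    """Toy bisection-style search for albedo rho: if the simulated relit
    intensity exceeds the observed one, decrease rho, otherwise increase it.
    `simulate` maps a rho value to a simulated intensity (assumed monotone)."""
    for _ in range(iters):
        if simulate(rho) > observed_intensity:
            rho -= step
        else:
            rho += step
        step *= 0.5   # shrink the adjustment each round so the search converges
    return rho

# Hypothetical monotone relighting model: intensity = 200 * rho
rho_hat = calibrate_rho(observed_intensity=120.0, simulate=lambda r: 200.0 * r)
print(round(rho_hat, 3))  # ≈ 0.6
```

Because each step is half the previous one, the final error is bounded by twice the last step size, so 20 iterations pin ρ down to well below 10⁻³ under the monotonicity assumption.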
Determining the reflection coefficient set matching the face sampling points by the geometric method requires no training step, so it does not depend on training data and imposes no requirement on the angular pose of the face.
Correspondingly, when the reflection coefficient set matching the face sampling points is determined by the training-set method, S220 may specifically include the following operations:
S221b: inputting the face sampling points into a pre-trained reflection-coefficient-set determination model, and taking the output of the model as the reflection coefficient set matching the face sampling points.
The reflection-coefficient-set determination model is trained on training samples generated from the face images produced when each of a first number of faces is illuminated in turn by a second number of point light sources distributed on a sphere surrounding the face; the sampling points in the face images, and the reflection coefficient sets corresponding to those sampling points, are labeled in the training samples in advance.
The reflection-coefficient-set determination model is used to determine the reflection coefficient set matching the face sampling points. The first number and the second number may be set according to actual needs; the embodiment of the present invention does not limit their specific values.
In the embodiment of the present invention, the training set used by the reflection-coefficient-set determination model may consist of the product of a first number (say M) of faces and a second number (say N) of point light sources distributed on a sphere surrounding the face. Since the size of a face is negligible relative to the distance of the light sources from the face, each point light source can be approximated as a directional light source, and the illumination of each picture can then be decomposed into a weighted sum of spherical harmonic bases. For each face sampling point, M × N linear equations can be established, and the linear least-squares solution of this system, obtained for example by singular value decomposition or by a machine-learning method, gives R_j. In this way, an individual reflection-coefficient-set determination model describing the illumination characteristics of each face sampling point can be established.
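The linear system for a single sampling point can be sketched as follows; the per-light spherical-harmonic coefficients and the "true" reflection coefficients are synthetic stand-ins for calibrated training data, and `numpy.linalg.lstsq` (which uses SVD internally) plays the role of the least-squares solver mentioned above:

```python
import numpy as np

# Hypothetical training data for ONE face sampling point: each row of A holds
# the SH coefficients L_j of one calibrated point light (N lights, n = 4 SH
# terms), and b holds the pixel value observed under that light. The
# reflection coefficients R_j are the least-squares solution of A @ R = b.
rng = np.random.default_rng(1)
N, n = 60, 4
A = rng.normal(size=(N, n))                       # per-light SH illumination coefficients
R_true = np.array([0.9, 0.1, 0.5, -0.2])          # synthetic ground truth
b = A @ R_true + rng.normal(scale=0.01, size=N)   # noisy observed pixel values

R_fit, *_ = np.linalg.lstsq(A, b, rcond=None)     # SVD-based least squares
print(np.round(R_fit, 2))                         # close to R_true
```

With M faces the same construction simply stacks M such blocks per sampling point, giving the M × N equations described in the text.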
It should be noted that, since the positions of the face sampling points obtained for the same face under the N illumination conditions may deviate from one another (in the worst case the face may not be detected at all), the key detection points of a given face may be labeled only once during the training of the reflection-coefficient-set determination model. Optionally, the labeling of the face key detection points is performed under the best illumination condition (typically with the light source directly in front of the face). The training-set method of determining the reflection coefficient set matching the face sampling points can likewise be trained on face data of any angular pose and is not limited to frontal faces.
S230: according to the formula L(x) = Σ_{j=1}^{n} R_j L_j, calculating each order of illumination coefficient L_j; these coefficients constitute the illumination coefficient set.
Here j ∈ [1, n], where n is the preset total order of the expansion, and L(x) is the pixel information of face sampling point x.
In the embodiment of the present invention, optionally, after the reflection coefficient set matching the face sampling points has been determined by the geometric method and/or the training-set method, the linear relationship L(x) = Σ_{j=1}^{n} R_j L_j between the pixel values of the face sampling points, the reflection coefficient set and the illumination coefficient set can be established, and each order of illumination coefficient L_j calculated. The value of n in the above formula can be chosen arbitrarily; the larger n is, the more accurate the result. Optionally, the formula can be solved by least squares.
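Step S230 can be sketched as an ordinary least-squares solve, stacking one equation L(x) = Σ_j R_j(x) L_j per face sampling point; the data below are synthetic, and n = 9 corresponds to a third-order expansion:

```python
import numpy as np

rng = np.random.default_rng(2)
num_points, n = 100, 9                     # 100 sampling points, 9 SH terms

R_mat = rng.normal(size=(num_points, n))   # per-point reflection coefficients R_j(x)
L_true = rng.normal(size=n)                # unknown illumination coefficients L_j
pixels = R_mat @ L_true                    # noise-free pixel values L(x)

# Least-squares solve of the overdetermined system R_mat @ L = pixels
L_fit, *_ = np.linalg.lstsq(R_mat, pixels, rcond=None)
```

Because every sampling point shares the same illumination coefficients, more sampling points simply add rows to the system, making the estimate of L_j more robust to per-pixel noise.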
S240: determining, according to the illumination coefficient set corresponding to the face sampling points, the illumination parameters corresponding to the target picture.
Fig. 2d is a schematic diagram of determining illumination parameters from 3D face sampling points, as provided by Embodiment 2 of the present invention. Fig. 2e is a schematic diagram of lighting a virtual billboard in a picture based on the determined illumination parameters, as provided by Embodiment 2 of the present invention. As shown in Fig. 2d, the method for determining illumination parameters provided by the embodiment of the present invention can faithfully reflect the illumination parameters of each face pixel. Meanwhile, as shown in Fig. 2e, when the method is applied to the field of advertisement placement (in the lower-left corner of Fig. 2e, the virtual billboard bearing the "Company A" pattern casts a shadow on the table, the billboard having been lit according to the determined illumination parameters), a good visual effect is likewise obtained, giving the placed advertisement better realism.
It should be noted that Fig. 2a is only a schematic diagram of one implementation: there is no order dependency between S221a-S224a and S221b, and either branch may be chosen.
With the above technical solution, the reflection coefficient set matching the face sampling points is determined by the geometric method and/or the training-set method, which guarantees the accuracy of the reflection coefficient set and thus improves the accuracy of the illumination parameter estimation.
Embodiment three
Fig. 3 is a schematic diagram of a device for determining illumination parameters provided by Embodiment 3 of the present invention. As shown in Fig. 3, the device includes: a sampling point identification module 310, a reflection coefficient set determination module 320, an illumination coefficient set calculation module 330 and an illumination parameter determination module 340, wherein:
the sampling point identification module 310 is configured to identify at least one face sampling point in a face region in a target picture;
the reflection coefficient set determination module 320 is configured to determine, by a geometric method and/or a training-set method, a reflection coefficient set matching the face sampling points;
the illumination coefficient set calculation module 330 is configured to calculate an illumination coefficient set corresponding to the face sampling points according to pixel information corresponding to the face sampling points and the reflection coefficient set;
the illumination parameter determination module 340 is configured to determine illumination parameters corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling points.
The embodiment of the present invention identifies at least one face sampling point in the face region of a target picture; determines, by a geometric method and/or a training-set method, a reflection coefficient set matching the face sampling points; calculates an illumination coefficient set corresponding to the face sampling points from the pixel information corresponding to the face sampling points and the reflection coefficient set; and finally determines the illumination parameters corresponding to the target picture from that illumination coefficient set. This solves the problems of existing illumination parameter estimation methods, which struggle to achieve a satisfactory lighting simulation effect and are costly: the illumination parameters of a picture can be fully exploited with low equipment requirements while still simulating complex illumination, thereby improving the accuracy and versatility of illumination parameter estimation.
Optionally, the sampling point identification module 310 includes: a key detection point acquisition unit, configured to input the target picture into a face detector and obtain at least one key detection point marked by the face detector in the face region of the target picture; and a sampling point acquisition unit, configured to triangulate each group of three adjacent key detection points and generate multiple feature points inside the resulting triangles as the face sampling points.
Optionally, the sampling point acquisition unit is specifically configured to determine the centroid of a triangle, connect the centroid to each vertex of the triangle to obtain multiple new triangles, and take the centroids of the new triangles, together with the centroid of the original triangle, as the face sampling points.
Optionally, the reflection coefficient set determination module 320 is configured to: obtain a 3D face alignment model, align it with the face region in the target picture, and determine face pose information corresponding to the face region; generate multiple incident directions of incident light; according to the formula R(x, ω_i) = ρ · V(x, ω_i) · max(dot(n_x, ω_i), 0), separately calculate, for each incident direction ω_i, the reflectance function R(x, ω_i) corresponding to face sampling point x, where i ∈ [1, m], m is the total number of incident directions, ω_i is the i-th incident direction, n_x is the normal direction at point x determined from the face pose information, dot(·) is the vector dot-product operation, max(·) is the maximization operation, V(x, ω_i) is the visibility of point x along direction ω_i, and ρ is the surface albedo of the face; and according to the formula R_j = ∫_S R(x, ω_i) Y_j(ω_i) dω_i, calculate the reflection coefficient R_j of each order matching point x, the coefficients constituting the reflection coefficient set, where j ∈ [1, n], n is the preset total order of the expansion, S is the hemisphere at point x determined by the normal of x, and Y_j(ω_i) is the spherical harmonic basis function of order j in direction ω_i.
Optionally, the 3D face alignment model includes: a pre-established standard 3D face model, or a 3D face model reconstructed from the face region in the target picture using a three-dimensional reconstruction technique.
Optionally, the reflection coefficient set determination module 320 is further configured to input the face sampling points into a pre-trained reflection-coefficient-set determination model and take the output of the model as the reflection coefficient set matching the face sampling points; wherein the model is trained on training samples generated from the face images produced when each of a first number of faces is illuminated by a second number of point light sources on a sphere surrounding the face, the sampling points in the face images and the reflection coefficient sets corresponding to those sampling points being labeled in the training samples in advance.
Optionally, the illumination coefficient set calculation module 330 is specifically configured to calculate each order of illumination coefficient L_j, the coefficients constituting the illumination coefficient set, according to the formula L(x) = Σ_{j=1}^{n} R_j L_j, where j ∈ [1, n], n is the preset total order of the expansion, and L(x) is the pixel information of face sampling point x.
Optionally, the target picture is a frame image in a target video file, and the device further includes: a multiple-illumination-parameter acquisition module, configured to obtain the illumination parameters determined separately from at least two frame images in the target video file; and an illumination parameter processing module, configured to process each set of illumination parameters according to a set data processing technique to obtain the illumination parameters matching the target video file; wherein the illumination parameters include an illumination incident direction and an illumination intensity.
Optionally, the illumination parameter processing module is specifically configured to: vote among the illumination parameters, taking the mean or median of each parameter as the illumination parameter matching the target video file; take the moving average of each illumination parameter over a time-domain sliding window as the illumination parameter matching the target video file; or filter each illumination parameter with a Kalman filter to obtain the illumination parameters matching the target video file.
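The median ("voting") and time-domain sliding-window variants can be sketched as follows; the per-frame estimates are invented, and a Kalman filter would replace the moving average in the third variant:

```python
import numpy as np

def smooth_illumination(per_frame, window=5):
    """per_frame: array of per-frame illumination parameters (frames x dims).
    Returns the global median (the 'voting' variant) and a moving average
    over a time-domain sliding window (the second variant)."""
    per_frame = np.asarray(per_frame, dtype=float)
    median = np.median(per_frame, axis=0)              # robust per-parameter vote
    kernel = np.ones(window) / window
    moving = np.vstack([np.convolve(per_frame[:, d], kernel, mode='valid')
                        for d in range(per_frame.shape[1])]).T
    return median, moving

# Hypothetical per-frame (intensity, azimuth) estimates with one outlier frame
frames = np.array([[1.0, 30.0], [1.1, 31.0], [0.9, 29.0], [5.0, 90.0], [1.0, 30.0]])
med, avg = smooth_illumination(frames, window=3)
print(med)   # the median is robust to the single outlier frame
```

The median suits a single global lighting estimate for the whole clip, while the sliding window (or a Kalman filter) preserves gradual lighting changes across frames.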
The above device for determining illumination parameters can execute the method for determining illumination parameters provided by any embodiment of the present invention, and possesses the functional modules and beneficial effects corresponding to the executed method. For technical details not described in this embodiment, refer to the method for determining illumination parameters provided by any embodiment of the present invention.
Embodiment four
Fig. 4 is a structural schematic diagram of a computer device provided by Embodiment 4 of the present invention. Fig. 4 shows a block diagram of a computer device 412 suitable for implementing an embodiment of the present invention. The computer device 412 shown in Fig. 4 is only an example and should not impose any limitation on the function or scope of use of the embodiments of the present invention.
As shown in Fig. 4, the computer device 412 takes the form of a general-purpose computing device. Its components may include, but are not limited to: one or more processors 416, a storage device 428, and a bus 418 connecting the different system components (including the storage device 428 and the processors 416).
Bus 418 represents one or more of several classes of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus and the Peripheral Component Interconnect (PCI) bus.
The computer device 412 typically comprises a variety of computer-system-readable media. These media may be any usable media accessible by the computer device 412, including volatile and non-volatile media and removable and non-removable media.
The storage device 428 may include computer-system-readable media in the form of volatile memory, such as random access memory (RAM) 430 and/or cache memory 432. The computer device 412 may further include other removable/non-removable, volatile/non-volatile computer-system storage media. By way of example only, a storage system 434 may be used to read and write non-removable, non-volatile magnetic media (not shown in Fig. 4, commonly referred to as a "hard drive"). Although not shown in Fig. 4, a disk drive for reading and writing a removable non-volatile magnetic disk (such as a "floppy disk"), and an optical disk drive for reading and writing a removable non-volatile optical disk (such as a Compact Disc Read-Only Memory (CD-ROM), a Digital Video Disc Read-Only Memory (DVD-ROM) or other optical medium), may also be provided. In these cases, each drive may be connected to the bus 418 through one or more data media interfaces. The storage device 428 may include at least one program product having a set of (for example, at least one) program modules configured to perform the functions of the embodiments of the present invention.
A program 436 having a set of (at least one) program modules 426 may be stored, for example, in the storage device 428. Such program modules 426 include, but are not limited to, an operating system, one or more application programs, other program modules and program data; each of these examples, or some combination of them, may include an implementation of a network environment. The program modules 426 generally perform the functions and/or methods of the embodiments described in the present invention.
The computer device 412 may also communicate with one or more external devices 414 (such as a keyboard, pointing device, camera or display 424), with one or more devices that enable a user to interact with the computer device 412, and/or with any device (such as a network card or modem) that enables the computer device 412 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 422. Moreover, the computer device 412 may communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through a network adapter 420. As shown, the network adapter 420 communicates with the other modules of the computer device 412 through the bus 418. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the computer device 412, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, Redundant Arrays of Independent Disks (RAID) systems, tape drives and data backup storage systems.
The processor 416, by running the programs stored in the storage device 428, executes various functional applications and data processing, for example implementing the method for determining illumination parameters provided by the above embodiments of the present invention.
That is, when the processing unit executes the program, it: identifies at least one face sampling point in a face region in a target picture; determines, by a geometric method and/or a training-set method, a reflection coefficient set matching the face sampling points; calculates an illumination coefficient set corresponding to the face sampling points according to pixel information corresponding to the face sampling points and the reflection coefficient set; and determines illumination parameters corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling points.
The computer device thus identifies at least one face sampling point in the face region of a target picture, determines a reflection coefficient set matching the face sampling points by a geometric method and/or a training-set method, calculates an illumination coefficient set corresponding to the face sampling points from the pixel information corresponding to each face sampling point and the reflection coefficient set, and finally determines the illumination parameters corresponding to the target picture from that illumination coefficient set. This solves the problems of existing illumination parameter estimation methods, which struggle to achieve a satisfactory lighting simulation effect and are costly: the illumination parameters of a picture can be fully exploited with low equipment requirements while still simulating complex illumination, thereby improving the accuracy and versatility of illumination parameter estimation.
Embodiment five
Embodiment 5 of the present invention also provides a computer storage medium storing a computer program which, when executed by a computer processor, performs the method for determining illumination parameters of any of the above embodiments of the present invention: identifying at least one face sampling point in a face region in a target picture; determining, by a geometric method and/or a training-set method, a reflection coefficient set matching the face sampling points; calculating an illumination coefficient set corresponding to the face sampling points according to pixel information corresponding to the face sampling points and the reflection coefficient set; and determining illumination parameters corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling points.
The computer storage medium of the embodiment of the present invention may adopt any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection having one or more conductors, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device.
The program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, radio frequency (RF), or any suitable combination of the above.
Computer program code for carrying out the operations of the present invention may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote-computer case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described herein, and that various obvious changes, readjustments and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, it is not limited to the above embodiments and may include further equivalent embodiments without departing from the inventive concept; the scope of the present invention is determined by the scope of the appended claims.

Claims (12)

1. A method for determining illumination parameters, characterized by comprising:
identifying at least one face sampling point in a face region in a target picture;
determining, by a geometric method and/or a training-set method, a reflection coefficient set matching the face sampling point;
calculating, according to pixel information corresponding to the face sampling point and the reflection coefficient set, an illumination coefficient set corresponding to the face sampling point;
determining, according to the illumination coefficient set corresponding to the face sampling point, illumination parameters corresponding to the target picture.
2. The method according to claim 1, characterized in that identifying at least one face sampling point in a face region in a target picture comprises:
inputting the target picture into a face detector, and obtaining at least one key detection point marked by the face detector in the face region of the target picture;
triangulating each group of three adjacent key detection points, and generating multiple feature points inside the resulting triangles as the face sampling points.
3. The method according to claim 2, characterized in that generating multiple feature points inside the resulting triangles as the face sampling points comprises:
determining the centroid of a triangle, connecting the centroid to each vertex of the triangle to obtain multiple new triangles, and taking the centroids of the new triangles, together with the centroid of the original triangle, as the face sampling points.
4. The method according to claim 1, wherein determining, by the geometric method, the reflection coefficient set matching the face sampling point comprises:
obtaining a 3D face alignment model, aligning the 3D face alignment model with the face region in the target picture, and determining face pose information corresponding to the face region;
generating a plurality of incident directions of incident light;
according to the formula R(x, ωi) = ρ·max(dot(nx, ωi), 0)·V(x, ωi), calculating separately, under each incident ray ωi, the reflectance function R(x, ωi) corresponding to the face sampling point x;
wherein i ∈ [1, m], m being the total number of incident directions; ωi is the i-th incident direction, nx is the normal direction at point x determined according to the face pose information, dot() is the vector dot product, max() takes the maximum, V(x, ωi) is the visibility of point x along direction ωi, and ρ is the surface albedo of the face;
according to the formula Rj = ∫S R(x, ωi)·Yj(ωi) dωi, calculating the reflection coefficient Rj of each order matching point x to form the reflection coefficient set;
wherein j ∈ [1, n], n being the preset total expansion order, S being the hemisphere determined at point x by the normal of x, and Yj(ωi) being the j-th order spherical harmonic basis function in direction ωi.
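For reference, the reflection coefficients of claim 4 can be approximated numerically. The sketch below (an illustrative Monte Carlo estimate with numpy, not the patent's implementation) evaluates R(x, ωi) = ρ·max(dot(nx, ωi), 0)·V(x, ωi) for sampled incident directions and projects it onto the first four real spherical-harmonic basis functions to estimate Rj = ∫S R(x, ωi)·Yj(ωi) dωi; visibility is assumed to be 1 everywhere.

```python
import numpy as np

def sh_basis(w):
    """First four real spherical-harmonic basis values at direction w;
    the constants are the standard real-SH normalization factors."""
    x, y, z = w
    return np.array([0.282095,
                     0.488603 * y,
                     0.488603 * z,
                     0.488603 * x])

def reflectance(w_i, n_x, albedo=1.0, visible=True):
    """R(x, w_i) = rho * max(dot(n_x, w_i), 0) * V(x, w_i) from claim 4."""
    return albedo * max(np.dot(n_x, w_i), 0.0) * (1.0 if visible else 0.0)

def reflection_coeffs(n_x, m=20000, seed=0):
    """Monte Carlo estimate of R_j over the hemisphere S around normal n_x,
    using m uniformly sampled incident directions."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(m, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # uniform on the sphere
    v[v @ n_x < 0] *= -1                           # fold onto the hemisphere of n_x
    vals = np.array([reflectance(w, n_x) * sh_basis(w) for w in v])
    return 2 * np.pi * vals.mean(axis=0)           # hemisphere solid angle = 2*pi

R = reflection_coeffs(np.array([0.0, 0.0, 1.0]))
```

For a sampling point whose normal is the +z axis and ρ = 1, the estimate approaches the analytic values 0.282095·π ≈ 0.886 for the constant band and 2π·0.488603/3 ≈ 1.023 for the z band, while the x and y bands vanish by symmetry.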
5. The method according to claim 4, wherein the 3D face alignment model comprises:
a pre-established standard 3D face model; or
a 3D face model reconstructed from the face region in the target picture by three-dimensional reconstruction.
6. The method according to claim 1, wherein determining, by the training-set method, the reflection coefficient set matching the face sampling point comprises:
inputting the face sampling point into a pre-trained reflection coefficient set determination model, and taking the output of the reflection coefficient set determination model as the reflection coefficient set matching the face sampling point;
wherein the reflection coefficient set determination model is trained using, as training samples, face images generated when each of a first number of faces is illuminated by point light sources at a second number of positions on a sphere surrounding the face, the sampling points in the face images and the reflection coefficient sets corresponding to the sampling points being annotated in advance in the training samples.
7. The method according to claim 1, wherein calculating the illumination coefficient set corresponding to the face sampling point according to the pixel information corresponding to the face sampling point and the reflection coefficient set comprises:
according to the formula L(x) = Σj Lj·Rj, calculating each order of illumination coefficient Lj to form the illumination coefficient set;
wherein j ∈ [1, n], n being the preset total expansion order, and L(x) being the pixel information of the face sampling point x.
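The relation in claim 7 ties each sampling point's pixel value to the illumination coefficients linearly, so with several sampling points the Lj can be recovered as a least-squares solution. The sketch below (synthetic, illustrative data with numpy; one common way to realize this step, not necessarily the patent's) demonstrates the recovery under the assumption of noise-free pixels.

```python
import numpy as np

# Each sampling point x gives one equation L(x) = sum_j L_j * R_j(x).
# Stacking the equations for many sampling points yields an
# overdetermined linear system solvable by least squares.
rng = np.random.default_rng(1)
n_points, n_orders = 50, 4
R = rng.uniform(0.0, 1.0, size=(n_points, n_orders))  # R_j per sampling point
L_true = np.array([0.9, 0.1, 0.6, 0.2])               # assumed lighting coefficients
pixels = R @ L_true                                    # observed L(x) values

L_est, *_ = np.linalg.lstsq(R, pixels, rcond=None)
print(np.allclose(L_est, L_true))  # True (noise-free synthetic data)
```

With real pixel data the system is noisy, and the least-squares fit simply returns the coefficients minimizing the squared residual over all sampling points.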
8. The method according to claim 1, wherein the target picture is a frame image in a target video file;
and after determining the illumination parameter corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling point, the method further comprises:
obtaining the illumination parameters determined respectively from at least two frame images in the target video file;
processing each illumination parameter according to a set data processing technique to obtain an illumination parameter matching the target video file;
wherein the illumination parameter includes an illumination incident direction and an illumination intensity.
9. The method according to claim 8, wherein processing each illumination parameter according to the set data processing technique includes at least one of the following:
voting on the illumination parameters, or taking the mean or median of the parameters, as the illumination parameter matching the target video file;
taking a moving average of the illumination parameters with a time-domain sliding window as the illumination parameter matching the target video file; and
filtering the illumination parameters with a Kalman filter to obtain the illumination parameter matching the target video file.
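Of the smoothing options listed in claim 9, the time-domain sliding-window average is the simplest. The sketch below (illustrative, numpy-based, with a hypothetical layout of one row of illumination parameters per frame) computes it with a 'valid' convolution so that only full windows contribute.

```python
import numpy as np

def sliding_window_average(per_frame_params, window=5):
    """Smooth per-frame illumination parameters with a time-domain
    sliding window; each row holds one frame's parameter vector,
    e.g. (direction_x, direction_y, direction_z, intensity)."""
    p = np.asarray(per_frame_params, dtype=float)
    kernel = np.ones(window) / window
    # mode='valid' keeps only windows lying fully inside the sequence
    return np.array([np.convolve(p[:, c], kernel, mode='valid')
                     for c in range(p.shape[1])]).T

frames = [[1.0, 2.0], [1.2, 2.2], [0.8, 1.8], [1.0, 2.0], [1.0, 2.0]]
print(sliding_window_average(frames, window=5))  # -> [[1. 2.]]
```

A Kalman filter, the other option named in the claim, would additionally model the lighting's frame-to-frame dynamics rather than weighting all frames in the window equally.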
10. An apparatus for determining an illumination parameter, characterized by comprising:
a sampling point identification module, configured to identify at least one face sampling point in a face region of a target picture;
a reflection coefficient set determination module, configured to determine, by a geometric method and/or a training-set method, a reflection coefficient set matching the face sampling point;
an illumination coefficient set calculation module, configured to calculate an illumination coefficient set corresponding to the face sampling point according to pixel information corresponding to the face sampling point and the reflection coefficient set;
and an illumination parameter determination module, configured to determine an illumination parameter corresponding to the target picture according to the illumination coefficient set corresponding to the face sampling point.
11. A computer device, characterized in that the device comprises:
one or more processors; and
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method for determining an illumination parameter according to any one of claims 1-9.
12. A computer storage medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the method for determining an illumination parameter according to any one of claims 1-9.
CN201811108374.XA 2018-09-21 2018-09-21 Method, device and equipment for determining illumination parameters and storage medium Active CN109214350B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811108374.XA CN109214350B (en) 2018-09-21 2018-09-21 Method, device and equipment for determining illumination parameters and storage medium


Publications (2)

Publication Number Publication Date
CN109214350A true CN109214350A (en) 2019-01-15
CN109214350B CN109214350B (en) 2020-12-22

Family

ID=64985375

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811108374.XA Active CN109214350B (en) 2018-09-21 2018-09-21 Method, device and equipment for determining illumination parameters and storage medium

Country Status (1)

Country Link
CN (1) CN109214350B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110234590A1 (en) * 2010-03-26 2011-09-29 Jones Michael J Method for Synthetically Relighting Images of Objects
CN102346857A (en) * 2011-09-14 2012-02-08 西安交通大学 High-precision method for simultaneously estimating face image illumination parameter and de-illumination map
CN105894050A (en) * 2016-06-01 2016-08-24 北京联合大学 Multi-task learning based method for recognizing race and gender through human face image
US20180181831A1 (en) * 2015-11-20 2018-06-28 Infinity Augmented Reality Israel Ltd. Method and a system for determining radiation sources characteristics in a scene based on shadowing analysis


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHUANG Liansheng: "Research on Key Algorithms for Face Recognition under Complex Illumination Conditions", China Doctoral Dissertations Full-text Database *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109883414A (en) * 2019-03-20 2019-06-14 百度在线网络技术(北京)有限公司 A kind of automobile navigation method, device, electronic equipment and storage medium
CN110428491A (en) * 2019-06-24 2019-11-08 北京大学 Three-dimensional facial reconstruction method, device, equipment and medium based on single-frame images
CN110428491B (en) * 2019-06-24 2021-05-04 北京大学 Three-dimensional face reconstruction method, device, equipment and medium based on single-frame image
CN110310224A (en) * 2019-07-04 2019-10-08 北京字节跳动网络技术有限公司 Light efficiency rendering method and device
WO2022237116A1 (en) * 2021-05-10 2022-11-17 北京达佳互联信息技术有限公司 Image processing method and apparatus

Also Published As

Publication number Publication date
CN109214350B (en) 2020-12-22

Similar Documents

Publication Publication Date Title
Bogdan et al. DeepCalib: A deep learning approach for automatic intrinsic calibration of wide field-of-view cameras
Itoh et al. Interaction-free calibration for optical see-through head-mounted displays based on 3d eye localization
JP6246757B2 (en) Method and system for representing virtual objects in field of view of real environment
US20180189974A1 (en) Machine learning based model localization system
Faugeras et al. 3-d reconstruction of urban scenes from image sequences
Mastin et al. Automatic registration of LIDAR and optical images of urban scenes
US8811674B2 (en) Incorporating video meta-data in 3D models
US9270974B2 (en) Calibration between depth and color sensors for depth cameras
US11748906B2 (en) Gaze point calculation method, apparatus and device
EP2375376B1 (en) Method and arrangement for multi-camera calibration
Klein Visual tracking for augmented reality
CN109214350A (en) A kind of determination method, apparatus, equipment and the storage medium of illumination parameter
US20210124917A1 (en) Method for automatically generating hand marking data and calculating bone length
CN107452031B (en) Virtual ray tracking method and light field dynamic refocusing display system
Carvalho et al. Exposing photo manipulation from user-guided 3d lighting analysis
CN108805917A (en) Sterically defined method, medium, device and computing device
Forbes et al. Using silhouette consistency constraints to build 3D models
US20220405968A1 (en) Method, apparatus and system for image processing
Zheng et al. What does plate glass reveal about camera calibration?
Deligianni et al. Patient-specific bronchoscope simulation with pq-space-based 2D/3D registration
You et al. Waterdrop stereo
Gava et al. Dense scene reconstruction from spherical light fields
Bailey Defocus modelling for 3D reconstruction and rendering
Hong et al. A novel Gravity-FREAK feature extraction and Gravity-KLT tracking registration algorithm based on iPhone MEMS mobile sensor in mobile environment
Jiddi Photometric registration of indoor real scenes using an RGB-D camera with application to mixed reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190115

Assignee: Beijing Intellectual Property Management Co.,Ltd.

Assignor: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

Contract record no.: X2023110000095

Denomination of invention: A method, apparatus, device, and storage medium for determining illumination parameters

Granted publication date: 20201222

License type: Common License

Record date: 20230821
