WO2022011621A1 - Face illumination image generation apparatus and method - Google Patents

Face illumination image generation apparatus and method

Info

Publication number
WO2022011621A1
Authority
WO
WIPO (PCT)
Prior art keywords
illumination
face
map
lighting
image
Prior art date
Application number
PCT/CN2020/102222
Other languages
English (en)
French (fr)
Inventor
刘思远
梁嘉旺
甘启
夏璐
罗燕飞
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2020/102222
Priority to CN202080005608.7A (CN114207669A)
Publication of WO2022011621A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects

Definitions

  • the present application relates to the field of image processing, and in particular, to a device and method for generating a face illumination image.
  • in augmented reality (AR) and virtual reality (VR) applications, the lighting consistency between AR/VR objects and real scenes is an important indicator for evaluating the fusion effect.
  • the geometry of the face has significant common features, so the face is often used as a carrier for illumination estimation.
  • a large number of face images carrying illumination parameters are usually required to train an illumination estimation model, and then illumination estimation is realized based on the illumination estimation model.
  • however, face images carrying illumination parameters are difficult and costly to acquire.
  • the present application provides an apparatus and method for generating a face illumination image, which are used to reduce the difficulty and cost of obtaining a face illumination image.
  • a device for generating a face illumination image includes: a camera for capturing a face image and transmitting it to a processor, where the face image may be an RGB image;
  • the processor is configured to: process the face image, for example, decompose the face image using an eigendecomposition algorithm to obtain an albedo map and a normal vector map; render the normal vector map using at least one set of illumination parameters (that is, make the normal vector map exhibit the lighting effect corresponding to each set of illumination parameters) to obtain at least one illumination map, where each set of illumination parameters in the at least one set is used to render one illumination map in the at least one illumination map; and generate at least one face illumination image according to the at least one illumination map and the albedo map (for example, combine or fuse an illumination map with the albedo map to obtain a face illumination image), where the illumination parameters of each face illumination image are the illumination parameters of the illumination map corresponding to that face illumination image.
  • in the present application, the face image is decomposed into an albedo map and a normal vector map, at least one set of illumination parameters is used to render the normal vector map, and at least one face illumination image is correspondingly generated from the at least one illumination map obtained by rendering and the albedo map. In this way, a large number of face illumination images with illumination parameters can be obtained from a single face image, thereby reducing the difficulty and cost of obtaining face illumination images.
  • the face image is obtained by photographing a real face, which gives the face illumination images obtained from it better practicability; when face illumination estimation is then performed based on these face illumination images, the accuracy of the estimation can be further improved.
  • the processor is further configured to: crop the face image, for example through face detection and extraction, to obtain the face region in the face image; and decompose the face region using an eigendecomposition algorithm to obtain the albedo map and the normal vector map.
  • the albedo map includes the face texture information in the face image, and may refer to an image of the face region with illumination removed; for example, the face texture information includes texture information of the eyes, eyebrows, nose, ears, mouth, and so on.
  • the normal vector map includes the geometric shape information of the face in the face image, and may refer to a three-dimensional structure map of the face region; for example, the face geometric shape information includes shape information of the eyes, eyebrows, nose, ears, mouth, and so on.
  • the processor is further configured to: select the at least one set of lighting parameters from multiple sets of lighting parameters in a lighting information database, where the lighting information database includes multiple sets of lighting parameters.
  • optionally, the lighting information database includes a lighting spherical harmonics database or an environment map database; the multiple sets of lighting parameters in the lighting spherical harmonics database include multiple sets of lighting spherical harmonic coefficients, each set of lighting spherical harmonic coefficients being one set of lighting parameters, and the multiple sets of lighting parameters in the environment map database include multiple environment maps, each environment map being one set of lighting parameters.
  • the processor is further configured to: select at least one set of lighting parameters from multiple sets of lighting parameters according to a preset lighting direction or a preset lighting intensity.
  • by flexibly setting or selecting the preset illumination direction or preset illumination intensity, face illumination images that meet actual needs can be obtained, thereby improving the pertinence of the face illumination images.
  • the processor is further configured to: train, according to the at least one face illumination image, a neural network model for performing illumination estimation on a target face image; for example, the at least one face illumination image includes multiple face illumination images, and the processor can train on the multiple face illumination images with a neural network to obtain a model for illumination estimation, and use that model to perform illumination estimation on the target face image to obtain the lighting parameters of the target image.
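The training step described above could be sketched as follows; the embodiment fixes no architecture, so the layer sizes, the MSE loss, and the choice of a small CNN are all illustrative assumptions. The labels are the known illumination parameters (27 second-order spherical harmonic coefficients) carried by the generated face illumination images.

```python
import torch
import torch.nn as nn

class IlluminationEstimator(nn.Module):
    """Toy CNN mapping a face illumination image to 27 SH coefficients
    (9 second-order coefficients per R/G/B channel). Illustrative only."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 27)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = IlluminationEstimator()
images = torch.randn(4, 3, 128, 128)   # batch of generated face illumination images
target_sh = torch.randn(4, 27)         # their known illumination parameters (labels)
loss = nn.functional.mse_loss(model(images), target_sh)
loss.backward()                        # one supervised training step
```

Because every generated image carries its illumination parameters by construction, no manual lighting annotation is needed for this supervision.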
  • the face image is obtained by photographing a real face, so that the face illumination images obtained from it have better practicability; when these face illumination images are used to perform face illumination estimation, the accuracy of the estimation can be further improved.
  • a device for generating a face illumination image includes: a camera unit for capturing a face image; a preprocessing unit for processing the face image to obtain an albedo map and a normal vector map; and an illumination migration unit configured to render the normal vector map using at least one set of illumination parameters to obtain at least one illumination map, where each set of illumination parameters in the at least one set is used to render one illumination map in the at least one illumination map, and to generate at least one face illumination image according to the at least one illumination map and the albedo map, where the illumination parameters of each face illumination image are the illumination parameters of the illumination map corresponding to that face illumination image.
  • the preprocessing unit includes: a face extraction unit for cropping the face image to obtain a face region in the face image; and an eigendecomposition unit for decomposing the face region using an eigendecomposition algorithm to obtain the albedo map and the normal vector map.
  • the albedo map includes face texture information in the face image
  • the normal vector map includes face geometry information in the face image
  • the apparatus further includes: a storage unit configured to select at least one set of lighting parameters from multiple sets of lighting parameters in a lighting information database, where the lighting information database includes multiple sets of lighting parameters.
  • the storage unit is further configured to: select at least one set of lighting parameters from multiple sets of lighting parameters according to a preset lighting direction or a preset lighting intensity.
  • the lighting information database includes: a lighting spherical harmonic database or an environment map database; wherein the multiple sets of lighting parameters in the lighting spherical harmonic database include multiple sets of lighting spherical harmonic coefficients, and each lighting The spherical harmonic coefficient is a set of lighting parameters, and the multiple sets of lighting parameters in the environment map database include multiple environment maps, each of which is a set of lighting parameters.
  • the apparatus further includes: an illumination estimation unit, configured to train a neural network model for performing illumination estimation on the target face image according to at least one face illumination image.
  • a method for generating a face illumination image comprising: photographing a face image; processing the face image to obtain an albedo map and a normal vector map; respectively using at least one set of illumination parameters to render the normal vector map, And according to the at least one illumination map and the albedo map obtained by rendering, at least one face illumination image is correspondingly generated, and the illumination parameters of each face illumination image are illumination parameters of the illumination map corresponding to the face illumination image.
  • processing the face image to obtain an albedo map and a normal vector map includes: cropping the face image to obtain a face region in the face image; and decomposing the face region using an eigendecomposition algorithm to obtain the albedo map and the normal vector map.
  • the albedo map includes face texture information in the face image
  • the normal vector map includes face geometry information in the face image
  • before the normal vector map is rendered using the at least one set of lighting parameters, the method further includes: selecting the at least one set of lighting parameters from a lighting information database, where the lighting information database includes multiple sets of lighting parameters.
  • selecting at least one set of lighting parameters from the lighting information database includes: selecting at least one set of lighting parameters from multiple sets of lighting parameters according to a preset lighting direction or preset lighting intensity.
  • the lighting information database is one of the following: a lighting spherical harmonics database, an environment map database; wherein the multiple sets of lighting parameters in the lighting spherical harmonics database include multiple sets of lighting spherical harmonics Coefficient, each lighting spherical harmonic coefficient is a set of lighting parameters, the multiple sets of lighting parameters in the environment map database include multiple environment maps, and each environment map is a set of lighting parameters.
  • the method further includes: training a neural network model for performing illumination estimation on the target face image according to at least one face illumination image.
  • a device for generating a face illumination image includes: a processor and a memory, the memory storing instructions, and the processor running the instructions in the memory to perform the following operations: receiving a face image; processing the face image to obtain an albedo map and a normal vector map; rendering the normal vector map using at least one set of illumination parameters to obtain at least one illumination map, where each set of illumination parameters in the at least one set is used to render one illumination map in the at least one illumination map; and generating at least one face illumination image according to the at least one illumination map and the albedo map.
  • the processor is further configured to perform the following steps: crop the face image to obtain a face region in the face image; decompose the face region using an eigendecomposition algorithm to obtain the albedo map and the normal vector map.
  • the albedo map includes face texture information in the face image
  • the normal vector map includes face geometry information in the face image
  • the processor further performs the following operation: select at least one set of lighting parameters from multiple sets of lighting parameters in the lighting information database.
  • the processor further performs the following operation: selecting at least one set of lighting parameters from multiple sets of lighting parameters according to a preset lighting direction or a preset lighting intensity.
  • the lighting information database includes: a lighting spherical harmonic database, or an environment map database; wherein the multiple sets of lighting parameters in the lighting spherical harmonic database include multiple sets of lighting spherical harmonic coefficients, each The lighting spherical harmonic coefficient is a set of lighting parameters.
  • the multiple sets of lighting parameters in the environment map database include multiple environment maps, and each environment map is a set of lighting parameters.
  • the processor further performs the following operation: training a neural network model for performing illumination estimation on the target face image according to at least one face illumination image.
  • a fifth aspect provides an apparatus for generating a face illumination image, the apparatus comprising a processor and an interface, where the processor is configured to receive a face image through the interface and perform the following operations: processing the face image to obtain an albedo map and a normal vector map; rendering the normal vector map using at least one set of lighting parameters to obtain at least one illumination map, where each set of lighting parameters in the at least one set is used to render one illumination map in the at least one illumination map; and generating at least one face illumination image according to the at least one illumination map and the albedo map.
  • the processor further performs the following operations: cropping the face image to obtain the face region in the face image; decomposing the face region using an eigendecomposition algorithm to obtain the albedo map and the normal vector map.
  • the albedo map includes face texture information in the face image
  • the normal vector map includes face geometry information in the face image
  • the processor further performs the following operation: selecting at least one set of lighting parameters from multiple sets of lighting parameters in the lighting information database.
  • the processor further performs the following operation: select at least one set of lighting parameters from multiple sets of lighting parameters according to a preset lighting direction or a preset lighting intensity.
  • the lighting information database includes: a lighting spherical harmonic database, or an environment map database; wherein the multiple sets of lighting parameters in the lighting spherical harmonic database include multiple sets of lighting spherical harmonic coefficients, each The lighting spherical harmonic coefficient is a set of lighting parameters.
  • the multiple sets of lighting parameters in the environment map database include multiple environment maps, and each environment map is a set of lighting parameters.
  • the processor further performs the following operations: training a neural network model for estimating illumination on the target face image according to at least one face illumination image.
  • a computer-readable storage medium is provided, where instructions are stored in the computer-readable storage medium, and when the instructions are run on a computer, the computer is caused to execute the method provided by the third aspect or any possible implementation manner of the third aspect.
  • Another aspect of the present application provides a computer program product, which when the computer program product runs on a computer, causes the computer to execute the method provided by the third aspect or any possible implementation manner of the third aspect.
  • FIG. 1 is a schematic structural diagram of an image processing device according to an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a method for generating a face illumination image according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of a face region cropping provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an intrinsically decomposed face region provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of at least one face illumination image provided by an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of another method for generating a face illumination image according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of illumination estimation of a target face image provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an apparatus for generating a face illumination image according to an embodiment of the present application.
  • “at least one” means one or more, and “plurality” means two or more.
  • “And/or”, which describes the association relationship of the associated objects, indicates that there can be three kinds of relationships, for example, A and/or B, which can indicate: the existence of A alone, the existence of A and B at the same time, and the existence of B alone, where A, B can be singular or plural.
  • the character "/" generally indicates an "or" relationship between the associated objects.
  • words such as “first” and “second” do not limit the quantity and execution order.
  • FIG. 1 is a schematic structural diagram of an image processing device provided by an embodiment of the present application.
  • the image processing device may be a mobile phone, a tablet computer, a video camera, a camera, a computer, a wearable device, a vehicle-mounted device, or a portable device.
  • the above devices, or chip systems built into the above devices, are collectively referred to as image processing devices in this application.
  • the embodiments of the present application are described by taking the image processing device as a mobile phone as an example.
  • the mobile phone or a chip system built in the mobile phone includes: a memory 101 , a processor 102 , a sensor component 103 , a multimedia component 104 , and an input/output interface 105 .
  • various components of a mobile phone or a chip system built in a mobile phone will be introduced in detail with reference to FIG. 1 .
  • the memory 101 can be used to store data, software programs, and modules; it mainly includes a program storage area and a data storage area. The program storage area can store software programs, including instructions formed by code, including but not limited to an operating system and the applications required by at least one function, such as a sound playback function or an image playback function; the data storage area can store data created according to the use of the mobile phone, such as audio data, image data, and a phone book.
  • the memory 101 may be used to store a face image, a lighting information database, an image to be evaluated, and the like.
  • the memory may include floppy disks; hard disks, such as built-in hard disks and removable hard disks; magnetic disks; optical disks; magneto-optical disks, such as CD-ROM and DVD-ROM; non-volatile storage devices, such as RAM, ROM, PROM, EPROM, EEPROM, and flash memory; or any other form of storage medium known in the art.
  • the processor 102 is the control center of the mobile phone and uses various interfaces and lines to connect the various parts of the entire device; by running or executing the software programs and/or software modules stored in the memory 101 and calling the data stored in the memory 101, it performs the various functions of the mobile phone, processes data, and monitors the mobile phone as a whole.
  • the processor 102 may be configured to execute one or more steps in the method embodiments of the present application; for example, the processor 102 may be configured to execute one or more of steps S202 to S204 in the following method embodiments.
  • the processor 102 may have a single-processor or multi-processor architecture and may be a single-threaded or multi-threaded processor; in some possible embodiments, the processor 102 may include at least one of a central processing unit, a general-purpose processor, a digital signal processor, a neural network processor, an image processing unit, an image signal processor, a microcontroller, a microprocessor, and the like. In addition, the processor 102 may further include other hardware circuits or accelerators, such as application-specific integrated circuits, field-programmable gate arrays or other programmable logic devices, transistor logic devices, hardware components, or any combination thereof.
  • the processor 102 may implement or execute the various exemplary logical blocks, modules and circuits described in connection with this disclosure.
  • the processor 102 may also be a combination that implements computing functions, such as a combination of one or more microprocessors, a combination of a digital signal processor and a microprocessor, and the like.
  • the sensor assembly 103 includes one or more sensors for providing various aspects of the status assessment of the cell phone.
  • the sensor assembly 103 may comprise a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications, i.e., to serve as a component of a camera or camera module.
  • the sensor component 103 may be used to support the camera in the multimedia component 104 to acquire face images and the like.
  • the sensor component 103 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor, and can detect the acceleration/deceleration, orientation, and open/closed state of the mobile phone, the relative positioning of components, changes in the temperature of the phone, and so on.
  • the multimedia component 104 provides an output interface (a screen) between the mobile phone and the user; the screen may be a touch panel, and when the screen is a touch panel, it may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • the multimedia component 104 further includes at least one camera, for example, the multimedia component 104 includes a front camera and/or a rear camera.
  • the front-facing camera and/or the rear-facing camera can sense external multimedia signals, which are used to form image frames.
  • Each of the front and rear cameras can be a fixed optical lens system or have focal length and optical zoom capability.
  • the camera in the multimedia component 104 may be used to support acquisition of face images and the like.
  • the input/output interface 105 provides an interface between the processor 102 and a peripheral interface module.
  • the peripheral interface module may include a keyboard, a mouse, or a USB (Universal Serial Bus) device.
  • the input interface may be used to obtain the image to be evaluated, the face image, etc.; the output interface may be used to obtain the illumination parameters of the image to be evaluated, and the like.
  • the input/output interface 105 may have only one input/output interface, or may have multiple input/output interfaces.
  • the mobile phone may also include an audio component (including, for example, a microphone), a communication component (including, for example, a wireless fidelity (WiFi) module and a Bluetooth module), and the like, which are not described in detail in this embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a method for generating a face illumination image provided by an embodiment of the present application.
  • the method may be applied to an image processing device including a camera and a processor.
  • the image processing device may be the one shown in FIG. 1 .
  • the method includes the following steps.
  • the image processing device may include one or more cameras, which may include a front camera and a rear camera, and each camera may be a monocular camera or a binocular camera. Any one of the cameras can be used to capture a face image and transmit the captured face image to the processor in the image processing device. The face image may be a three-channel RGB or BGR image with red (R), green (G), and blue (B) channels, or an image in another format such as YUV (where Y indicates luminance and U and V indicate chrominance).
  • the face image may be a front image or a side image of the face, or the like.
  • the albedo map and the normal vector map may correspond to the face region in the face image, and the face region may be part or all of the face image; when the face region is a partial area of the face image, it includes at least the face part of the face image.
  • the albedo map may refer to the image of the face area after removing the illumination, that is, the albedo map includes the face texture information of the face area, for example, the face texture information includes eyes, eyebrows, nose, ears and mouth, etc. texture information.
  • the normal vector (normal) map may refer to a three-dimensional structure map of the face region, that is, the normal vector map includes the face geometric shape information of the face region; for example, the face geometric shape information includes shape information of the eyes, eyebrows, nose, ears, mouth, and so on.
  • the processor in the image processing device may crop the face image when receiving the face image, for example, as shown in FIG. 3 .
  • the processor can detect and crop the face region in the face image through a face frame detection algorithm to obtain the face region, where the face frame detection algorithm detects the position of the face region.
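The cropping step itself can be sketched as follows. The embodiment does not name a specific face frame detection algorithm, so the sketch assumes a detector has already returned a bounding box (x, y, w, h); the margin value is likewise an illustrative assumption.

```python
import numpy as np

def crop_face_region(image, box, margin=0.2):
    """Crop a detected face box out of an H x W x 3 face image,
    expanding it by `margin` on each side and clamping to the image."""
    h, w = image.shape[:2]
    x, y, bw, bh = box
    mx, my = int(bw * margin), int(bh * margin)
    x0, y0 = max(x - mx, 0), max(y - my, 0)
    x1, y1 = min(x + bw + mx, w), min(y + bh + my, h)
    return image[y0:y1, x0:x1]

img = np.zeros((100, 80, 3), dtype=np.uint8)   # placeholder face image
face = crop_face_region(img, box=(20, 30, 40, 40), margin=0.1)
```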
  • the processor can decompose the face region using an eigendecomposition algorithm to obtain an albedo map and a normal vector map.
  • the eigendecomposition algorithm can include an encoder, two feature extraction modules (f_A and f_N), and two decoders; the encoder is used to extract the shallow common features of the face region (that is, features included in both the albedo map and the normal vector map), the feature extraction module f_A is used to extract the features included in the albedo map, and the feature extraction module f_N is used to extract the features included in the normal vector map. The outputs of the two feature extraction modules (f_A and f_N) are each decoded by a decoder, yielding the albedo map and the normal vector map.
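The encoder / two-feature-extractor / two-decoder structure described above can be sketched roughly as follows; all channel counts, depths, and activations are illustrative assumptions, not taken from the embodiment.

```python
import torch
import torch.nn as nn

class IntrinsicDecomposer(nn.Module):
    """Sketch of the eigendecomposition network: one shared encoder,
    two feature-extraction heads (f_A, f_N), and two decoders."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                  # shared shallow features
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.f_A = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.f_N = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.dec_A = nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1)  # -> albedo map
        self.dec_N = nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1)  # -> normal vector map

    def forward(self, face):
        shared = self.encoder(face)
        albedo = torch.sigmoid(self.dec_A(self.f_A(shared)))  # values in [0, 1]
        normal = torch.tanh(self.dec_N(self.f_N(shared)))     # components in [-1, 1]
        return albedo, normal

net = IntrinsicDecomposer()
albedo, normal = net(torch.randn(1, 3, 64, 64))   # one cropped face region
```

The two outputs have the same spatial size as the input face region, matching the statement that the maps are presented in image form.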
  • when the face region is the entire face image, the processor can directly decompose the face image upon receiving it to obtain the albedo map and the normal vector map.
  • the specific process of decomposing the face image is consistent with the above-mentioned process of decomposing the face region. For details, please refer to the above-mentioned related description, which will not be repeated in this embodiment of the present application.
  • the albedo map in this embodiment of the present application can be used to represent the albedo of the image
  • the normal vector map can be used to represent the normal vector of the image, that is, the albedo and normal vector of the face image are presented in the form of images respectively.
  • S203: Render the normal vector map using at least one set of lighting parameters to obtain at least one illumination map, and generate at least one face illumination image according to the at least one illumination map and the albedo map.
  • each group of illumination parameters in the at least one group of illumination parameters is used to render an illumination image in at least one illumination image
  • the illumination parameter of each face illumination image is the illumination parameter of the illumination image corresponding to the face illumination image .
  • the at least one set of lighting parameters may include one or more sets of lighting parameters, and each set may include multiple lighting parameters; specifically, each set of lighting parameters may be characterized by lighting spherical harmonic coefficients or by an environment map.
  • a set of illumination parameters is specifically a set of illumination spherical harmonic coefficients, and the set of illumination spherical harmonic coefficients includes 27 spherical harmonic coefficients, wherein each of the three channels of R, G, and B corresponds to 9 spherical harmonic coefficients.
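The 27-coefficient grouping above can be pictured as a small array; the flat storage order and the row-per-channel layout below are illustrative assumptions.

```python
import numpy as np

# One set of illumination parameters as 27 spherical harmonic coefficients:
# 9 second-order coefficients for each of the R, G, and B channels.
sh_coeffs = np.arange(27.0)      # placeholder values, not real lighting
L = sh_coeffs.reshape(3, 9)      # row 0 -> R, row 1 -> G, row 2 -> B
```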
  • for each set of lighting parameters, the processor in the image processing device may use the set to render the normal vector map, that is, make the normal vector map exhibit the illumination effect corresponding to that set of lighting parameters, so as to obtain an illumination map whose illumination parameters are those of the set.
  • a face illumination image is correspondingly generated from the illumination map and the albedo map, that is, the illumination map and the albedo map are combined to obtain a face illumination image whose illumination parameters are the illumination parameters of that illumination map. Therefore, the above at least one set of illumination parameters can correspondingly generate at least one face illumination image.
  • for example, the processor can use the illumination equation based on the Lambertian assumption (formula (1) below) together with the set of illumination spherical harmonic coefficients to render the normal vector map and obtain the illumination map; then, according to formula (2) below, the illumination map and the albedo map are combined to obtain the face illumination image.
  • Shading(R/G/B) = Σ_{l=0..2} Σ_{m=-l..l} L_lm · Y_lm    (1)
  • Image = Albedo ⊙ Shading(R,G,B)    (2)
  • where Shading(R/G/B) represents any one channel of the illumination map (that is, the R channel, G channel, or B channel of the illumination map), L_lm represents the second-order illumination spherical harmonic coefficients corresponding to that channel, Y_lm represents the second-order spherical harmonic basis (calculated from the normal vector map), l and m are integers, ⊙ denotes the element-wise product, Image represents the face illumination image, Albedo represents the albedo map, and Shading(R,G,B) represents the illumination map composed of the three channels R, G, and B.
  • The above Y_lm comprises Y_{0,0}, Y_{1,-1}, Y_{1,0}, Y_{1,1}, Y_{2,-2}, Y_{2,-1}, Y_{2,0}, Y_{2,1} and Y_{2,2}, whose specific values are given by the formula figures in the description; x, y and z in Y_lm denote the R, G and B values of each pixel in the normal vector map (that is, the components of the normal at that pixel).
  • Y_lm, Shading, Image and Albedo all denote matrices whose size matches the size of the face image.
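To make formulas (1) and (2) concrete, here is a minimal NumPy sketch of the rendering step. The basis constants (0.282095, 0.488603, …) follow the standard real spherical-harmonic convention and are an assumption, since the text supplies the Y_lm values only as figures; the function names (`sh_basis`, `relight`) are likewise illustrative, not from the source.

```python
import numpy as np

def sh_basis(normals):
    """Second-order (9-term) real spherical harmonic basis evaluated at each
    pixel's normal. `normals` is an (H, W, 3) array whose channels hold the
    x, y, z components of the unit normal (the R, G, B values of the
    normal vector map)."""
    x, y, z = normals[..., 0], normals[..., 1], normals[..., 2]
    return np.stack([
        0.282095 * np.ones_like(x),   # Y_{0,0}
        0.488603 * y,                 # Y_{1,-1}
        0.488603 * z,                 # Y_{1,0}
        0.488603 * x,                 # Y_{1,1}
        1.092548 * x * y,             # Y_{2,-2}
        1.092548 * y * z,             # Y_{2,-1}
        0.315392 * (3 * z**2 - 1),    # Y_{2,0}
        1.092548 * x * z,             # Y_{2,1}
        0.546274 * (x**2 - y**2),     # Y_{2,2}
    ], axis=-1)                       # shape (H, W, 9)

def relight(albedo, normals, L):
    """Formula (1): Shading_c = sum_lm L_lm Y_lm per channel, then
    formula (2): Image = Shading * Albedo. `L` is (3, 9): 9 SH
    coefficients for each of R, G, B (27 in total)."""
    Y = sh_basis(normals)                      # (H, W, 9)
    shading = np.einsum('hwk,ck->hwc', Y, L)   # (H, W, 3) illumination map
    return np.clip(shading, 0, None) * albedo  # face illumination image
```

Calling `relight` once per parameter set yields one face illumination image per set, each labelled with its own 27 coefficients.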
  • Illustratively, assuming the at least one set of illumination parameters includes multiple sets of illumination spherical harmonic coefficients, and the normal vector map and albedo map are as shown in FIG. 4, the at least one face illumination image generated according to formulas (1) and (2) may be as shown in FIG. 5.
  • Optionally, before the normal vector map is rendered with the at least one set of lighting parameters, the method may further include: selecting the at least one set of lighting parameters from a lighting information database, where the lighting information database includes multiple sets of lighting parameters.
  • The illumination information database may be stored in the image processing device, specifically in the memory, and the illumination information database includes a large number of groups of illumination parameters.
  • For example, the illumination information database may be an illumination spherical harmonic database, which may include hundreds of thousands of sets of illumination spherical harmonic coefficients, each set corresponding to one group of illumination parameters; in practical applications, the spherical harmonic database may be second-order or third-order, which is not specifically limited in this embodiment of the present application.
  • the lighting information database may also be an environment map database, where the environment map database includes a large number of environment maps, and each environment map may correspond to a set of lighting parameters.
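Where the database stores environment maps, each map can be reduced to one set of 27 coefficients by projecting it onto the same spherical harmonic basis. The sketch below assumes an equirectangular (lat-long) parameterization and standard solid-angle weighting, neither of which the text fixes, and the basis constants are again the assumed standard convention.

```python
import numpy as np

def _sh_basis(n):
    # Second-order real SH basis at unit directions n, shape (..., 3).
    x, y, z = n[..., 0], n[..., 1], n[..., 2]
    return np.stack([0.282095 * np.ones_like(x), 0.488603 * y, 0.488603 * z,
                     0.488603 * x, 1.092548 * x * y, 1.092548 * y * z,
                     0.315392 * (3 * z**2 - 1), 1.092548 * x * z,
                     0.546274 * (x**2 - y**2)], axis=-1)

def envmap_to_sh(env):
    """Project an equirectangular environment map (H, W, 3) onto the
    second-order SH basis, yielding one (3, 9) set of illumination
    parameters (9 coefficients per colour channel)."""
    H, W, _ = env.shape
    theta = (np.arange(H) + 0.5) / H * np.pi        # polar angle per row
    phi = (np.arange(W) + 0.5) / W * 2 * np.pi      # azimuth per column
    theta, phi = np.meshgrid(theta, phi, indexing='ij')
    dirs = np.stack([np.sin(theta) * np.cos(phi),   # direction per texel
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)
    Y = _sh_basis(dirs)
    d_omega = np.sin(theta) * (np.pi / H) * (2 * np.pi / W)  # texel solid angle
    # L_{lm,c} = sum over texels of env * Y_lm * dOmega
    return np.einsum('hwc,hwk,hw->ck', env, Y, d_omega)
```

A uniform white environment, for instance, produces only an ambient (l = 0) coefficient, as expected.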
  • the illumination information database may be dynamically changed, for example, the processor in the image processing device may update the illumination information database periodically or aperiodically to ensure the freshness of the illumination information database.
  • When the processor in the image processing device selects at least one set of lighting parameters from the lighting information database, it may select them at random, according to a preset lighting direction, or according to a preset lighting intensity.
  • The preset lighting direction and preset lighting intensity may be set in advance, and the same or different preset lighting directions and intensities may be set for different face images.
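One hedged way to implement selection by a preset lighting direction is to read an approximate dominant direction off the first-order coefficients of each stored set and rank by cosine similarity. The index-to-axis mapping below is an assumption tied to the basis ordering used in the rendering sketch; the text itself does not prescribe a selection algorithm.

```python
import numpy as np

def dominant_direction(coeffs):
    """Approximate dominant light direction from the first-order SH
    coefficients, averaged over the R, G, B channels. `coeffs` holds 27
    values; index order (Y_{1,-1}, Y_{1,0}, Y_{1,1}) -> (y, z, x) is an
    assumed convention."""
    c = np.asarray(coeffs, float).reshape(3, 9).mean(axis=0)
    d = np.array([c[3], c[1], c[2]])  # (x, y, z)
    n = np.linalg.norm(d)
    return d / n if n > 0 else d

def select_by_direction(database, preset_dir, k=5):
    """Pick the k parameter sets whose dominant direction is closest
    (by cosine similarity) to the preset lighting direction."""
    preset_dir = np.asarray(preset_dir, float)
    preset_dir = preset_dir / np.linalg.norm(preset_dir)
    scores = [float(dominant_direction(c) @ preset_dir) for c in database]
    order = np.argsort(scores)[::-1]
    return [database[i] for i in order[:k]]
```

Selection by preset intensity could rank by the ambient coefficient instead of the first-order ones.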
  • Further, as shown in FIG. 6, after S203 the method may further include S204: performing illumination estimation on the target face image according to the at least one face illumination image.
  • Specifically, the processor may perform illumination estimation on the target face image according to the at least one face illumination image. For example, when the at least one face illumination image includes multiple face illumination images, the processor may train a neural network on these images to obtain a training model for illumination estimation (a neural network model, or simply a model), and use the training model to perform illumination estimation on the target face image, thereby obtaining the illumination parameters of the target image.
  • As shown in FIG. 7, when the target face image includes multiple images with different lighting directions, the training model performs illumination estimation on each of them, and the resulting illumination parameters of each image can be represented by a visualization sphere.
  • FIG. 7 takes as an example a target face image comprising three images whose lighting directions are, in order, from above, from the upper right, and from the lower right.
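The visualization sphere of FIG. 7 can be reproduced by shading a synthetic sphere's normal map with the estimated coefficients via formula (1). A sketch, again using the assumed standard basis constants and illustrative function names:

```python
import numpy as np

def sphere_normal_map(size=65):
    """Normal map of a front-facing unit sphere; pixels outside the disk
    get a zero normal and therefore render black."""
    t = np.linspace(-1, 1, size)
    x, y = np.meshgrid(t, -t)                 # image y axis points down
    r2 = x**2 + y**2
    z = np.sqrt(np.clip(1 - r2, 0, None))
    n = np.stack([x, y, z], axis=-1)
    n[r2 > 1] = 0
    return n

def visualize_lighting(L, size=65):
    """Shade the sphere with one channel's 9 SH coefficients (formula (1))
    to obtain the visualization sphere described in the text."""
    n = sphere_normal_map(size)
    x, y, z = n[..., 0], n[..., 1], n[..., 2]
    Y = np.stack([0.282095 * np.ones_like(x), 0.488603 * y, 0.488603 * z,
                  0.488603 * x, 1.092548 * x * y, 1.092548 * y * z,
                  0.315392 * (3 * z**2 - 1), 1.092548 * x * z,
                  0.546274 * (x**2 - y**2)], axis=-1)
    return np.clip(Y @ np.asarray(L, float), 0, None)
```

A set of coefficients dominated by the Y_{1,-1} (y) term, for example, produces a sphere lit from above.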
  • It can be understood that the process of training with multiple images and the process of using the trained model to perform illumination estimation on the target face image may be two separate processes; this embodiment does not limit this.
  • For how to train and generate a neural network model, or how to use a neural network model to estimate or predict from data, reference may be made to descriptions in the prior art; this embodiment does not elaborate on it.
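The text leaves the network architecture open, so as a stand-in for the neural network model, the sketch below fits a plain least-squares regressor from flattened face illumination images to their 27 coefficients; a real implementation would substitute a CNN trained on the generated dataset. All names here are illustrative.

```python
import numpy as np

def fit_illumination_regressor(images, params):
    """Least-squares regressor from flattened face illumination images to
    their 27 SH coefficients -- a deliberately simple stand-in for the
    trained neural network model. `images`: (N, H, W, 3), `params`: (N, 27)."""
    X = images.reshape(len(images), -1)
    X = np.hstack([X, np.ones((len(X), 1))])   # bias column
    W, *_ = np.linalg.lstsq(X, params, rcond=None)
    return W                                    # (H*W*3 + 1, 27)

def estimate_illumination(W, image):
    """Apply the fitted regressor to one target face image, returning its
    estimated 27 illumination parameters."""
    x = np.append(image.ravel(), 1.0)
    return x @ W
```

Because every generated face illumination image carries its parameter set as a label, such supervised fitting needs no manual annotation.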
  • In this embodiment of the present application, the image processing device may capture a real face image, decompose the face region of the face image into an albedo map and a normal vector map, render the normal vector map with each of at least one set of illumination parameters, and generate at least one face illumination image from the at least one illumination map obtained by rendering and the albedo map. A large number of face illumination images carrying illumination parameters can thus be obtained from a single face image, reducing the difficulty and cost of acquiring face illumination images.
  • In addition, since the face image is captured from a real face, the face illumination images obtained from it have good practicability, and performing face illumination estimation based on these images can further improve the accuracy of the estimation.
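The overall data-generation loop summarized above can be sketched as follows. The renderer is passed in as a callback implementing formulas (1) and (2), and the uniform random sampling of parameter sets without replacement is an assumed policy, not one the text prescribes.

```python
import numpy as np

def generate_dataset(albedo, normals, param_db, render, n=100, seed=0):
    """One captured face yields up to n labelled samples: render the face's
    normal vector map under each selected parameter set and pair the result
    with its label. `render(albedo, normals, L) -> image` is any renderer
    implementing formulas (1)-(2)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(param_db), size=min(n, len(param_db)), replace=False)
    return [(render(albedo, normals, param_db[i]), param_db[i]) for i in idx]
```

The returned (image, parameters) pairs are exactly the labelled training data the illumination estimation step consumes.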
  • It can be understood that, in order to implement the above functions, the image processing device includes corresponding hardware structures and/or software modules for executing each function.
  • Those skilled in the art should readily appreciate that, with the structures and algorithm steps of the examples described in conjunction with the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functionality for each particular application, but such implementations should not be considered beyond the scope of this application.
  • Functional modules of the face illumination image generating apparatus may be divided according to the above method example: each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of modules in the embodiments of the present application is schematic and is only a logical functional division; other divisions are possible in actual implementation.
  • FIG. 8 shows a possible schematic structural diagram of the apparatus for generating a face illumination image involved in the above embodiments; the apparatus may be an image processing device or a chip built into an image processing device.
  • the apparatus includes: a camera unit 301 , a preprocessing unit 302 and a light migration unit 303 ; optionally, the apparatus further includes: a storage unit 304 and/or a light estimation unit 305 .
  • The camera unit 301 is configured to support the apparatus in executing S201 of the method embodiment; the preprocessing unit 302 is configured to support the apparatus in executing S202; and the illumination migration unit 303 is configured to support the apparatus in executing S203, and/or other signal processing procedures of the techniques described herein.
  • Further, the storage unit 304 may be configured to store the lighting information database and to select at least one set of lighting parameters from it; the lighting estimation unit 305 is configured to support the apparatus in executing S204 of the method embodiment.
  • Optionally, the preprocessing unit 302 may include a face extraction unit 3021 for cropping the face region, and an intrinsic decomposition unit 3022 for intrinsic decomposition processing. For details, refer to the description of the above embodiments.
  • In one implementation, any of the above units, such as the preprocessing unit 302, the illumination migration unit 303, and the illumination estimation unit 305, may be implemented in software: the software programs corresponding to these units are stored in the memory, and the processor implements the function of each unit by running them.
  • In another implementation, the preprocessing unit 302, illumination migration unit 303, and illumination estimation unit 305 may be implemented in hardware: these units may be hardware circuits or accelerators included in the processor, or may directly replace the processor; each of them may be implemented by a hardware circuit or accelerator comprising at least one of an electronic circuit, a digital circuit, a logic circuit, or an analog circuit.
  • A device for generating a face illumination image in an embodiment of the present application is described above from the perspective of modular functional entities; it is described below from the perspective of hardware entities.
  • The above camera unit 301 may correspond to a camera, or a camera's circuit interface, in the hardware entity; the preprocessing unit 302, illumination migration unit 303, and illumination estimation unit 305 may correspond to the processor in the hardware entity; and the storage unit 304 may correspond to the memory in the hardware entity. Any of the above units may therefore be a component of a circuit or a software program running on a circuit.
  • An embodiment of the present application further provides a device for generating a face illumination image, whose structure may be as shown in FIG. 1.
  • In this embodiment, the camera can be used to capture a face image, and the processor 102 is configured to perform the functions of S201 to S204 of the above method for generating a face illumination image: for example, the processor 102 processes the face image, renders the normal vector map using at least one set of illumination parameters to obtain at least one illumination map, generates at least one face illumination image from the at least one illumination map and the albedo map, and performs illumination estimation on the target face image using the at least one face illumination image.
  • In some feasible embodiments, the above information output by the input/output interface 105 can be sent to the memory 101 for storage; for example, the above face image, albedo map, normal vector map, illumination map, face illumination image, and target face image can be sent to the memory 101.
  • the memory 101 can store the above-mentioned face image, albedo map, normal vector map, illumination map, face illumination image, target face image, and related instructions for configuring the processor, and the like.
  • the multimedia component 104 may include a camera, and the camera may be used to capture a face image and transmit the captured face image to the processor 102 .
  • An embodiment of the present application also provides an apparatus for generating a face illumination image, which may include a processor and a memory storing instructions; the processor runs the instructions in the memory to perform the following steps: receiving a face image, and performing the processor-related steps of the above face illumination image generation method, such as the functions of S201 to S204. For example, the processor processes the face image, renders the normal vector map using at least one set of illumination parameters to obtain at least one illumination map, and generates at least one face illumination image from the at least one illumination map and the albedo map.
  • An embodiment of the present application further provides an apparatus for generating a face illumination image, which may include a processor and an interface, the processor being configured to receive a face image through the interface and to perform the processor-related steps of the above face illumination image generation method, such as the functions of S201 to S204. For example, the processor processes the face image, renders the normal vector map using at least one set of illumination parameters to obtain at least one illumination map, and generates at least one face illumination image from the at least one illumination map and the albedo map.
  • Embodiments of the present application further provide a computer-readable storage medium storing instructions that, when run on a device (for example, a single-chip microcomputer, a chip, a computer, or a processor), cause the device to perform one or more of steps S201 to S204 of the above face illumination image generation method.
  • Based on such an understanding, the embodiments of the present application also provide a computer program product containing instructions. The technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product: the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor therein to execute all or part of the steps of the methods of the various embodiments of the present application.


Abstract

The present application provides a face illumination image generation device and method, relating to the field of image processing and intended to reduce the difficulty and cost of acquiring face illumination images. The device includes: a camera for capturing a face image; and a processor configured to: process the face image to obtain an albedo map and a normal vector map; render the normal vector map using at least one set of illumination parameters to obtain at least one illumination map, where each set of the at least one set of illumination parameters is used to render one of the at least one illumination map; and generate at least one face illumination image according to the at least one illumination map and the albedo map.

Description

一种人脸光照图像生成装置及方法 技术领域
本申请涉及图像处理领域,尤其涉及一种人脸光照图像生成装置及方法。
背景技术
随着计算机性能的提升和计算机视觉技术的发展,增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)在人们实际生活中的应用越来越多。为了提供更好的用户体验,提升AR/VR应用与真实场景的融合效果尤为重要,其中AR/VR物体和真实场景的光照一致性是评估融合效果的一项重要指标。人脸的几何形状具有显著的公共特征,常被用来作为光照估计的载体。目前,在光照估计时,通常需要大量的携带光照参数的人脸图像进行光照估计模型的训练,进而基于光照估计模型实现光照估计。但是,在实际应用过程中,带有光照参数的人脸图像却存在着采集难度大和成本高的问题。
发明内容
本申请提供一种人脸光照图像生成装置及方法,用于降低人脸光照图像的获取难度和成本。
为达到上述目的,本申请采用如下技术方案:
第一方面,提供一种人脸光照图像生成装置,该装置包括:摄像头,用于拍摄人脸图像并传输至处理器,比如,该人脸图像可以为RGB图像;处理器,用于:处理人脸图像,比如,使用本征分解算法分解人脸图像,以得到反照率图和法向量图;使用至少一组光照参数渲染法向量图(即在法向量图中表现出该组光照参数所对应的光照效果)以得到至少一张照射图,其中至少一组光照参数中的每组光照参数用于渲染至少一张照射图中的一张照射图;根据至少一张照射图和反照率图生成至少一张人脸光照图像(比如,将照射图和反照率图进行合并或融合以得到人脸光照图像),每张人脸光照图像的光照参数为人脸光照图像对应的照射图的光照参数。
上述技术方案中,通过拍摄真实的人脸图像,将该人脸图像分解为反照率图和法向量图,以及分别使用至少一组光照参数渲染法向量图,并根据渲染得到的至少一张照射图和反照率图对应生成至少一张人脸光照图像,从而通过一张人脸图像可以获取大量的带有光照参数的人脸光照图像,进而降低了人脸光照图像的获取难度和成本。此外,人脸图像是通过真实人脸拍摄得到的,这样可以使得基于人脸图像得到的人脸光照图像具有较好的实用性,进而根据人脸光照图像进行人脸光照估计时,可以进一步提高人脸估计的准确性。
在第一方面的一种可能的实现方式中,处理器还用于:裁剪人脸图像,比如,通过人脸检测和提取,以得到人脸图像中的人脸区域;使用本征分解算法分解人脸区域,以得到反照率图和法向量图。上述可能的实现方式,提供了一种简单有效地从人脸图像中获取反照率图和法向量图的方式。
在第一方面的一种可能的实现方式中,反照率图包括人脸图像中的人脸纹理信息, 反照率图可以是指人脸区域去除光照之后的图像,比如,该人脸纹理信息包括眼睛、眉毛、鼻子、耳朵和嘴巴等的纹理信息;法向量图包括人脸图像中的人脸几何形状信息,法向量图可以是指人脸区域的三维结构图,比如,该人脸几何形状信息包括眼睛、眉毛、鼻子、耳朵和嘴巴等的形状信息。
在第一方面的一种可能的实现方式中,处理器还用于:从光照信息数据库中的多组光照参数中选择至少一组光照参数,光照信息数据库包括多组光照参数;可选的,光照信息数据库包括:光照球谐数据库,或者环境贴图数据库;其中,光照球谐数据库中的多组光照参数包括多组光照球谐系数,每个光照球谐系数为一组光照参数,环境贴图数据库中的多组光照参数包括多张环境贴图,每张环境贴图为一组光照参数。上述可能的实现方式,可以提高光照参数的表现形式的多样性和灵活性。
在第一方面的一种可能的实现方式中,处理器还用于:根据预设光照方向或者预设光照强度从多组光照参数中选择至少一组光照参数。上述可能的实现方式,通过灵活地设置或选择预设光照方向或预设光照强度,可以得到满足实际需求的人脸光照图像,从而提高人脸光照图像的针对性。
在第一方面的一种可能的实现方式中,处理器还用于:根据至少一张人脸光照图像,训练用于对目标人脸图像进行光照估计的神经网络模型;比如,至少一张人脸光照图像包括多张人脸光照图像,处理器可以利用神经网络训练这多张人脸光照图像,以得到用于光照估计的训练模型,并利用该训练模型对目标人脸图像进行光照估计,以得到目标图像的光照参数。上述可能的实现方式中,人脸图像是通过真实人脸拍摄得到的,这样可以使得基于人脸图像得到的人脸光照图像具有较好的实用性,进而根据人脸光照图像进行人脸光照估计时,可以进一步提高人脸估计的准确性。
第二方面,提供一种人脸光照图像生成装置,该装置包括:摄像单元,用于拍摄人脸图像;预处理单元,用于处理人脸图像,以得到反照率图和法向量图;光照迁移单元,还用于分别使用至少一组光照参数渲染法向量图以得到至少一张照射图,其中至少一组光照参数中的每组光照参数用于渲染所述至少一张照射图中的一张照射图;根据至少一张照射图和反照率图,生成至少一张人脸光照图像,每张人脸光照图像的光照参数为人脸光照图像对应的照射图的光照参数。
在第二方面的一种可能的实现方式中,预处理单元包括:人脸提取单元,裁剪人脸图像,以得到人脸图像中的人脸区域;本征分解单元,用于使用本征分解算法分解人脸区域,以得到反照率图和法向量图。
在第二方面的一种可能的实现方式中,反照率图包括人脸图像中的人脸纹理信息,法向量图包括人脸图像中的人脸几何形状信息。
在第二方面的一种可能的实现方式中,该装置还包括:存储单元,用于从光照信息数据库中的多组光照参数中选择至少一组光照参数,光照信息数据库包括多组光照参数。
在第二方面的一种可能的实现方式中,存储单元还用于:根据预设光照方向或者预设光照强度从多组光照参数中选择至少一组光照参数。
在第二方面的一种可能的实现方式中,光照信息数据库包括:光照球谐数据库或环境贴图数据库;其中,光照球谐数据库中的多组光照参数包括多组光照球谐系数,每个光照球谐系数为一组光照参数,环境贴图数据库中的多组光照参数包括多张环境贴图,每张环境贴图为一组光照参数。
在第二方面的一种可能的实现方式中,该装置还包括:光照估计单元,用于根据至少一张人脸光照图像,训练用于对目标人脸图像进行光照估计的神经网络模型。
第三方面,提供一种人脸光照图像生成方法,该方法包括:拍摄人脸图像;处理人脸图像,以得到反照率图和法向量图;分别使用至少一组光照参数渲染法向量图,并根据渲染得到的至少一张照射图和反照率图,对应生成至少一张人脸光照图像,每张人脸光照图像的光照参数为人脸光照图像对应的照射图的光照参数。
在第三方面的一种可能的实现方式中,处理人脸图像,以得到反照率图和法向量图,包括:裁剪人脸图像,以得到人脸图像中的人脸区域;使用本征分解算法分解人脸区域,以得到反照率图和法向量图。
在第三方面的一种可能的实现方式中,反照率图包括人脸图像中的人脸纹理信息,法向量图包括人脸图像中的人脸几何形状信息。
在第三方面的一种可能的实现方式中,分别使用至少一组光照参数渲染法向量图之前,该方法还包括:从光照信息数据库中选择至少一组光照参数,光照信息数据库包括多组光照参数。
在第三方面的一种可能的实现方式中,从光照信息数据库中选择至少一组光照参数,包括:根据预设光照方向或者预设光照强度从多组光照参数中选择至少一组光照参数。
在第三方面的一种可能的实现方式中,光照信息数据库为以下中的一种:光照球谐数据库,环境贴图数据库;其中,光照球谐数据库中的多组光照参数包括多组光照球谐系数,每个光照球谐系数为一组光照参数,环境贴图数据库中的多组光照参数包括多张环境贴图,每张环境贴图为一组光照参数。
在第三方面的一种可能的实现方式中,该方法还包括:根据至少一张人脸光照图像,训练用于对目标人脸图像进行光照估计的神经网络模型。
第四方面,提供一种人脸光照图像生成装置,该装置包括:处理器和存储器,存储器中存储有指令,处理器运行存储器中的指令以执行如下操作:接收人脸图像;处理人脸图像,以得到反照率图和法向量图;使用至少一组光照参数渲染法向量图以得到至少一张照射图,其中至少一组光照参数中的每组光照参数用于渲染至少一张照射图中的一张照射图;根据至少一张照射图和反照率图,生成至少一张人脸光照图像。
在第四方面的一种可能的实现方式中,处理器还用于执行以下步骤:裁剪人脸图像,以得到人脸图像中的人脸区域;使用本征分解算法分解人脸区域,以得到反照率图和法向量图。
在第四方面的一种可能的实现方式中,反照率图包括人脸图像中的人脸纹理信息,法向量图包括人脸图像中的人脸几何形状信息。
在第四方面的一种可能的实现方式中,处理器还执行以下操作:从光照信息数据库中的多组光照参数中选择至少一组光照参数。
在第四方面的一种可能的实现方式中,处理器还执行以下操作:根据预设光照方向或者预设光照强度从多组光照参数中选择至少一组光照参数。
在第四方面的一种可能的实现方式中,光照信息数据库包括:光照球谐数据库,或者环境贴图数据库;其中,光照球谐数据库中的多组光照参数包括多组光照球谐系数,每个光照球谐系数为一组光照参数,环境贴图数据库中的多组光照参数包括多张环境贴图,每张环境贴图为一组光照参数。
在第四方面的一种可能的实现方式中,处理器还执行以下操作:根据至少一张人脸光照图像,训练用于对目标人脸图像进行光照估计的神经网络模型。
第五方面,提供一种人脸光照图像生成装置,该装置包括处理器和接口,其中,处理器被配置为通过所述接口接收人脸图像,并执行处理如下操作:处理人脸图像,以得到反照率图和法向量图;使用至少一组光照参数渲染法向量图以得到至少一张照射图,其中至少一组光照参数中的每组光照参数用于渲染至少一张照射图中的一张照射图;根据至少一张照射图和反照率图,生成至少一张人脸光照图像。
在第五方面的一种可能的实现方式中,处理器还执行以下操作:裁剪人脸图像,以得到人脸图像中的人脸区域;使用本征分解算法分解人脸区域,以得到反照率图和法向量图。
在第五方面的一种可能的实现方式中,反照率图包括人脸图像中的人脸纹理信息,法向量图包括人脸图像中的人脸几何形状信息。
在第五方面的一种可能的实现方式中,处理器还执行以下操作:从光照信息数据库中的多组光照参数中选择至少一组光照参数。
在第五方面的一种可能的实现方式中,处理器还执行以下操作:根据预设光照方向或者预设光照强度从多组光照参数中选择至少一组光照参数。
在第五方面的一种可能的实现方式中,光照信息数据库包括:光照球谐数据库,或者环境贴图数据库;其中,光照球谐数据库中的多组光照参数包括多组光照球谐系数,每个光照球谐系数为一组光照参数,环境贴图数据库中的多组光照参数包括多张环境贴图,每张环境贴图为一组光照参数。
在第五方面的一种可能的实现方式中,处理器还执行以下操作:根据至少一张人脸光照图像,训练用于对目标人脸图像进行光照估计的神经网络模型。
本申请的又一方面,提供一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当其在计算机上运行时,使得所述计算机执行上述第三方面或第三方面的任一种可能的实现方式所提供的方法。
本申请的又一方面,提供一种计算机程序产品,当计算机程序产品在计算机上运行时,使得该计算机执行上述第三方面或第三方面的任一种可能的实现方式所提供的方法。
附图说明
图1为本申请实施例提供的一种图像处理设备的结构示意图;
图2为本申请实施例提供的一种人脸光照图像生成方法的流程示意图;
图3为本申请实施例提供的一种人脸区域裁剪的示意图;
图4为本申请实施例提供的一种本征分解人脸区域的示意图;
图5为本申请实施例提供的一种至少一张人脸光照图像的示意图;
图6为本申请实施例提供的另一种人脸光照图像生成方法的流程示意图;
图7为本申请实施例提供的一种目标人脸图像的光照估计的示意图;
图8为本申请实施例提供的一种人脸光照图像生成装置的结构示意图。
具体实施方式
本申请中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。另外,在本申请的实施例中,“第一”、“第二”等字样并不对数量和执行次序进行限定。
图1为本申请实施例提供的一种图像处理设备的结构示意图,该图像处理设备可以为手机、平板电脑、摄像机、照相机、计算机、可穿戴设备、车载设备或便携式设备等。为方便描述,本申请中将上面提到的设备或者内置芯片系统的上述设备统称为图像处理设备。本申请实施例以该图像处理设备为手机为例进行说明,该手机或者内置于手机的芯片系统包括:存储器101、处理器102、传感器组件103、多媒体组件104以及输入/输出接口105。下面结合图1对手机或者内置于手机的芯片系统的各个构成部件进行具体的介绍。
存储器101可用于存储数据、软件程序以及模块;主要包括存储程序区和存储数据区,其中,存储程序区可存储软件程序,包括以代码形成的指令,包括但不限于操作系统、至少一个功能所需的应用程序,比如声音播放功能、图像播放功能等;存储数据区可存储根据手机的使用所创建的数据,比如音频数据、图像数据、电话本等。在本申请实施例中,存储器101可用于存储人脸图像、光照信息数据库和待评估图像等。在一些可行的实施例中,可以有一个存储器,也可以有多个存储器;该存储器可以包括软盘,硬盘如内置硬盘和移动硬盘,磁盘,光盘,磁光盘如CD_ROM、DCD_ROM,非易失性存储设备如RAM、ROM、PROM、EPROM、EEPROM、闪存、或者技术领域内所公知的任意其他形式的存储介质。
处理器102是手机的控制中心,利用各种接口和线路连接整个设备的各个部分,通过运行或执行存储在存储器101内的软件程序和/或软件模块,以及调用存储在存储器101内的数据,执行手机的各种功能和处理数据,从而对手机进行整体监控。在本申请实施例中,处理器102可用于执行本申请方法实施例中的一个或者多个步骤,比如,处理器102可用于执行下述方法实施例中的S202至S204中的一个或者多个步骤。在一些可行的实施例中,处理器102可以是单处理器结构、多处理器结构、单线程处理器以及多线程处理器等;在一些可行的实施例中,处理器102可以包括中央处理器单元、通用处理器、数字信号处理器、神经网络处理器、图像处理单元、图像信号处理器、微控制器或微处理器等的至少一个。除此以外,处理器102还可进一步包括其他硬件电路或加速器,如专用集成电路、现场可编程门阵列或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。处理器102也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,数字信号处理器和微处理器的组合等等。
传感器组件103包括一个或多个传感器,用于为手机提供各个方面的状态评估。其中,传感器组件103可以包括光传感器,如CMOS或CCD图像传感器,用于在成 像应用中使用,即成为相机或摄像头的组成部分。在本申请实施例中,传感器组件103可用于支持多媒体组件104中的摄像头获取人脸图像等。此外,传感器组件103还可以包括加速度传感器,陀螺仪传感器,磁传感器,压力传感器或温度传感器,通过传感器组件103可以检测到手机的加速/减速、方位、打开/关闭状态,组件的相对定位,或手机的温度变化等。
多媒体组件104在手机和用户之间提供一个输出接口的屏幕,该屏幕可以为触摸面板,且当该屏幕为触摸面板时,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。此外,多媒体组件104还包括至少一个摄像头,比如,多媒体组件104包括一个前置摄像头和/或后置摄像头。当手机处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以感应外部的多媒体信号,该信号被用于形成图像帧。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统或具有焦距和光学变焦能力。在本申请实施例中,多媒体组件104中的摄像头可用于支持获取人脸图像等。
输入/输出接口105为处理器102和外围接口模块之间提供接口,比如,外围接口模块可以包括键盘、鼠标、或USB(通用串行总线)设备等。在本申请实施例中,输入接口可用于获取待评估图像和人脸图像等;输出接口可用于待评估图像的光照参数等。在一种可能的实现方式中,输入/输出接口105可以只有一个输入/输出接口,也可以有多个输入/输出接口。
尽管未示出,手机还可以包括音频组件和通信组件等,比如,音频组件包括麦克风,通信组件包括无线保真(wireless fidelity,WiFi)模块、蓝牙模块等,本申请实施例在此不再赘述。本领域技术人员可以理解,图1中示出的手机结构并不构成对手机的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
图2为本申请实施例提供的一种人脸光照图像生成方法的流程示意图,该方法可应用于包括摄像头和处理器的图像处理设备中,比如,该图像处理设备可以为图1所示的图像处理设备,参见图2,该方法包括以下几个步骤。
S201:拍摄人脸图像。其中,图像处理设备可以包括一个或者多个摄像头,这一个或者多个摄像头可以包括前置摄像头和后置摄像头,且这一个或者多个摄像头可以为单目摄像头或者双目摄像头。这一个或者多个摄像头中的任一摄像头均可以用于拍摄人脸图像,并可以将拍摄得到的人脸图像传输给该图像处理设备中的处理器,该人脸图像可以是包括红(red,R)、绿(green,G)和蓝(blue,B)三个通道的RGB图像或者BGR图像,也可以是YUV(Y表示亮度、U和V表示色差)等其他格式的图像。该人脸图像可以是人脸的正面图像或者侧面图像等。
S202:处理人脸图像,以得到反照率图和法向量图。其中,反照率图和法向量图可以与该人脸图像中人脸区域对应,该人脸区域可以是人脸图像中的部分区域或者全部区域,当人脸区域是人脸图像中的部分区域时,该人脸区域至少包括人脸图像中的人脸部分。反照率(albedo)图可以是指人脸区域去除光照之后的图像,即反照率图包括人脸区域的人脸纹理信息,比如,该人脸纹理信息包括眼睛、眉毛、鼻子、耳朵和嘴巴等的纹理信息。法向量(normal)图可以是指人脸区域的三维结构图,即法向量图包括人脸区域的人脸几何形状信息,比如,该人脸几何形状信息包括眼睛、眉毛、鼻子、耳朵和嘴巴等的形状信息。
具体的,当该人脸区域是人脸图像中的部分区域时,该图像处理设备中的处理器在接收到该人脸图像时,可以裁剪该人脸图像,比如,如图3所示,处理器可以通过人脸框检测算法检测该人脸图像中的人脸区域并裁剪,以得到该人脸区域,这里的人脸框检测算法可以检测出人脸区域的位置。之后,处理器可以使用本征分解算法分解该人脸区域,以得到反照率图和法向量图,比如,如图4所示,该本征分解算法可以包括一个编码器(encoder)、两个特征提取模块(f A和f N)和两个解码器(decoder),编码器用于提取该人脸区域浅层的公共特征(即反照率图和法向量图都包括的特征),特征提取模块f A用于提取反照率图包括的特征,特征提取模块f N用于提取法向量图包括的特征,两个特征提取模块(f A和f N)提取的特征分别经过一个解码器解码后,即可得到反照率图和法向量图。
当该人脸区域是人脸图像的全部区域时,处理器在接收到该人脸图像时,可直接分解该人脸图像,以得到反照率图和法向量图。其中,分解该人脸图像的具体过程与上述分解人脸区域的过程一致,具体参见上述相关描述,本申请实施例在此不再赘述。
需要说明的是,关于上述人脸框检测算法和本征分解算法的详细描述可以参考相关现有技术中的描述,本申请实施例对此不作详细阐述。另外,本申请实施例中的反照率图可用于表示图像的反照率,法向量图可用于表示图像的法向量,即将人脸图像的反照率和法向量分别通过图像的形式来呈现。
S203:使用至少一组光照参数渲染法向量图以得到至少一张照射图(shading image),并根据至少一张照射图和反照率图,生成至少一张人脸光照图像。其中,至少一组光照参数中的每组光照参数用于渲染至少一张照射图中的一张照射图,每张人脸光照图像的光照参数为该人脸光照图像对应的照射图的光照参数。
其中,至少一组光照参数可以包括一组或者多组光照参数,每组光照参数可以包括多个光照参数,在实际应用中,每组光照参数具体可以通过光照球谐系数或者环境贴图的形式来表征。比如,一组光照参数具体为一组光照球谐系数,该组光照球谐系数包括27个球谐系数,其中R、G、B三个通道中每个通道均对应9个球谐系数。
具体的,对于至少一组光照参数中的每组光照参数,该图像处理设备中的处理器可以使用该组光照参数渲染法向量图,即在法向量图中表现出该组光照参数所对应的光照效果,以得到一张照射图,该张照射图的光照参数即为该组光照参数。之后,根据该张照射图和该反照率图对应生成一张人脸光照图像,即将该张照射图和该反照率图进行合并以得到人脸光照图像,该张人脸光照图像的光照参数即为该张照射图的光照参数。因此,上述至少一组光照参数可以对应生成至少一张人脸光照图像。
比如,假设该人脸图像为RGB图像,该组光照参数为一组二阶的光照球谐系数,处理器可以采用基于朗伯假设的光照方程(如下公式(1)所示),使用该组光照球谐系数渲染法向量图,以得到照射图;之后,根据如下公式(2)将该张照射图和该反照率图合并以得到人脸光照图像。式中,Shading(R/G/B)表示照射图中任一通道(比如, 表示照射图的R通道、G通道或者B通道)的图像,L lm表示该通道对应的二阶的光照球谐系数,Y lm表示二阶球谐基(该二阶球谐基是根据法向量图计算得到的),l和m的取值为整数,Image表示人脸光照图像,Albedo表示反照率图,Shading(R,G,B)表示三个通道(即R、G、B)构成的照射图。
Shading(R/G/B)=∑ lmL lmY lm   (1)
Image=Shading(R,G,B)*Albedo   (2)
比如,上述Y lm包括Y 0,0、Y 1,0、Y 0,-1、Y 0,1、Y 2,0、Y 2,-1、Y 2,1、Y 2,-2和Y 2,2,具体取值如下所示。这里Y lm中的x、y、z分别表示法向量图中每个像素的R、G、B的值。需要说明的是,上述Y lm、Shading、Image和Albedo均用于表示矩阵,且矩阵的尺寸与人脸图像的尺寸一致。
(公式图appb-000001、appb-000002给出Y lm各基函数的具体取值。)
示例性的,假设至少一组光照参数包括多组光照球谐系数、法向量图和反照率图如图4所示,则根据上述公式(1)和公式(2)对应生成的至少一张人脸光照图像可如图5所示。
可选的,在使用至少一组光照参数渲染法向量图之前,该方法还可以包括:从光照信息数据库中选择至少一组光照参数,该光照信息数据库包括多组光照参数。
其中,该光照信息数据库可以存储在图像处理设备,具体如存储器中,且该光照信息数据库包括大量的组光照参数。比如,该光照信息数据库可以为该光照球谐数据库,该光照球谐数据库可以包括数十万组的光照球谐系数,每组光照球谐系数对应一组光照参数;在实际应用中,该光照球谐数据库可以是二阶的,也可以是三阶的,本申请实施例对此不作具体限制。或者,该光照信息数据库还可以为环境贴图(environment map)数据库,该环境贴图数据库包括大量的环境贴图,每张环境贴图可以对应一组光照参数。可选的,该光照信息数据库可以是动态变化的,比如,该图像处理设备中的处理器可以周期性地或者非周期地更新该光照信息数据库,以保证该光照信息数据库的新鲜性。需要说明的是,根据环境贴图渲染法向量图得到照射图的相关过程可以参见相关现有技术中的描述,本申请实施例对此不作详细阐述。
具体的,该图像处理设备中的处理器从光照信息数据库中选择至少一组光照参数时,可以随机选择至少一组光照参数,或者根据预设光照方向选择至少一组光照参数,或者根据预设光照强度选择至少一组光照参数。其中,预设光照方向和预设光照强度可以是事先设置的,且对于不同的人脸图像可以设置相同或不同的预设光照方向和预设光照强度。
进一步的,如图6所示,在S203之后,该方法还可以包括:S204:根据至少一张人脸光照图像,对目标人脸图像进行光照估计。具体的,当得到至少一张人脸光照图像时,处理器可以根据这至少一张人脸光照图像,对目标人脸图像进行光照估计,比如,至少一张人脸光照图像包括多张人脸光照图像,处理器可以利用神经网络训练这多张人脸光照图像,以得到用于光照估计的训练模型,即神经网络模型,也简称为模型,并利用该训练模型对目标人脸图像进行光照估计,以得到目标图像的光照参数。比如,如图7所示,当目标人脸图像包括不同光照方向的多张图像时,利用该训练模型对这多张图像进行光照估计,得到的每张图像的光照参数可以通过可视化球体来表示。图7中以目标人脸图像包括3张图像,且这3张图像的光照方向依次为上方、右上方和右下方为例进行说明。可以理解,利用多个图像训练模型的过程和利用训练后的模型对目标人脸图像进行光照估计的过程可以是分离的两个过程,本实施例对此不限定。关于如何训练并生成一个神经网络模型或如何使用一个神经网络模型对数据进行估计或预测,可参照现有技术中的描述,本实施例不做详细展开。
在本申请实施例中,该图像处理设备可以拍摄真实的人脸图像,将该人脸图像中的人脸区域分解为反照率图和法向量图,以及分别使用至少一组光照参数渲染法向量图,并根据渲染得到的至少一张照射图和反照率图对应生成至少一张人脸光照图像,从而通过一张人脸图像可以获取大量的带有光照参数的人脸光照图像,进而降低了人脸光照图像的获取难度和成本。此外,人脸图像是通过真实人脸拍摄得到的,这样可以使得基于人脸图像得到的人脸光照图像具有较好的实用性,进而根据人脸光照图像进行人脸光照估计时,可以进一步提高人脸估计的准确性。
上述主要从图像处理设备的角度对本申请实施例提供的图像处理方法进行了介绍。可以理解的是,该图像处理设备为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的结构及算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例可以根据上述方法示例对应的人脸光照图像生成装置进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用对应各个功能划分各个功能模块的情况下,图8示出了上述实施例中所涉及的人脸光照图像生成装置的一种可能的结构示意图,该装置可以为图像处理设备或者图像处理设备内置的芯片。该装置包括:摄像单元301、预处理单元302和光照迁移单元303;可选的,该装置还包括:存储单元304和/或光照估计单元305。其中,摄像单元301用于支持该装置执行方法实施例中的S201;预处理单元302用于支持该装置执行方法实施例中的S202;光照迁移单元303用于执行该装置执行方法实施例中的S203,和/或用于本文所描述的技术的其他信号处理过程。进一步的,存储单元304可用于存储光照信息数据库,以及从光照信息数据库中选择至少一组光照参数;光照估计单元305用于执行该装置执行方法实施例中的S204。可选的,预处理单元302可以包括用于裁剪人脸区域的人脸提取单元3021,以及用于本征分解处理的本征分解单元3022,具体可参照上述实施例的介绍。
在一种实现方案中,上述任一单元,例如预处理单元302、光照迁移单元303和 光照估计单元305可以通过软件形式实现,比如这三个单元所对应的软件程序包括在存储器中,处理器通过运行存储器中包括的软件程序以实现各单元对应的功能。在另一种实现方案中,上述预处理单元302、光照迁移单元303和光照估计单元305也可以通过硬件形式实现,比如这三个单元可以是处理器中包括的硬件电路或加速器或者被直接用于取代处理器,这三个单元中的每个具体可由硬件电路或加速器实现,可包括电子线路、数字电路、逻辑电路、或模拟电路中至少一种。
上面从模块化功能实体的角度对本申请实施例中的一种人脸光照图像生成装置进行描述,下面从硬件实体的角度对本申请实施例中的一种人脸光照图像生成装置进行描述。上述摄像单元301可以对应硬件实体中的摄像头或摄像头的电路接口,预处理单元302、光照迁移单元303和光照估计单元305可以对应硬件实体中的处理器,存储单元304可以对应硬件实体中的存储器。因此,以上任一单元可以是电路的一个组成部分或者也可以是运行于电路上的软件程序。
本申请实施例还提供的一种人脸光照图像生成装置,该装置的结构可以如图1所示。在本申请实施例中,摄像头可用于拍摄人脸图像,处理器102被配置为可处理上述人脸光照图像生成方法的S201至S204部分的功能,比如,处理器102用于处理人脸图像,使用至少一组光照参数渲染法向量图以得到至少一张照射图;根据至少一张照射图和反照率图,生成至少一张人脸光照图像等,以及利用至少一种人脸光照图像对目标人脸图像进行光照估计等。
在一些可行的实施例中,该输入/输出接口105输出的以上信息可以送到存储器101中存储,比如,将上述人脸图像、反照率图、法向量图、照射图、人脸光照图像和目标人脸图像等送到存储器101中。存储器101可存储上述人脸图像、反照率图、法向量图、照射图、人脸光照图像、目标人脸图像以及配置处理器的相关指令等。多媒体组件104中可以包括摄像头,摄像头可用于拍摄人脸图像,并将拍摄的人脸图像传输给处理器102。
本申请实施例还提供的一种人脸光照图像生成装置,该装置可以包括:处理器和存储器,该存储器中存储有指令,该处理器运行该存储器中的指令以执行如下步骤:接收人脸图像,执行上述人脸光照图像生成方法中处理器的相关步骤,比如执行S201至S204部分的功能,比如,处理器用于处理人脸图像,使用至少一组光照参数渲染法向量图,以得到至少一张照射图;根据至少一张照射图和反照率图,生成至少一张人脸光照图像等。
本申请实施例还提供的一种人脸光照图像生成装置,该装置可以包括:处理器和接口,其中,该处理器被配置为通过所述接口接收人脸图像,并执行处理如下操作:执行上述人脸光照图像生成方法中处理器的相关步骤,比如执行S201至S204部分的功能,比如,该处理器用于处理人脸图像,使用至少一组光照参数渲染法向量图,以得到至少一张照射图;根据至少一张照射图和反照率图,生成至少一张人脸光照图像等。
本申请实施例提供的上述人脸光照图像生成装置的各组成部分分别用于实现相对应的前述人脸光照图像生成方法的各步骤的功能,由于在前述的人脸光照图像生成方法实施例中,已经对各步骤进行了详细说明,在此不再赘述。
本申请实施例还提供了一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当其在一个设备(比如,该设备可以是单片机,芯片、计算机或处理器等)上运行时,使得该设备执行上述人脸光照图像生成方法的S201-S204中的一个或多个步骤。上述装置的各组成模块如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在所述计算机可读取存储介质中。
基于这样的理解,本申请实施例还提供一种包含指令的计算机程序产品,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或其中的处理器执行本申请各个实施例所述方法的全部或部分步骤。
最后应说明的是:以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (21)

  1. 一种人脸光照图像生成装置,其特征在于,所述装置包括:
    摄像头,用于拍摄人脸图像;
    处理器,用于:
    处理所述人脸图像,以得到反照率图和法向量图;
    使用至少一组光照参数渲染所述法向量图以得到至少一张照射图,其中所述至少一组光照参数中的每组光照参数用于渲染所述至少一张照射图中的一张照射图;
    根据所述至少一张照射图和所述反照率图,生成至少一张人脸光照图像。
  2. 根据权利要求1所述的装置,其特征在于,所述处理器还用于:
    裁剪所述人脸图像,以得到所述人脸图像中的人脸区域;
    使用本征分解算法分解所述人脸区域,以得到反照率图和法向量图。
  3. 根据权利要求1或2所述的装置,其特征在于,所述反照率图包括所述人脸图像中的人脸纹理信息,所述法向量图包括所述人脸图像中的人脸几何形状信息。
  4. 根据权利要求1-3任一项所述的装置,其特征在于,所述处理器还用于:
    从光照信息数据库中的多组光照参数中选择所述至少一组光照参数。
  5. 根据权利要求4所述的装置,其特征在于,所述处理器还用于:
    根据预设光照方向或者预设光照强度从所述多组光照参数中选择所述至少一组光照参数。
  6. 根据权利要求4或5所述的装置,其特征在于,所述光照信息数据库包括:光照球谐数据库或环境贴图数据库;
    其中,所述光照球谐数据库中的所述多组光照参数包括多组光照球谐系数,所述环境贴图数据库中的所述多组光照参数包括多张环境贴图。
  7. 根据权利要求1-6任一项所述的装置,其特征在于,所述处理器还用于:
    根据所述至少一张人脸光照图像,训练用于对目标人脸图像进行光照估计的神经网络模型。
  8. 一种人脸光照图像生成装置,其特征在于,所述装置包括:
    摄像单元,用于拍摄人脸图像;
    预处理单元,用于处理所述人脸图像,以得到反照率图和法向量图;
    光照迁移单元,用于使用至少一组光照参数渲染所述法向量图以得到至少一张照射图,其中所述至少一组光照参数中的每组光照参数用于渲染所述至少一张照射图中的一张照射图;
    根据至少一张照射图和所述反照率图,生成至少一张人脸光照图像。
  9. 根据权利要求8所述的装置,其特征在于,所述预处理单元包括:
    人脸提取单元,裁剪所述人脸图像,以得到所述人脸图像中的人脸区域;
    本征分解单元,用于使用本征分解算法分解所述人脸区域,以得到反照率图和法向量图。
  10. 根据权利要求8或9所述的装置,其特征在于,所述反照率图包括所述人脸图像中的人脸纹理信息,所述法向量图包括所述人脸图像中的人脸几何形状信息。
  11. 根据权利要求8-10任一项所述的装置,其特征在于,所述装置还包括:
    存储单元,用于从光照信息数据库中的多组光照参数中选择所述至少一组光照参数,所述光照信息数据库包括多组光照参数。
  12. 根据权利要求11所述的装置,其特征在于,所述存储单元还用于:
    根据预设光照方向或者预设光照强度从所述多组光照参数中选择所述至少一组光照参数。
  13. 根据权利要求11或12所述的装置,其特征在于,所述光照信息数据库包括:光照球谐数据库或环境贴图数据库;
    其中,所述光照球谐数据库中的所述多组光照参数包括多组光照球谐系数,所述环境贴图数据库中的所述多组光照参数包括多张环境贴图。
  14. 根据权利要求8-13任一项所述的装置,其特征在于,所述装置还包括:
    光照估计单元,用于根据所述至少一张人脸光照图像,训练用于对目标人脸图像进行光照估计的神经网络模型。
  15. 一种人脸光照图像生成方法,其特征在于,所述方法包括:
    拍摄人脸图像;
    处理所述人脸图像,以得到反照率图和法向量图;
    使用至少一组光照参数渲染所述法向量图以得到至少一张照射图,其中所述至少一组光照参数中的每组光照参数用于渲染所述至少一张照射图中的一张照射图;
    根据至少一张照射图和所述反照率图,生成至少一张人脸光照图像。
  16. 根据权利要求15所述的方法,其特征在于,所述处理所述人脸图像,以得到反照率图和法向量图,包括:
    裁剪所述人脸图像,以得到所述人脸图像中的人脸区域;
    使用本征分解算法分解所述人脸区域,以得到反照率图和法向量图。
  17. 根据权利要求15或16所述的方法,其特征在于,所述反照率图包括所述人脸图像中的人脸纹理信息,所述法向量图包括所述人脸图像中的人脸几何形状信息。
  18. 根据权利要求15-17任一项所述的方法,其特征在于,所述使用至少一组光照参数渲染所述法向量图以得到至少一张照射图之前,所述方法还包括:
    从光照信息数据库中的多组光照参数中选择所述至少一组光照参数,所述光照信息数据库包括多组光照参数。
  19. 根据权利要求18所述的方法,其特征在于,所述从光照信息数据库中选择所述至少一组光照参数,包括:
    根据预设光照方向或者预设光照强度从所述多组光照参数中选择所述至少一组光照参数。
  20. 根据权利要求18或19所述的方法,其特征在于,所述光照信息数据库包括:光照球谐数据库或环境贴图数据库;
    其中,所述光照球谐数据库中的所述多组光照参数包括多组光照球谐系数,所述环境贴图数据库中的所述多组光照参数包括多张环境贴图。
  21. 根据权利要求15-20任一项所述的方法,其特征在于,所述方法还包括:
    根据所述至少一张人脸光照图像,训练用于对目标人脸图像进行光照估计的神经网络模型。
PCT/CN2020/102222 2020-07-15 2020-07-15 一种人脸光照图像生成装置及方法 WO2022011621A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/102222 WO2022011621A1 (zh) 2020-07-15 2020-07-15 一种人脸光照图像生成装置及方法
CN202080005608.7A CN114207669A (zh) 2020-07-15 2020-07-15 一种人脸光照图像生成装置及方法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/102222 WO2022011621A1 (zh) 2020-07-15 2020-07-15 一种人脸光照图像生成装置及方法

Publications (1)

Publication Number Publication Date
WO2022011621A1 (zh)

Family

ID=79555964

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/102222 WO2022011621A1 (zh) 2020-07-15 2020-07-15 一种人脸光照图像生成装置及方法

Country Status (2)

Country Link
CN (1) CN114207669A (zh)
WO (1) WO2022011621A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117649478B (zh) * 2024-01-29 2024-05-17 荣耀终端有限公司 模型训练方法、图像处理方法及电子设备

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106446768A (zh) * 2015-08-10 2017-02-22 Method and apparatus for face recognition
CN107506714A (zh) * 2017-08-16 2017-12-22 Method for relighting a face image
WO2018102700A1 (en) * 2016-12-01 2018-06-07 Pinscreen, Inc. Photorealistic facial texture inference using deep neural networks
CN108805970A (zh) * 2018-05-03 2018-11-13 Illumination estimation method and apparatus
CN109410309A (zh) * 2018-09-30 2019-03-01 Relighting method and apparatus, electronic device, and computer storage medium
CN109427080A (zh) * 2017-08-31 2019-03-05 Method for rapidly generating a large number of face images under complex light sources
CN110428491A (zh) * 2019-06-24 2019-11-08 Three-dimensional face reconstruction method, apparatus, device, and medium based on a single-frame image


Also Published As

Publication number Publication date
CN114207669A (zh) 2022-03-18

Similar Documents

Publication Publication Date Title
JP7408678B2 (ja) Image processing method and head-mounted display device
CN108594997B (zh) Gesture skeleton construction method, apparatus, device, and storage medium
CN113205568B (zh) Image processing method and apparatus, electronic device, and storage medium
CN110544272A (zh) Face tracking method and apparatus, computer device, and storage medium
CN112287852B (zh) Face image processing method, display method, apparatus, and device
CN112257552B (zh) Image processing method, apparatus, device, and storage medium
CN110335224B (zh) Image processing method and apparatus, computer device, and storage medium
US20240221296A1 Inferred Shading
EP3571670B1 (en) Mixed reality object rendering
CN108701355 (zh) GPU optimization and online single-Gaussian-based skin likelihood estimation
CN114445562A (zh) Three-dimensional reconstruction method and apparatus, electronic device, and storage medium
CN112562056A (zh) Method, apparatus, medium, and device for controlling virtual lights in a virtual studio
CN212112404U (zh) Head-mounted device and system for sharing data
CN110488489B (zh) Eye registration for a head-mounted enclosure
WO2022011621A1 (zh) Face illumination image generation device and method
CN109726613A (zh) Method and apparatus for detection
CN115908120B (zh) Image processing method and electronic device
CN117095096A (zh) Personalized human digital twin model creation method, online driving method, and apparatus
CN114241127A (zh) Panoramic image generation method and apparatus, electronic device, and medium
CN112950641A (zh) Image processing method and apparatus, computer-readable storage medium, and electronic device
CN114339029A (zh) Photographing method and apparatus, and electronic device
CN112950516A (zh) Method and apparatus for local contrast enhancement of an image, storage medium, and electronic device
US20220108559A1 Face detection in spherical images
CN112767453A (zh) Face tracking method and apparatus, electronic device, and storage medium
US20210297649A1 Image data output device, content creation device, content reproduction device, image data output method, content creation method, and content reproduction method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20945674; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20945674; Country of ref document: EP; Kind code of ref document: A1)