WO2021218649A1 - Image generation method and device - Google Patents

Image generation method and device

Info

Publication number
WO2021218649A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
image
shooting target
lighting
images
Prior art date
Application number
PCT/CN2021/087574
Other languages
English (en)
French (fr)
Inventor
王凤霞
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP21796787.6A (published as EP4131933A4)
Priority to US17/922,246 (published as US20230177768A1)
Publication of WO2021218649A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G06T 15/506 Illumination models
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/003 Navigation within 3D models or images
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 Mixing
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/04 Indexing scheme involving 3D image data
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20 Indexing scheme for editing of 3D models
    • G06T 2219/2012 Colour editing, changing, or manipulating; Use of colour codes

Definitions

  • This application relates to the technical field of terminal equipment, and in particular to an image generation method and device.
  • Multiple camera devices may be provided on the terminal equipment; the multiple camera devices may include a front camera, and the user can use the front camera to take a selfie.
  • In a dark environment, however, selfie images usually lack three-dimensionality and skin detail, contain more noise, and the user experience is poor.
  • In the related art, light is usually supplemented by the display screen or the built-in fill light of the terminal device.
  • Specifically, the terminal device can detect the ambient light brightness of the shooting environment; when it is less than a set brightness threshold, light can be supplemented by lighting up the display screen of the terminal device or turning on the built-in fill light to improve the image shooting effect.
  • However, when light is supplemented through the display or the built-in fill light, the face receives only partial illumination of limited brightness; the self-portrait image still lacks stereoscopic effect and skin detail, and the selfie effect remains poor.
  • the light can be supplemented by an external light supplement device.
  • This application provides an image generation method and device to solve the problem of how to simply and efficiently eliminate the effects of insufficient light on the skin details and three-dimensional perception in a self-portrait image.
  • In a first aspect, the present application provides an image generation method. The method includes: acquiring a first image of a shooting target and a first ambient light angle, where the first image is a three-dimensional image of the shooting target, and the first ambient light angle is used to indicate the relative positional relationship between the illumination light source in the shooting environment and the shooting target when the first image is taken; and acquiring a first 3D model, where the first 3D model is a 3D model of the shooting target generated by fusing the depth information of the shooting target, multiple second images, and first lighting information, and the multiple second images are two-dimensional images obtained by photographing the shooting target from multiple angles of the shooting target.
  • The first lighting information includes a first illumination angle and a first light intensity, the first illumination angle is equal to the first ambient light angle, and the brightness corresponding to the first light intensity is greater than or equal to a preset brightness threshold; according to the first image and the first 3D model, a third image of the shooting target is generated by fusion.
  • With this method, the first image of the shooting target and the first ambient light angle corresponding to the first image can be obtained, and the first 3D model, generated by fusing the depth information of the shooting target, the multiple second images, and the lighting information of a sufficiently bright lighting source illuminating the shooting target from the first ambient light angle, can be obtained; a third image of the shooting target can then be generated according to the first image and the first 3D model.
  • That is to say, with this technical solution, when a user uses a terminal device to take a selfie in a dark environment, the terminal device can fuse the actually captured three-dimensional image with the 3D model generated from the depth information of the shooting target, the multiple second images, and the lighting information of a sufficiently bright lighting source at the first ambient light angle, so that the actual self-portrait image has better stereoscopic effect and detail, the loss of three-dimensionality and skin detail does not occur, and the user experience is better (an informal sketch of this flow is given below).
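  • As an informal illustration of this flow (not part of the claimed method), the following Python sketch outlines the described steps end to end. All names here, such as capture_3d, estimate_ambient_light_angle and get_first_3d_model, are hypothetical placeholders for the terminal device's internal interfaces and are assumptions made only for this sketch.

```python
# Minimal sketch of the described image generation flow (hypothetical helper names).

BRIGHTNESS_THRESHOLD = 50.0  # preset brightness threshold; the unit (e.g. lux) is assumed


def generate_third_image(device):
    """Generate the third image of the shooting target when the environment is dark."""
    brightness = device.measure_ambient_brightness()       # ambient light brightness
    if brightness >= BRIGHTNESS_THRESHOLD:
        return device.capture_2d()                          # enough light: ordinary selfie

    first_image = device.capture_3d()                       # first image (3D) of the shooting target
    ambient_angle = device.estimate_ambient_light_angle()   # first ambient light angle

    # First 3D model: fused from depth information, multiple second images, and
    # lighting information whose illumination angle equals the ambient light angle
    # and whose intensity corresponds to a brightness >= the preset threshold.
    first_model = device.get_first_3d_model(
        lighting_angle=ambient_angle,
        light_intensity=BRIGHTNESS_THRESHOLD,
    )

    # Fuse the actually captured 3D image with the well-lit 3D model (third image).
    return device.fuse(first_image, first_model)
```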
  • In one implementation, acquiring the first 3D model includes: acquiring a preset second 3D model, where the second 3D model is a 3D model of the shooting target generated by fusing the depth information and the plurality of second images; acquiring the first lighting information; and fusing the first lighting information and the second 3D model to generate the first 3D model.
  • With this implementation, the preset second 3D model and the first lighting information can be acquired, and the first 3D model is then generated by fusing the two; the first 3D model obtained in this way is more accurate, the three-dimensionality and skin details of the captured image are richer, and the user experience is better.
  • In one implementation, the method further includes: acquiring the depth information; acquiring the multiple second images; and fusing the depth information and the multiple second images to generate the second 3D model.
  • With this implementation, the second 3D model of the shooting target is first generated from the depth information of the shooting target and the two-dimensional information of the shooting target obtained from multiple angles, and the first 3D model is then generated from the lighting information corresponding to the first ambient light angle and the second 3D model; the resulting first 3D model is more accurate, the three-dimensionality and skin details of subsequently captured images are richer, and the user experience is better.
  • the acquiring the first 3D model includes: acquiring the first 3D model from a plurality of preset third 3D models;
  • the plurality of third 3D models are 3D models of the shooting target generated according to the depth information, the plurality of second images, and a plurality of pieces of second lighting information; each piece of second lighting information includes a different second illumination angle;
  • the first 3D model is a third 3D model corresponding to a second illumination angle that is the same as the first ambient light angle.
  • With this implementation, the first 3D model can be obtained from multiple preset third 3D models by angle matching. In this way, the first 3D model can be obtained simply and quickly, so that images with richer three-dimensionality and skin details are obtained quickly, and the applicability is better.
  • In one implementation, acquiring the first image of the shooting target and the first ambient light angle includes: acquiring the ambient light brightness of the shooting target; and, if the ambient light brightness is less than the preset brightness threshold, acquiring the first image of the shooting target and the first ambient light angle.
  • With this implementation, the ambient light brightness of the shooting target is acquired first; then, only when the ambient light brightness is less than the preset brightness threshold are the first image of the shooting target and the first ambient light angle acquired. In other words, the technical solution provided by this application is used to obtain the image only when the ambient light brightness is less than the preset brightness threshold, which ensures that the three-dimensionality and detail of an image taken in dim light are not lost while avoiding a waste of resources.
  • In one implementation, the method further includes: calibrating key points on the first 3D model. Generating the third image of the shooting target by fusion according to the first image and the first 3D model then includes: matching the key points between the first image and the first 3D model; and fusing the matched first image and first 3D model to generate the third image.
  • With this implementation, key-point matching is used to fuse the actually captured first image and the first 3D model into the image actually needed; the resulting image has richer stereoscopic effect and detail, and the user experience is better.
  • In a second aspect, the present application provides an image generation device. The device includes: a first acquisition module for acquiring a first image of a shooting target and a first ambient light angle, where the first image is a three-dimensional image of the shooting target, and the first ambient light angle is used to indicate the relative positional relationship between the illumination light source in the shooting environment and the shooting target when the first image is taken; and a second acquisition module for acquiring a first 3D model.
  • the first 3D model is a 3D model of the shooting target generated by fusion according to the depth information of the shooting target, a plurality of second images and the first lighting information;
  • the plurality of second images are two-dimensional images obtained by photographing the shooting target from multiple angles of the shooting target;
  • the first lighting information includes a first illumination angle and a first light intensity, the first illumination angle is equal to the first ambient light angle, and the brightness corresponding to the first light intensity is greater than or equal to a preset brightness threshold;
  • the fusion module is configured to generate a third image of the shooting target by fusion according to the first image and the first 3D model.
  • The device of this implementation can obtain the first image of the shooting target and the first ambient light angle corresponding to the first image, and can obtain the first 3D model generated by fusing the depth information of the shooting target, the multiple second images, and the lighting information of a sufficiently bright lighting source illuminating the shooting target from the first ambient light angle; a third image of the shooting target can then be generated according to the first image and the first 3D model.
  • When the user uses the device to take a selfie in a dark environment, the device can fuse the actually captured three-dimensional image with the 3D model generated from the depth information of the shooting target, the multiple second images, and the lighting information of a sufficiently bright lighting source at the first ambient light angle, so that the actual self-portrait image has better three-dimensionality and detail, the loss of three-dimensionality and skin detail does not occur, and the user experience is better.
  • In one implementation, the second acquisition module is specifically configured to: acquire a preset second 3D model, where the second 3D model is a 3D model of the shooting target generated by fusing the depth information and the plurality of second images; acquire the first lighting information; and generate the first 3D model by fusing the first lighting information and the second 3D model.
  • The device of this implementation can obtain the preset second 3D model and the first lighting information, and then generate the first 3D model by fusing the first lighting information and the second 3D model; the first 3D model obtained with this device is more accurate, the three-dimensionality and skin details of subsequently captured images are richer, and the user experience is better.
  • In one implementation, the fusion module is further configured to: obtain the depth information; obtain the multiple second images; and fuse the depth information and the multiple second images to generate the second 3D model.
  • The device of this implementation first generates the second 3D model of the shooting target from the depth information of the shooting target and the two-dimensional information of the shooting target obtained from multiple angles, and then generates the first 3D model from the lighting information corresponding to the first ambient light angle and the second 3D model; the resulting first 3D model is more accurate, the three-dimensionality and skin details of subsequently captured images are richer, and the user experience is better.
  • the second acquisition module is specifically configured to: acquire the first 3D model from a plurality of preset third 3D models;
  • the plurality of third 3D models are 3D models of the shooting target generated according to the depth information, the plurality of second images, and a plurality of pieces of second lighting information; each piece of second lighting information includes a different second illumination angle;
  • the first 3D model is a third 3D model corresponding to a second illumination angle that is the same as the first ambient light angle.
  • The device of this implementation can obtain the first 3D model from multiple preset third 3D models by angle matching. With this device, the first 3D model can be obtained simply and quickly, so that three-dimensional images with richer skin details are obtained quickly, and the applicability is better.
  • In one implementation, the first acquisition module is specifically configured to: acquire the ambient light brightness of the shooting target; and, if the ambient light brightness is less than the preset brightness threshold, acquire the first image of the shooting target and the first ambient light angle.
  • The device of this implementation first obtains the ambient light brightness of the shooting target, and then obtains the first image of the shooting target and the corresponding first ambient light angle when the ambient light brightness is less than the preset brightness threshold. In other words, the device uses the technical solution provided in this application to obtain images only when the ambient light brightness is less than the preset brightness threshold, which ensures that the three-dimensionality and details of an image captured in dim light are not lost while avoiding a waste of resources.
  • In one implementation, the device further includes: a calibration module, configured to calibrate key points on the first 3D model; and the fusion module is specifically configured to: match the key points between the first image and the first 3D model, and generate the third image by fusing the matched first image and first 3D model.
  • The device of this implementation uses key-point matching to fuse the actually captured first image and the first 3D model into the image actually needed; the resulting image has richer stereoscopic effect and detail, and the user experience is better.
  • an embodiment of the present application provides a device including a processor, and when the processor executes a computer program or instruction in a memory, the method described in the first aspect is executed.
  • an embodiment of the present application provides a device; the device includes a processor and a memory, where the memory is used to store computer programs or instructions, and the processor is used to execute the computer programs or instructions stored in the memory, so that the device executes the corresponding method as shown in the first aspect.
  • an embodiment of the present application provides a device that includes a processor, a memory, and a transceiver; the transceiver is used for receiving signals or sending signals; the memory is used for storing computer programs or instructions; The processor is configured to call the computer program or instruction from the memory to execute the method described in the first aspect.
  • an embodiment of the present application provides a device that includes a processor and an interface circuit; the interface circuit is configured to receive a computer program or instruction and transmit it to the processor; and the processor runs the computer program or instruction to perform the corresponding method as shown in the first aspect.
  • embodiments of the present application provide a computer storage medium, where the computer storage medium is used to store a computer program or instruction, and when the computer program or instruction is executed, the method described in the first aspect is implemented.
  • embodiments of the present application provide a computer program product including a computer program or instruction, which when the computer program or instruction is executed, enables the method described in the first aspect to be implemented.
  • this application provides an image generation method and device.
  • With this method, the first image of the shooting target and the first ambient light angle corresponding to the first image can be acquired, and a first 3D model generated from the depth information of the shooting target, a plurality of second images, and the lighting information of a sufficiently bright lighting source illuminating the shooting target from the first ambient light angle can be acquired; a third image of the shooting target may then be generated according to the first image and the first 3D model.
  • In this way, when a user takes a selfie in a dark environment, the terminal device can fuse the actually captured three-dimensional image with the 3D model generated from the depth information of the shooting target, the multiple second images, and the lighting information of the sufficiently bright lighting source at the first ambient light angle, so that the actual self-portrait image has better three-dimensional effect and detail, the loss of three-dimensionality and skin detail does not occur, and the user experience is better.
  • FIG. 1 is a schematic flowchart of an embodiment of the image generation method provided by this application.
  • FIG. 2 is a schematic flowchart of an implementation manner of the method for obtaining the first 3D model provided by this application;
  • FIG. 3 is a schematic flowchart of another implementation manner of the method for obtaining the first 3D model provided by this application;
  • FIG. 4 is a schematic flowchart of still another implementation manner of the method for obtaining the first 3D model provided by this application;
  • FIG. 5 is a structural block diagram of an embodiment of the image generation device provided by this application.
  • FIG. 6 is a structural block diagram of an implementation manner of a chip provided by this application.
  • A/B can mean A or B.
  • "And/or" in this document merely describes an association relationship between associated objects and indicates that three relationships can exist; for example, A and/or B can mean: A exists alone, A and B exist at the same time, or B exists alone.
  • “at least one” means one or more, and “plurality” means two or more.
  • The words "first" and "second" are used only for distinction and do not limit quantity or order of execution.
  • a front camera is usually provided on a terminal device, and the user can use the front camera to take a selfie.
  • the user can take a selfie on any part of the user's body, such as the face, neck, arms, and so on.
  • a fill light can be set inside the terminal device.
  • When the terminal device detects that the ambient light brightness of the shooting environment is less than the preset brightness threshold, the fill light can be turned on, or the display screen of the terminal device can be lit up, to supplement the light.
  • However, in these cases the face receives only partial light, the light is still insufficient, and the resulting self-portrait image will still lack three-dimensionality and skin detail.
  • In another related technique, an external light supplement device can be installed on the terminal device.
  • you can manually turn on the external light-filling device to perform light-filling, thereby improving the shooting effect of the self-portrait image and eliminating the lack of three-dimensionality and skin details in the image.
  • However, users need to carry additional supplementary light equipment, which increases the user's burden, and need to manually turn the supplementary light equipment on or off, which is inconvenient to use and results in a poor user experience.
  • the terminal equipment may be user equipment (UE).
  • the UE may be a mobile phone (mobile phone), a tablet computer (portable android device, Pad), a personal digital assistant (personal digital assistant, PDA), and the like.
  • At least one camera device may be provided on the terminal device, and the at least one camera device can be used to take three-dimensional images, and can also be used to take two-dimensional images.
  • the at least one camera device can be a depth camera or a structured light camera.
  • the at least one camera device may include a front camera, and the user can use the front camera to take a selfie.
  • the terminal device may also include more or fewer components.
  • the terminal device may also include a processor, a memory, a transceiver, a display screen, etc., which is not limited in this application.
  • FIG. 1 is a schematic flowchart of an embodiment of an image generation method provided by this application. The method includes the following steps:
  • Step S101 Acquire a first image of a shooting target and a first ambient light angle.
  • the shooting target refers to the person to be shot, a certain part of the human body, or an object, etc.
  • the shooting target may be a face, neck, arm, etc.
  • the image generation method provided in this application is not limited to the application scenario of self-portrait, and is also applicable to other shooting scenarios of any person or object, which is not limited in this application.
  • the shooting target is a human face as an example, the embodiment of the technical solution provided in the present application will be described in detail.
  • the first image is a three-dimensional image of the shooting target, which can be obtained by using the front camera of the terminal device when taking a self-portrait. Further, when the user takes a selfie, a two-dimensional image generated by the user's self-portrait can be obtained, and the depth information of the shooting target can be obtained at the same time, and then the first image is generated according to the two-dimensional image and the depth information.
  • the first ambient light angle is used to indicate the relative positional relationship between the illumination light source in the shooting environment and the shooting target when shooting the first image.
  • For example, structured light technology can be used to analyze the first image and obtain the first ambient light angle; that is, when the first image is captured using structured light technology, the relative positional relationship between the illumination light source in the shooting environment and the shooting target can be derived from the analysis.
  • For example, the following three-dimensional coordinate system can be established in the system of the terminal device: the front of the face is the positive direction of the Z axis, directly above the face is the positive direction of the Y axis, and the line connecting the two ears is the X axis, with the positive direction of the X axis pointing from the left ear to the right ear.
  • The first ambient light angle can then be, for example, a deflection of 30 degrees from the positive Y axis toward the positive X axis, which means the illumination light source is located on the right-ear side of the face, 30 degrees away from directly above; or it can be a deflection of 30 degrees from the positive Y axis toward the positive X axis followed by a deflection of 20 degrees toward the positive Z axis, that is, further tilted 20 degrees toward the front of the face; and so on. The first ambient light angle can also take other values, which are not listed here one by one (a sketch of this representation is given below).
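  • As a simple illustration (an assumption made for this sketch, not a representation prescribed by this application), the face-centred coordinate system above can be used to turn an ambient light angle into a unit direction vector:

```python
import math


def ambient_light_direction(deg_toward_x: float, deg_toward_z: float = 0.0):
    """Direction of the illumination light source in the face coordinate system.

    Starting from the positive Y axis (directly above the face), deflect
    deg_toward_x degrees toward the positive X axis (right ear), then
    deg_toward_z degrees toward the positive Z axis (front of the face).
    Returns a unit vector (x, y, z).
    """
    a = math.radians(deg_toward_x)
    b = math.radians(deg_toward_z)
    x = math.sin(a) * math.cos(b)
    y = math.cos(a) * math.cos(b)
    z = math.sin(b)
    return (x, y, z)


# 30 degrees toward the right ear, then 20 degrees toward the front of the face.
print(ambient_light_direction(30.0, 20.0))  # roughly (0.47, 0.81, 0.34)
```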
  • Optionally, the ambient light brightness of the shooting target, that is, the brightness of the illumination light source in the shooting environment, can be obtained first; then, when the ambient light brightness is less than the preset brightness threshold, the first image of the shooting target and the first ambient light angle are obtained.
  • That is to say, only in a dark environment will the terminal device obtain the first image of the shooting target and the first ambient light angle and execute the subsequent steps of this solution, so that images free of missing three-dimensionality and missing detail can be obtained more flexibly and efficiently, and the user experience is better.
  • the preset brightness threshold can be set according to the requirements of actual application scenarios.
  • Step S102 Obtain a first 3D model.
  • the first 3D model refers to a 3D model of the shooting target generated by fusion based on the depth information of the shooting target, a plurality of second images, and the first lighting information.
  • FIG. 2 is a schematic flowchart of an implementation manner of the method for acquiring the first 3D model provided by this application.
  • the method may include the following steps:
  • Step S201 Acquire depth information of the shooting target.
  • The depth information of the shooting target refers to the visual depth information of the shooting target, for example, the height by which the nose tip protrudes from the face, the height by which the forehead protrudes from the face, the height by which the lips protrude from the face, and so on.
  • For example, the terminal device may use a camera device provided in the terminal device, such as the front camera, to perform a three-dimensional scan of the shooting target to obtain the depth information of the shooting target.
  • Step S202 Acquire multiple second images of the shooting target.
  • the multiple second images refer to multiple two-dimensional images obtained by shooting the shooting target from multiple angles of the shooting target.
  • Shooting the shooting target from multiple angles to obtain multiple two-dimensional images means changing the relative positional relationship between the camera device and the shooting target several times and obtaining one two-dimensional image after each shot, so that multiple two-dimensional images are obtained after multiple shots.
  • In step S202, multiple second images of the shooting target may be acquired in several ways, for example:
  • In one implementation, the shooting target may be photographed from multiple angles in advance; that is, the relative position between the camera device of the terminal device and the shooting target is changed multiple times, one second image is obtained each time the relative position is changed, and multiple second images are obtained after the relative position has been changed multiple times. The obtained second images are then stored in the terminal device, and when step S202 is performed, the pre-stored second images can be read directly from the terminal device.
  • For example, multiple two-dimensional self-portraits of a human face can be taken in advance from the front of the face, from directly above the face, and with the camera deflected by arbitrary angles from the front of the face toward the left side, the right side, the top, or the bottom of the face, and these two-dimensional images of the face are then stored in the terminal device as second images.
  • When step S202 is performed, the pre-stored two-dimensional images of the face can be read directly from the terminal device.
  • In another implementation, when step S202 is performed, a prompt message is first output; the prompt message is used to prompt the user to use the camera device of the terminal device to photograph the shooting target from multiple angles of the shooting target to obtain multiple two-dimensional images of the shooting target. The multiple two-dimensional images obtained in this way by the user are then determined as second images, so that multiple second images are obtained.
  • the prompt information may be text information, or voice information, etc., which is not limited in this application.
  • Alternatively, when multiple second images of the shooting target are acquired for the first time, a prompt message is first output to prompt the user to use the camera device of the terminal device to photograph the shooting target from multiple angles, and multiple second images of the shooting target are obtained in this way.
  • The multiple second images obtained the first time can then be stored in the terminal device, and on later occasions the pre-stored second images can be read directly from the terminal device (a sketch of this caching approach is given below).
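  • One possible way to organise the two implementations above (reading pre-stored views, or prompting the user and then caching the result) is sketched below; the storage location and the device helper names are assumptions made only for this illustration.

```python
import os
import pickle

SECOND_IMAGES_CACHE = "second_images.pkl"  # assumed on-device storage location


def get_second_images(device, angles):
    """Return the two-dimensional views of the shooting target taken from `angles`."""
    if os.path.exists(SECOND_IMAGES_CACHE):
        with open(SECOND_IMAGES_CACHE, "rb") as f:
            return pickle.load(f)                 # pre-stored second images

    # Otherwise prompt the user and capture one two-dimensional image per angle.
    device.show_prompt("Please photograph the target from the indicated angles.")
    images = {angle: device.capture_2d(angle) for angle in angles}

    with open(SECOND_IMAGES_CACHE, "wb") as f:
        pickle.dump(images, f)                    # store for later reuse
    return images
```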
  • Step S203 Generate a second 3D model according to the fusion of the depth information and the multiple second images.
  • the terminal device may use fusion processing technology to generate a second 3D model of the shooting target according to the fusion of the depth information and the multiple second images.
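  • A very simplified form of such a fusion, assuming the depth information is given as a per-pixel depth map with known pinhole camera intrinsics, is to back-project the depth into a coloured point cloud, as sketched below; an actual reconstruction would additionally register the multiple second images and build a mesh. The function and parameter names are illustrative assumptions.

```python
import numpy as np


def depth_to_point_cloud(depth, color, fx, fy, cx, cy):
    """Back-project an H x W depth map into a coloured point cloud.

    depth holds the distance of each pixel from the camera, color is the
    corresponding H x W x 3 image, and (fx, fy, cx, cy) are the pinhole
    camera intrinsics.  Returns (N, 3) points and (N, 3) colours in [0, 1].
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = color.reshape(-1, 3).astype(np.float32) / 255.0
    valid = points[:, 2] > 0                      # keep only pixels with valid depth
    return points[valid], colors[valid]
```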
  • Step S204 Acquire first lighting information.
  • the first lighting information refers to information that the lighting source lights the target to be illuminated from the first lighting angle.
  • the lighting light source may be a physical light source or a virtual light source set according to 3D lighting technology.
  • the target to be lighted may be a shooting target, such as a human face, or a virtual target, such as the second 3D model.
  • the first lighting information may include a first lighting angle, a first light intensity, and may also include a first color temperature and the like.
  • the first illumination angle is equal to the first ambient light angle.
  • the brightness corresponding to the first light intensity is greater than or equal to the preset brightness threshold, which can make up for the insufficient brightness of the illumination light source in a dark environment and avoid the loss of three-dimensionality and detail in the captured image.
  • the first color temperature is the color temperature of the illuminating light source, and may be equal to the color temperature of the illuminating light source in the shooting environment.
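  • The first lighting information can be thought of as a small record holding the illumination angle, the light intensity and, optionally, the colour temperature; the dataclass below is one possible representation, assumed only for illustration.

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass
class LightingInfo:
    """Lighting information used when lighting the 3D model."""
    direction: Tuple[float, float, float]   # illumination angle as a unit vector
    intensity: float                        # corresponds to a brightness >= the preset threshold
    color_temperature_k: float = 5500.0     # colour temperature of the light source, in kelvin


# First lighting information: illumination angle equal to the first ambient light
# angle (here the 30/20-degree example above), intensity at least the threshold.
first_lighting = LightingInfo(direction=(0.47, 0.81, 0.34), intensity=1.0)
```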
  • In one implementation, 3D lighting technology may be used in advance, based on multiple second illumination angles, to calculate the second lighting information that simulates lighting the second 3D model from each second illumination angle, so that multiple pieces of second lighting information are obtained. Each piece of second lighting information corresponds to a different second illumination angle and is stored in the terminal device together with its corresponding second illumination angle.
  • Then the second lighting information corresponding to the second illumination angle that is the same as the first ambient light angle can be read from the multiple pieces of pre-stored second lighting information and determined as the first lighting information.
  • In another implementation, the first ambient light angle may be determined as the first illumination angle, 3D lighting technology is used to calculate the lighting information that simulates lighting the second 3D model from the first illumination angle, and that lighting information is determined as the first lighting information.
  • In yet another implementation, the user may be prompted in advance to use an external lighting source to light the shooting target from multiple second illumination angles, and the camera device is used to obtain the second lighting information of lighting from each second illumination angle, so that multiple pieces of second lighting information are obtained; each piece of second lighting information corresponds to a different second illumination angle and is stored in the terminal device together with that angle.
  • Then the second lighting information corresponding to the second illumination angle that is the same as the first ambient light angle can be read from the multiple pieces of pre-stored second lighting information and determined as the first lighting information.
  • The brightness of the external lighting source is greater than or equal to the preset brightness threshold, which can compensate for the insufficient brightness of the illumination light source in a dark environment and avoid the loss of three-dimensionality and detail in the captured image.
  • Step S205 Generate a first 3D model according to the fusion of the first lighting information and the second 3D model.
  • 3D lighting technology can be combined to generate a first 3D model by fusion according to the first lighting information and the second 3D model.
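  • A minimal stand-in for the "3D lighting technology" mentioned above is Lambertian (diffuse) shading of the model's vertices, in which each vertex is shaded according to the angle between its normal and the light direction. This is only a sketch of the idea under that assumption; the application does not prescribe a particular shading model.

```python
import numpy as np


def relight_vertices(vertex_colors, vertex_normals, light_dir, intensity, ambient=0.2):
    """Lambertian relighting of a vertex-coloured model.

    vertex_colors:  (N, 3) albedo values in [0, 1]
    vertex_normals: (N, 3) unit normals
    light_dir:      (3,) vector pointing from the model toward the light source
    """
    light_dir = np.asarray(light_dir, dtype=np.float32)
    light_dir = light_dir / np.linalg.norm(light_dir)
    # Diffuse term; negative dot products (faces turned away from the light) are clamped.
    diffuse = np.clip(vertex_normals @ light_dir, 0.0, None)[:, None]
    shaded = vertex_colors * (ambient + intensity * diffuse)
    return np.clip(shaded, 0.0, 1.0)
```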
  • It should be noted that the second 3D model may be generated in advance according to steps S201 to S203 of the embodiment shown in FIG. 2 and stored in the terminal device. Thereafter, each time the first 3D model is acquired according to the method shown in FIG. 2 of this application, steps S201 to S203 are no longer executed; instead, the pre-stored second 3D model is read directly from the terminal device, so that the first 3D model can be obtained more quickly and conveniently, and the user experience is better.
  • FIG. 3 is a schematic flowchart of another embodiment of the method for obtaining the first 3D model provided by this application.
  • the method may include the following steps:
  • Step S301 Acquire depth information of the shooting target.
  • Step S302 Acquire multiple second images of the shooting target.
  • Step S303 Generate a second 3D model according to the fusion of the depth information and the multiple second images.
  • step S301 to step S303 reference may be made to the implementation manner of step S201 to step S203 in the embodiment shown in FIG. 2, which will not be repeated here.
  • Step S304 Obtain a plurality of second lighting information.
  • The multiple pieces of second lighting information refer to the lighting information obtained when the lighting source lights the target to be lighted from multiple second illumination angles, and each piece of second lighting information corresponds to a different second illumination angle.
  • Each piece of second lighting information may include a second illumination angle, a second light intensity, and a second color temperature, where the brightness corresponding to the second light intensity is greater than or equal to the preset brightness threshold, which compensates for the insufficient brightness of the illumination light source in a dark environment and avoids the loss of three-dimensionality and detail in the captured image, and the second color temperature is equal to the color temperature of the illumination light source in the shooting environment.
  • In one implementation, 3D lighting technology may be used in advance, based on multiple second illumination angles, to calculate the second lighting information that simulates lighting the second 3D model from each second illumination angle, so that multiple pieces of second lighting information are obtained; each piece of second lighting information corresponds to a different second illumination angle and is stored in the terminal device together with that angle.
  • In another implementation, the user may be prompted in advance to use an external lighting source to light the shooting target from multiple second illumination angles, and the camera device is used to obtain the lighting information of lighting from each second illumination angle; that lighting information is then stored in the terminal device as the second lighting information corresponding to the respective second illumination angle.
  • step S304 the plurality of pre-stored second lighting information can be directly read from the terminal device.
  • Alternatively, when step S304 is performed, 3D lighting technology may be used, based on multiple second illumination angles, to calculate the second lighting information that simulates lighting the second 3D model from each second illumination angle, so that multiple pieces of second lighting information are obtained.
  • Step S305 Generate a plurality of third 3D models according to the fusion of the plurality of second lighting information and the second 3D model.
  • Specifically, 3D lighting technology can be used to generate one third 3D model by fusion from each piece of second lighting information and the second 3D model. Since each piece of second lighting information corresponds to a different second illumination angle, each third 3D model also corresponds to a different second illumination angle.
  • Step S306 Obtain a first 3D model from the plurality of third 3D models.
  • a third 3D model corresponding to a second illumination angle that is the same as the first ambient light angle can be selected from the obtained plurality of third 3D models, and the third 3D model is determined as The first 3D model.
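  • Selecting the first 3D model from the preset third 3D models then amounts to a nearest-angle lookup; the sketch below assumes the third 3D models are stored in a dictionary keyed by their second illumination angle in degrees.

```python
def angular_distance(a_deg, b_deg):
    """Smallest difference between two angles, in degrees."""
    d = abs(a_deg - b_deg) % 360.0
    return min(d, 360.0 - d)


def select_first_3d_model(third_models, first_ambient_angle_deg):
    """Pick the preset third 3D model whose second illumination angle is the
    same as (or closest to) the first ambient light angle."""
    best_angle = min(third_models,
                     key=lambda a: angular_distance(a, first_ambient_angle_deg))
    return third_models[best_angle]


# Example with placeholder models keyed by illumination angle.
models = {0: "model_0deg", 30: "model_30deg", 60: "model_60deg"}
print(select_first_3d_model(models, 28))   # -> model_30deg
```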
  • It should be noted that the second 3D model may be generated in advance according to steps S301 to S303 of the embodiment shown in FIG. 3 and stored in the terminal device. Thereafter, each time the first 3D model is acquired according to the method shown in FIG. 3 of this application, steps S301 to S303 are no longer executed; instead, the pre-stored second 3D model is read directly from the terminal device, so that the first 3D model can be obtained more quickly and conveniently, and the user experience is better.
  • Similarly, the multiple third 3D models may be generated in advance according to steps S301 to S305 of the embodiment shown in FIG. 3, and each third 3D model is stored in the terminal device together with its corresponding second illumination angle. Thereafter, each time the first 3D model is acquired according to the method shown in FIG. 3 of this application, steps S301 to S305 are no longer executed; instead, the pre-stored third 3D models are read directly from the terminal device, so that the first 3D model can be obtained more quickly and conveniently, and the user experience is better.
  • FIG. 4 is a schematic flowchart of another embodiment of the method for obtaining the first 3D model provided by this application.
  • the method may include the following steps:
  • Step S401 Acquire depth information of the shooting target.
  • Step S402 Acquire multiple second images of the shooting target.
  • step S401 to step S402 reference may be made to the implementation manner of step S201 to step S202 in the embodiment shown in FIG. 2, which will not be repeated here.
  • Step S403 Acquire first lighting information.
  • step S403 For the implementation of step S403, reference may be made to the implementation of step S204 in the embodiment shown in FIG. 2, which will not be repeated here.
  • Step S404 According to the depth information, the multiple second images, and the first lighting information, a first 3D model is generated by fusion.
  • For example, 3D lighting technology and fusion processing technology can be combined to fuse the depth information, the multiple second images, and the first lighting information to generate the first 3D model.
  • Step S103 According to the first image and the first 3D model, a third image of the shooting target is generated by fusion.
  • the first image and the first 3D model may be combined to generate a third image of the shooting target.
  • the third image is a three-dimensional image, and the two-dimensional image corresponding to the third image can be used as a captured image actually used by the user, and the two-dimensional image corresponding to the third image can be displayed on the display screen of the terminal device.
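  • Displaying the two-dimensional image corresponding to the third image amounts to projecting the fused 3D data back through the camera model; a minimal pinhole projection, with assumed intrinsics, is sketched below.

```python
import numpy as np


def project_to_2d(points_3d, fx, fy, cx, cy):
    """Pinhole projection of (N, 3) camera-space points to (N, 2) pixel coordinates."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.stack([u, v], axis=-1)


# Example with assumed intrinsics for a 720 x 960 preview image.
pts = np.array([[0.0, 0.0, 0.5], [0.05, -0.02, 0.5]])
print(project_to_2d(pts, fx=600.0, fy=600.0, cx=360.0, cy=480.0))
```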
  • the method shown in Figure 1 of this application may not be used to obtain a two-dimensional image, and the two-dimensional image captured by the user may be directly displayed on the display.
  • the method shown in FIG. 1 of this application can also be used to first generate a third image, and then display the two-dimensional image corresponding to the third image on the display screen, which is not limited in this application.
  • key points can be calibrated on the first 3D model, the second 3D model, and the third 3D model of the shooting target.
  • key points such as eyes, nose, and mouth can be calibrated on the face model.
  • key points can also be calibrated on the first image.
  • In this way, key-point matching can be performed between the first image and the first 3D model, and the matched first image and first 3D model are then fused to generate the third image of the shooting target.
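  • One common way to realise the key-point matching described above is to estimate a similarity transform that aligns the key points calibrated on the first 3D model (for example eyes, nose tip, mouth corners) with the corresponding key points of the first image, and then blend the two. The least-squares alignment below (a standard Umeyama-style estimate) is an illustrative stand-in, not the specific matching technique used by the device.

```python
import numpy as np


def align_keypoints(model_pts, image_pts):
    """Estimate scale s, rotation R and translation t such that
    s * R @ model_pts[i] + t  approximates  image_pts[i] in the least-squares sense."""
    mu_m, mu_i = model_pts.mean(axis=0), image_pts.mean(axis=0)
    M, I = model_pts - mu_m, image_pts - mu_i
    U, S, Vt = np.linalg.svd(I.T @ M)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0] * (model_pts.shape[1] - 1) + [d])   # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (M ** 2).sum()
    t = mu_i - s * R @ mu_m
    return s, R, t


# Example: three matched key points (nose tip, left eye, right eye), illustrative values.
model_kp = np.array([[0.0, 0.0, 0.05], [-0.03, 0.04, 0.0], [0.03, 0.04, 0.0]])
image_kp = model_kp * 1.2 + np.array([0.01, -0.02, 0.0])   # scaled and shifted copy
s, R, t = align_keypoints(model_kp, image_kp)
aligned = (s * (R @ model_kp.T)).T + t                      # model key points mapped onto the image
```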
  • In summary, with the image generation method provided by the embodiments of the present application, the first image of the shooting target and the first ambient light angle corresponding to the first image can be obtained, and the first 3D model generated from the depth information of the shooting target, multiple second images, and the lighting information of a sufficiently bright lighting source illuminating the shooting target from the first ambient light angle can be obtained; a third image of the shooting target may then be generated according to the first image and the first 3D model.
  • When a user uses the terminal device to take a selfie in a dark environment, the terminal device can fuse the actually captured three-dimensional image with the 3D model generated from the depth information of the shooting target, the multiple second images, and the lighting information of the sufficiently bright lighting source at the first ambient light angle, so that the actual self-portrait image has better three-dimensional effect and detail, the loss of three-dimensionality and skin detail does not occur, and the user experience is better.
  • the methods and operations implemented by the terminal device may also be implemented by components (for example, a chip or a circuit) that can be used for the terminal device.
  • In order to implement the above functions, each network element, such as the terminal device, includes a corresponding hardware structure or software module for performing each function, or a combination of the two.
  • The present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer-software-driven hardware depends on the specific application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of this application.
  • the embodiment of the present application may divide the terminal device into functional modules according to the foregoing method examples.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or software functional modules. It should be noted that the division of modules in the embodiments of the present application is illustrative, and is only a logical function division, and there may be other division methods in actual implementation. The following is an example of dividing each function module corresponding to each function as an example.
  • FIG. 5 is a structural block diagram of an embodiment of the image generating apparatus provided by this application.
  • the device 500 may include: a first acquisition module 501, a second acquisition module 502, and a fusion module 503.
  • the apparatus 500 can be used to perform the actions performed by the terminal device in the above method embodiments.
  • the first acquisition module 501 is used to acquire a first image of a shooting target and a first ambient light angle; the first image is a three-dimensional image of the shooting target; the first ambient light angle is used to indicate the shooting location The relative positional relationship between the illumination light source in the shooting environment and the shooting target in the first image;
  • the second acquisition module 502 is configured to acquire a first 3D model;
  • the first 3D model is a 3D model of the shooting target generated by fusing the depth information of the shooting target, a plurality of second images, and the first lighting information.
  • the multiple second images refer to multiple two-dimensional images obtained by shooting the shooting target from multiple angles of the shooting target;
  • the first lighting information includes a first illumination angle and a first light intensity, the first illumination angle is equal to the first ambient light angle, and the brightness corresponding to the first light intensity is greater than or equal to a preset brightness threshold;
  • the fusion module 503 is configured to merge and generate a third image of the shooting target according to the first image and the first 3D model.
  • the second acquisition module 502 is specifically configured to: acquire a preset second 3D model; the second 3D model is the shooting generated by fusion of the depth information and the plurality of second images A 3D model of the target; acquiring the first lighting information; fusing the first lighting information and the second 3D model to generate the first 3D model.
  • the fusion module 503 is further configured to: obtain the depth information; obtain the multiple second images; generate the second 3D model according to the fusion of the depth information and the multiple second images.
  • In one implementation, the second acquisition module 502 is specifically configured to: obtain the first 3D model from a plurality of preset third 3D models, where the plurality of third 3D models are 3D models of the shooting target generated according to the depth information, the plurality of second images, and a plurality of pieces of second lighting information, each piece of second lighting information includes a different second illumination angle, and the first 3D model is the third 3D model corresponding to the second illumination angle that is the same as the first ambient light angle.
  • the first acquiring module 501 is specifically configured to: acquire the ambient light brightness of the shooting target; if the ambient light brightness is less than the preset brightness threshold, acquire the first image and the first image of the shooting target An ambient light angle.
  • In one implementation, the device 500 may further include: a calibration module, configured to calibrate key points on the first 3D model; and the fusion module 503 is specifically configured to: match the key points between the first image and the first 3D model, and generate the third image by fusing the matched first image and first 3D model.
  • It should be understood that the apparatus 500 can implement the steps or processes executed by the terminal device in the methods shown in FIG. 1, FIG. 2, FIG. 3, or FIG. 4 according to the embodiments of the present application, and includes the modules for the methods executed by the terminal device in FIG. 1, FIG. 2, FIG. 3, or FIG. 4. In addition, each module in the device 500 and the other operations and/or functions described above are respectively used to implement the corresponding steps of the methods shown in FIG. 1, FIG. 2, FIG. 3, or FIG. 4.
  • For example, the first acquisition module 501 in the device 500 may be used to execute step S101 in the method shown in FIG. 1, and the second acquisition module 502 may be used to execute step S102 in the method shown in FIG. 1.
  • the fusion module 503 may be used to execute step S103 in the method shown in FIG. 1.
  • the second acquisition module 502 in the device 500 may also be used to execute step S201 to step S205 in the method shown in FIG. 2.
  • the second acquisition module 502 in the device 500 may also be used to execute steps S301 to S306 in the method shown in FIG. 3.
  • the second acquisition module 502 in the device 500 may also be used to execute steps S401 to S404 in the method shown in FIG. 4.
  • the apparatus 500 may be a terminal device, and the terminal device may perform the functions of the terminal device in the foregoing method embodiment, or implement the steps or processes performed by the terminal device in the foregoing method embodiment.
  • the terminal device may include a processor and a transceiver.
  • the terminal device may also include a memory.
  • the processor, transceiver, and memory can communicate with each other through internal connection paths to transfer control and/or data signals.
  • The memory is used to store computer programs or instructions, and the processor is used to call and run the computer programs or instructions from the memory to control the transceiver to receive signals and/or send signals.
  • the terminal device may also include an antenna, which is used to transmit the uplink data or uplink control signaling output by the transceiver through a wireless signal.
  • the foregoing processor may be combined with a memory to form a processing device, and the processor is configured to execute a computer program or instruction stored in the memory to realize the foregoing functions.
  • the memory may also be integrated in the processor or independent of the processor.
  • the processor may correspond to the fusion module in FIG. 5.
  • the above transceiver may also be referred to as a transceiver unit.
  • the transceiver may include a receiver (or called a receiver, a receiving circuit) and/or a transmitter (or called a transmitter, a transmitting circuit). Among them, the receiver is used to receive signals, and the transmitter is used to send signals.
  • the foregoing terminal device can implement various processes involving the terminal device in the method embodiments shown above.
  • the operation and/or function of each module in the terminal device is to implement the corresponding process in the foregoing method embodiment.
  • the foregoing terminal device may also include a power source, which is used to provide power to various devices or circuits in the terminal device.
  • the terminal device may also include one or more of an input unit, a display unit, an audio circuit, a camera, and a sensor.
  • the audio circuit may also include a speaker, a microphone, and the like.
  • the embodiment of the present application also provides a processing device, including a processor and an interface.
  • the processor may be used to execute the method in the foregoing method embodiment.
  • the aforementioned processing device may be a chip.
  • FIG. 6 is a structural block diagram of an embodiment of the chip provided in this application.
  • the chip shown in Figure 6 may be a general-purpose processor or a dedicated processor.
  • the chip 600 includes a processor 601.
  • the processor 601 may be used to support the device shown in FIG. 5 to execute the technical solutions shown in FIG. 1, FIG. 2, FIG. 3, or FIG. 4.
  • the chip 600 may also include a transceiver 602.
  • the transceiver 602 is configured to be controlled by the processor 601 and to support the apparatus shown in FIG. 5 in executing the technical solutions shown in FIG. 1, FIG. 2, FIG. 3, or FIG. 4.
  • the chip 600 shown in FIG. 6 may further include: a storage medium 603.
  • the chip shown in Figure 6 can be implemented using the following circuits or devices: one or more field programmable gate arrays (FPGAs), programmable logic devices (PLDs), application-specific integrated circuits (ASICs), systems on chip (SoCs), central processing units (CPUs), network processors (NPs), digital signal processors (DSPs), microcontroller units (MCUs), controllers, state machines, gate logic, discrete hardware components, any other suitable circuits, or any combination of circuits capable of performing the various functions described throughout this application.
  • each step of the above method can be completed by an integrated logic circuit of hardware in the processor or instructions in the form of software.
  • the steps of the method disclosed in combination with the embodiments of the present application may be directly embodied as being executed and completed by a hardware processor, or executed and completed by a combination of hardware and software modules in the processor.
  • the software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, it will not be described in detail here.
  • the processor in the embodiment of the present application may be an integrated circuit chip with signal processing capability.
  • the steps of the foregoing method embodiments can be completed by hardware integrated logic circuits in the processor or instructions in the form of software.
  • the above-mentioned processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the processor may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • the memory in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • the volatile memory may be random access memory (RAM), which is used as an external cache.
  • By way of example rather than limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct rambus random access memory (DR RAM).
  • the embodiments of the present application also provide a computer program product.
  • the computer program product includes: a computer program or instruction.
  • when the computer program or instruction is run on a computer, the computer is enabled to perform the method in any one of the embodiments shown in FIG. 1, FIG. 2, FIG. 3, or FIG. 4.
  • the embodiment of the present application also provides a computer storage medium that stores a computer program or instruction.
  • when the computer program or instruction is run on a computer, the computer is enabled to perform the method in any one of the embodiments shown in FIG. 1, FIG. 2, FIG. 3, or FIG. 4.
  • the computer program product includes one or more computer programs or instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer program or instruction may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • for example, the computer program or instruction may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a high-density digital video disc (DVD)), a semiconductor medium (for example, a solid state disk (SSD)), or the like.
  • the terms "component", "module", and "system" used in this specification are used to denote computer-related entities: hardware, firmware, a combination of hardware and software, software, or software in execution.
  • the component may be, but is not limited to, a process running on a processor, a processor, an object, an executable file, an execution thread, a program, and/or a computer.
  • by way of illustration, both an application running on a computing device and the computing device itself can be components.
  • One or more components may reside in processes and/or threads of execution, and components may be located on one computer and/or distributed between two or more computers.
  • these components can be executed from various computer readable media having various data structures stored thereon.
  • the component may communicate through local and/or remote processes, for example, based on a signal having one or more data packets (for example, data from two components interacting with another component in a local system, in a distributed system, and/or across a network such as the Internet that interacts with other systems through the signal).
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the modules is only a logical function division, and there may be other division manners in actual implementation; for example, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional modules in the various embodiments of the present application may be integrated into one processing unit, or each module may exist alone physically, or two or more modules may be integrated into one unit.
  • if the function is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • based on such an understanding, the technical solution of the present application essentially, or the part that contributes to the existing technology, or a part of the technical solution may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage media include any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • the image generation apparatus, terminal device, computer storage medium, computer program product, and chip provided in the above embodiments of the present application are all used to execute the methods provided above; therefore, for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding methods provided above, which are not repeated here.
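As a reading aid only, the following is a minimal Python sketch of the module-to-step mapping listed above for the apparatus 500 (first acquisition module 501 → step S101; second acquisition module 502 → step S102, detailed in S201-S205, S301-S306, or S401-S404; fusion module 503 → step S103). All class, method, and field names are hypothetical placeholders introduced for illustration; they are not part of the disclosed implementation.

```python
# Illustrative sketch only; names and data shapes are hypothetical and simplified.

class ImageGenerationApparatus500:
    def first_acquisition_module_501(self, capture):
        # Step S101: obtain the first image (a 3D capture of the shooting
        # target) and the first ambient light angle of that capture.
        return capture["image_3d"], capture["ambient_light_angle_deg"]

    def second_acquisition_module_502(self, relit_models, ambient_angle_deg):
        # Step S102 (detailed in S201-S205, S301-S306, or S401-S404):
        # obtain the first 3D model, i.e. the pre-relit model whose lighting
        # angle matches the measured ambient light angle.
        return relit_models[round(ambient_angle_deg)]

    def fusion_module_503(self, first_image, first_3d_model):
        # Step S103: fuse the first image with the first 3D model to generate
        # the third image (represented here as a simple record).
        return {"third_image_from": (first_image, first_3d_model)}


# Usage with made-up inputs:
apparatus = ImageGenerationApparatus500()
image, angle = apparatus.first_acquisition_module_501(
    {"image_3d": "face_capture", "ambient_light_angle_deg": 30.0})
model = apparatus.second_acquisition_module_502({0: "model_0deg", 30: "model_30deg"}, angle)
print(apparatus.fusion_module_503(image, model))
```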

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)
  • Studio Devices (AREA)

Abstract

This application discloses an image generation method and apparatus. The method includes: obtaining a first image of a shooting target and a first ambient light angle that indicates the relative position relationship between the illumination light source and the shooting target when the first image is captured; obtaining a first 3D model generated by fusing depth information of the shooting target, a plurality of second images, and first lighting information, where the plurality of second images are a plurality of two-dimensional images captured from a plurality of angles of the shooting target, and the first lighting information includes a first lighting angle that is the same as the first ambient light angle and a first light intensity whose corresponding brightness is greater than or equal to a preset brightness threshold; and fusing the first image and the first 3D model to generate a third image. With this method, when a user takes a selfie in a dark environment, a 3D model containing the first lighting information can be fused with the actually captured three-dimensional image to obtain a selfie image free of loss of stereoscopic perception and skin detail, providing a better user experience.
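The following is a minimal, non-normative Python sketch of the model-selection step implied by the abstract: from a set of pre-generated relit 3D models, pick the one whose lighting angle matches the measured first ambient light angle and whose light intensity meets the preset brightness threshold. The data class, function names, closest-angle matching, and the concrete threshold value are assumptions made for illustration only, not the disclosed implementation.

```python
# Minimal sketch, assuming a pre-generated set of relit 3D models; all names
# and the threshold value below are illustrative, not from the disclosure.
from dataclasses import dataclass

PRESET_BRIGHTNESS_THRESHOLD = 50.0  # assumed example value


@dataclass
class RelitModel:
    lighting_angle_deg: float  # second lighting angle used when relighting
    light_intensity: float     # brightness of the (virtual) lighting source
    mesh_id: str               # stand-in for the fused 3D geometry and texture


def pick_first_3d_model(models, ambient_angle_deg):
    """Return the relit model whose lighting angle is closest to the measured
    ambient light angle, among models bright enough to meet the threshold."""
    candidates = [m for m in models if m.light_intensity >= PRESET_BRIGHTNESS_THRESHOLD]
    return min(
        candidates,
        key=lambda m: abs((m.lighting_angle_deg - ambient_angle_deg + 180) % 360 - 180),
    )


# Usage with made-up values: ambient light measured 28 degrees off the vertical
# axis toward the right ear selects the model relit at 30 degrees.
models = [RelitModel(0.0, 80.0, "front"),
          RelitModel(30.0, 80.0, "right_30"),
          RelitModel(90.0, 80.0, "right_90")]
print(pick_first_3d_model(models, ambient_angle_deg=28.0).mesh_id)  # -> right_30
```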

Description

图像生成方法及装置
本申请要求于2020年4月30日提交中国专利局、申请号为202010364283.3、发明名称为“图像生成方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及终端设备技术领域,尤其涉及一种图像生成方法及装置。
背景技术
随着终端设备技术的不断发展,终端设备上可以设置多个摄像装置,多个摄像装置可以包括前置摄像头,用户可以利用该前置摄像头进行自拍。不过,当处于暗光环境时,由于光线不足,自拍得到的图像的立体感和皮肤细节通常会缺失,噪点较多,用户体验较差。
目前,为了避免光线不足影响自拍效果,通常会通过终端设备的显示屏或内置补光灯进行补光,具体的,在用户自拍时,终端设备可以检测拍摄环境的环境光亮度,当环境光亮度小于设定的亮度阈值时,可以通过点亮终端设备的显示屏或开启内置补光灯进行补光,从而提升图像的拍摄效果。但是,通过显示屏或内置补光灯进行补光,人脸只能局部受光,且亮度有限,自拍得到的图像立体感和皮肤细节仍然会缺失,自拍效果仍然较差。此外,还可以通过外接补光设备的方式进行补光,此种方式下,自拍效果较好,但是需要用户自行设置补光设备,携带不便,用户体验较差。基于此,如何简单高效的消除光线不足对自拍图像中皮肤细节和立体感的影响,成为本领域技术人员亟待解决的技术问题。
发明内容
本申请提供了一种图像生成方法及装置,以解决如何简单高效的消除光线不足对自拍图像中皮肤细节和立体感影响的问题。
第一方面,本申请提供了一种图像生成方法,该方法包括:获取拍摄目标的第一图像和第一环境光角度;所述第一图像为所述拍摄目标的三维图像;所述第一环境光角度用于指示拍摄所述第一图像时拍摄环境中的照射光源与所述拍摄目标之间的相对位置关系;获取第一3D模型;所述第一3D模型为根据所述拍摄目标的深度信息、多个第二图像和第一打光信息,融合生成的所述拍摄目标的3D模型;所述多个第二图像是指从所述拍摄目标的多个角度拍摄所述拍摄目标得到的多个二维图像;所述第一打光信息包括第一光照角度和第一光强,所述第一光照角度等于所述第一环境光角度,所述第一光强对应的亮度大于等于预设亮度阈值;根据所述第一图像和所述第一3D模型,融合生成所述拍摄目标的第三图像。
采用本实现方式的技术方案,可以获取拍摄目标的第一图像和第一图像对应的第一环境光角度,并且可以获取根据拍摄目标的深度信息、多个第二图像和根据亮度足够的打光光源从第一环境光角度对所述拍摄目标打光的打光信息融合生成的第一3D模型,然后可以根据第一图像和第一3D模型生成拍摄目标的第三图像。也就是说,采用该技术方案, 用户使用终端设备在暗光的环境中自拍时,终端设备可以使用根据拍摄目标的深度信息、多个第二图像和根据亮度足够的打光光源从第一环境光角度对所述拍摄目标打光的打光信息生成的3D模型,与实际拍摄的三维图像进行融合,从而使得实际得到的自拍图像立体感和细节效果更好,不会产生立体感和皮肤细节缺失的现象,用户体验更好。
结合第一方面,在第一方面第一种可能的实现方式中,获取第一3D模型,包括:获取预置的第二3D模型;所述第二3D模型为根据所述深度信息和所述多个第二图像融合生成的所述拍摄目标的3D模型;获取所述第一打光信息;根据所述第一打光信息和所述第二3D模型融合生成所述第一3D模型。
本实现方式中,可以获取预置的第二3D模型和第一打光信息,然后根据第一打光信息和第二3D模型融合生成第一3D模型,得到的第一3D模型更加准确,后续拍摄生成的图像的立体感和皮肤细节更加丰富,用户体验更好。
结合第一方面,在第一方面第二种可能的实现方式中,所述方法还包括:获取所述深度信息;获取所述多个第二图像;根据所述深度信息和所述多个第二图像融合生成所述第二3D模型。
本实现方式中,首先根据拍摄目标的深度信息和多个角度获得的拍摄目标的二维信息生成拍摄目标的第二3D模型,然后根据与第一环境光角度对应的打光信息和第二3D模型生成第一3D模型,得到的第一3D模型更加准确,后续拍摄生成的图像的立体感和皮肤细节更加丰富,用户体验更好。
结合第一方面,在第一方面第三种可能的实现方式中,所述获取第一3D模型,包括:从预置的多个第三3D模型中获取所述第一3D模型;所述多个第三3D模型为根据所述深度信息、所述多个第二图像和多个第二打光信息生成的多个所述拍摄目标的3D模型;每一个所述第二打光信息包含一个不同的第二光照角度;所述第一3D模型为与所述第一环境光角度相同的第二光照角度对应的第三3D模型。
本实现方式中,可以根据角度匹配的方式,从预置的多个第三3D模型中获取第一3D模型,使用本实现方式的方法,可以简单快速地获取到第一3D模型,从而快速获得立体感和皮肤细节更加丰富的图像,适用性较好。
结合第一方面,在第一方面第四种可能的实现方式中,获取拍摄目标的第一图像和第一环境光角度,包括:获取所述拍摄目标的环境光亮度;如果所述环境光亮度小于所述预设亮度阈值,获取所述拍摄目标的第一图像和第一环境光角度。
本实现方式中,首先获取拍摄目标的环境光亮度;然后当所述环境光亮度小于预设亮度阈值时,获取拍摄目标的第一图像和第一环境光角度。也就是说,本实现方式的技术方案中,只有当环境光亮度小于预设亮度阈值时,才会使用本申请提供的技术方案获取图像,既可以保证暗光环境中拍摄图像的立体感和细节不会缺失,又可以避免造成资源浪费。
结合第一方面,在第一方面第五种可能的实现方式中,所述方法还包括:在所述第一3D模型上标定关键点;根据所述第一图像和所述第一3D模型,融合生成所述拍摄目标的第三图像,包括:对所述第一图像与所述第一3D模型进行所述关键点的匹配;根据匹配后的所述第一图像与所述第一3D模型融合生成所述第三图像。
本实现方式中,结合关键点匹配技术,将实际拍摄的第一图像和第一3D模型融合为实际需要的图像,得到的图像的立体感和细节更加丰富,用户体验更好。
第二方面,本申请提供了一种图像生成装置,该装置包括:第一获取模块,用于获取拍摄目标的第一图像和第一环境光角度;所述第一图像为所述拍摄目标的三维图像;所述第一环境光角度用于指示拍摄所述第一图像时拍摄环境中的照射光源与所述拍摄目标之间的相对位置关系;第二获取模块,用于获取第一3D模型;所述第一3D模型为根据所述拍摄目标的深度信息、多个第二图像和第一打光信息,融合生成的所述拍摄目标的3D模型;所述多个第二图像是指从所述拍摄目标的多个角度拍摄所述拍摄目标得到的多个二维图像;所述第一打光信息包括第一光照角度和第一光强,所述第一光照角度等于所述第一环境光角度,所述第一光强对应的亮度大于等于预设亮度阈值;融合模块,用于根据所述第一图像和所述第一3D模型,融合生成所述拍摄目标的第三图像。
本实现方式的装置,可以获取拍摄目标的第一图像和第一图像对应的第一环境光角度,并且可以获取根据拍摄目标的深度信息、多个第二图像和根据亮度足够的打光光源从第一环境光角度对所述拍摄目标打光的打光信息融合生成的第一3D模型,然后可以根据第一图像和第一3D模型生成拍摄目标的第三图像。也就是说,用户使用该装置在暗光的环境中自拍时,可以使用根据拍摄目标的深度信息、多个第二图像和根据亮度足够的打光光源从第一环境光角度对所述拍摄目标打光的打光信息生成的3D模型,与实际拍摄的三维图像进行融合,从而使得实际得到的自拍图像立体感和细节效果更好,不会产生立体感和皮肤细节缺失的现象,用户体验更好。
结合第二方面,在第二方面第一种可能的实现方式中,所述第二获取模块具体用于:获取预置的第二3D模型;所述第二3D模型为根据所述深度信息和所述多个第二图像融合生成的所述拍摄目标的3D模型;获取所述第一打光信息;根据所述第一打光信息和所述第二3D模型融合生成所述第一3D模型。
本实现方式的装置,可以获取预置的第二3D模型和第一打光信息,然后根据第一打光信息和第二3D模型融合生成第一3D模型,使用本实现方式的装置,得到的第一3D模型更加准确,后续拍摄生成的图像的立体感和皮肤细节更加丰富,用户体验更好。
结合第二方面,在第二方面第二种可能的实现方式中,所述融合模块还用于:获取所述深度信息;获取所述多个第二图像;根据所述深度信息和所述多个第二图像融合生成所述第二3D模型。
本实现方式的装置,首先根据拍摄目标的深度信息和多个角度获得的拍摄目标的二维信息生成拍摄目标的第二3D模型,然后根据与第一环境光角度对应的打光信息和第二3D模型生成第一3D模型,得到的第一3D模型更加准确,后续拍摄生成的图像的立体感和皮肤细节更加丰富,用户体验更好。
结合第二方面,在第二方面第三种可能的实现方式中,所述第二获取模块具体用于:从预置的多个第三3D模型中获取所述第一3D模型;所述多个第三3D模型为根据所述深度信息、所述多个第二图像和多个第二打光信息生成的多个所述拍摄目标的3D模型;每一个所述第二打光信息包含一个不同的第二光照角度;所述第一3D模型为与所述第一环境光角度相同的第二光照角度对应的第三3D模型。
本实现方式的装置,可以根据角度匹配的方式,从预置的多个第三3D模型中获取第一3D模型,使用本实现方式的装置,可以简单快速地获取到第一3D模型,从而快速获得立体感和皮肤细节更加丰富的图像,适用性较好。
结合第二方面,在第二方面第四种可能的实现方式中,所述第一获取模块具体用于:获取所述拍摄目标的环境光亮度;如果所述环境光亮度小于所述预设亮度阈值,获取所述拍摄目标的第一图像和第一环境光角度。
本实现方式的装置,首先获取拍摄目标的环境光亮度;然后当所述环境光亮度小于预设亮度阈值时,获取拍摄目标的第一图像和所述第一图像对应的第一环境光角度。也就是说,该装置只有当环境光亮度小于预设亮度阈值时,才会使用本申请提供的技术方案获取图像,既可以保证暗光环境中拍摄图像的立体感和细节不会缺失,又可以避免造成资源浪费。
结合第二方面,在第二方面第五种可能的实现方式中,所述装置还包括:标定模块,用于在所述第一3D模型上标定关键点;所述融合模块具体用于:对所述第一图像与所述第一3D模型进行所述关键点的匹配;根据匹配后的所述第一图像与所述第一3D模型融合生成所述第三图像。
本实现方式的装置,结合关键点匹配技术,将实际拍摄的第一图像和第一3D模型融合为实际需要的图像,得到的图像的立体感和细节更加丰富,用户体验更好。
第三方面,本申请实施例提供一种装置,所述装置包括处理器,当所述处理器执行存储器中的计算机程序或指令时,如第一方面所述的方法被执行。
第四方面,本申请实施例提供一种装置,所述装置包括处理器和存储器,所述存储器用于存储计算机程序或指令;所述处理器用于执行所述存储器所存储的计算机程序或指令,以使所述装置执行如第一方面中所示的相应的方法。
第五方面,本申请实施例提供一种装置,所述装置包括处理器、存储器和收发器;所述收发器,用于接收信号或者发送信号;所述存储器,用于存储计算机程序或指令;所述处理器,用于从所述存储器调用所述计算机程序或指令执行如第一方面所述的方法。
第六方面,本申请实施例提供一种装置,所述装置包括处理器和接口电路;所述接口电路,用于接收计算机程序或指令并传输至所述处理器;所述处理器运行所述计算机程序或指令以执行如第一方面所示的相应的方法。
第七方面,本申请实施例提供一种计算机存储介质,所述计算机存储介质用于存储计算机程序或指令,当所述计算机程序或指令被执行时,使得第一方面所述的方法被实现。
第八方面,本申请实施例提供一种包括计算机程序或指令的计算机程序产品,当所述计算机程序或指令被执行时,使得第一方面所述的方法被实现。
为解决如何简单高效的消除光线不足对自拍图像中皮肤细节和立体感影响的问题,本申请提供了一种图像生成方法及装置。该方法中,可以获取拍摄目标的第一图像和第一图像对应的第一环境光角度,并且可以获取根据拍摄目标的深度信息、多个第二图像和根据亮度足够的打光光源从所述第一环境光角度对所述拍摄目标打光的打光信息生成的第一3D模型,然后可以根据第一图像和第一3D模型生成拍摄目标的第三图像。采用该方法,用户使用终端设备在暗光的环境中自拍时,终端设备可以使用根据拍摄目标的深度信息、多个第二图像和根据亮度足够的打光光源从第一环境光角度对所述拍摄目标打光的打光信息生成的3D模型,与实际拍摄的三维图像进行融合,从而使得实际得到的自拍图像立体感和细节效果更好,不会产生立体感和皮肤细节缺失的现象,用户体验更好。
附图说明
图1为本申请提供的图像生成方法的一种实施方式的流程示意图;
图2为本申请提供的获取第一3D模型的方法的一种实施方式的流程示意图;
图3为本申请提供的获取第一3D模型的方法的另一种实施方式的流程示意图;
图4为本申请提供的获取第一3D模型的方法的另一种实施方式的流程示意图;
图5为本申请提供的图像生成装置的一种实施方式的结构框图;
图6为本申请提供的芯片的一种实施方式的结构框图。
具体实施方式
下面结合附图,对本申请的技术方案进行描述。
在本申请的描述中,除非另有说明,“/”表示“或”的意思,例如,A/B可以表示A或B。本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。此外,“至少一个”是指一个或多个,“多个”是指两个或两个以上。“第一”、“第二”等字样并不对数量和执行次序进行限定,并且“第一”、“第二”等字样也并不限定一定不同。
需要说明的是,本申请中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其他实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
为了便于理解本申请的技术方案,下面先对本申请提供的技术方案的应用场景进行示例性说明。
目前,终端设备上通常设置有前置摄像头,用户可以利用该前置摄像头进行自拍,可选的,用户可以对用户身体的任意部位进行自拍,例如人脸、脖子、手臂等等。
当用户处于光线不足的暗光环境中对脸部进行自拍时,自拍得到的人脸图像中立体感和皮肤细节通常会出现缺失,人脸图像中噪点较多,用户体验较差。为了增强人脸图像的立体感和皮肤细节的效果,可以在终端设备内部设置补光灯,当终端设备检测到拍摄环境的环境光亮度小于预设亮度阈值时,可以开启补光灯进行补光,或者,可以点亮终端设备的显示屏进行补光,不过,使用内部补光灯或显示屏补光时,人脸只能局部受光,光线仍然不足,得到的自拍图像仍然会出现立体感和皮肤细节缺失的现象。
此外,为了自拍得到的人脸图像中立体感和皮肤细节不会出现缺失,还可以在终端设备外部设置补光设备,例如,可以在终端设备上安装外置的补光设备,用户在暗光环境中进行自拍时,可以手动打开外部的补光设备,进行补光,从而提升自拍图像的拍摄效果,消除图像中立体感和皮肤细节缺失的现象。不过,在终端设备外部设置补光设备,一方面,用户需要额外携带补光设备,增加用户的负担,另一方面,用户需要不断手动开启或关闭补光设备,使用不方便,用户体验较差。
基于此,如何简单高效的消除光线不足对自拍图像的立体感和皮肤细节的缺失影响,成为本领域技术人员亟待解决的技术问题。
为了解决上述技术问题,本申请实施例提供如下技术方案,其具体内容可参见下文。
本申请实施例提供的技术方案可以应用于终端设备,终端设备可以是用户设备(user equipment,UE)。示例性地,UE可以是手机(mobile phone)、平板电脑(portable android device,Pad)、个人数字助理(personal digital assistant,PDA)等。
其中,终端设备上可以设置至少一个摄像装置,该至少一个摄像装置可以用于拍摄三维图像,还可以用于拍摄二维图像,例如,该至少一个摄像装置可以为深度相机或结构光相机等,并且该至少一个摄像装置可以包括前置摄像头,用户可以使用该前置摄像头进行自拍。需要说明的是,终端设备还可以包括更多或更少的部件,例如,终端设备还可以包括处理器、存储器、收发器、显示屏等,本申请对此不进行限定。
下面结合附图,对本申请实施例提供的技术方案进行具体介绍。
参见图1,图1为本申请提供的图像生成方法的一种实施方式的流程示意图,该方法包括以下步骤:
步骤S101、获取拍摄目标的第一图像和第一环境光角度。
其中,拍摄目标是指待进行拍摄的人、人体的某个部位或者物体等。例如,用户使用终端设备自拍时,拍摄目标可以为人脸、脖子、手臂等。需要说明的是,本申请提供的图像生成方法不限于自拍的应用场景,同样适用于对任意人或物的其它拍摄场景,本申请对此不进行限定。下文以自拍时,拍摄目标为人脸为例,对本申请提供的技术方案的实施例进行详细说明。
第一图像为拍摄目标的三维图像,自拍时,可以使用终端设备的前置摄像头拍摄得到。进一步,在用户自拍时,可以获取用户自拍生成的二维图像,同时获取拍摄目标的深度信息,然后根据该二维图像和该深度信息生成所述第一图像。第一环境光角度用于指示拍摄所述第一图像时拍摄环境中的照射光源与所述拍摄目标之间的相对位置关系。获取到拍摄目标的第一图像后,可以采用结构光技术分析第一图像,获取第一环境光角度,即采用结构光技术分析得到拍摄第一图像时,拍摄环境中的照射光源与拍摄目标之间的相对位置关系。
例如,终端设备的系统中可以建立如下三维坐标系:人脸的正前方为Z轴的正方向,人脸的正上方为Y轴的正方向,两耳的连线为X轴的方向,且X轴的正方向由左耳指向右耳。基于该三维坐标系,第一环境光角度可以为由Y轴正方向向X轴正方向偏转30度,表示照射光源位于人脸右耳侧偏离右耳30度的方向上;第一环境光角度还可以为由Y轴正方向先向X轴正方向偏转30度,然后再向Z轴正方向偏转20度,表示照射光源位于由人脸正上方向右耳侧偏转30度,然后再向人脸的正前方偏转20度的方向上;以此类推,第一环境光角度还可以为其它内容,此处不再一一列举。
此外,在本申请一些可选的实施例中,还可以首先获取拍摄目标的环境光亮度,即拍摄环境中照射光源的亮度;然后在环境光亮度小于预设亮度阈值时,获取拍摄目标的第一图像和所述第一环境光角度。也就是说,只有在暗光的环境中,终端设备才会获取拍摄目标的第一图像和所述第一环境光角度,并执行本方案的后续步骤,从而可以更加灵活高效的获得不存在立体感缺失和细节缺失的图像,用户体验更好。
其中,预设亮度阈值可以根据实际应用场景的需求设置。
步骤S102、获取第一3D模型。
其中,第一3D模型是指根据所述拍摄目标的深度信息、多个第二图像和第一打光信息,融合生成的所述拍摄目标的3D模型。
获取第一3D模型的实现方式可以包括多种,例如:
可选的,获取第一3D模型的实现方式可以参见图2,图2为本申请提供的获取第一 3D模型的方法的一种实施方式的流程示意图。结合图2可知,该方法可以包括以下步骤:
步骤S201、获取拍摄目标的深度信息。
其中,拍摄目标的深度信息是指拍摄目标在视觉深度上的信息,例如,人脸上鼻子的鼻尖突出面部的高度,人脸上额头突出面部的高度,人脸上嘴巴的唇珠突出面部的高度等。
终端设备可以利用设置于该终端设备的摄像装置,例如前置摄像头,对拍摄目标进行三维扫描,获取拍摄目标的深度信息。
步骤S202、获取所述拍摄目标的多个第二图像。
其中,所述多个第二图像是指从所述拍摄目标的多个角度拍摄所述拍摄目标得到的多个二维图像。
从所述拍摄目标的多个角度拍摄所述拍摄目标得到多个二维图像是指多次改变摄像装置与拍摄目标之间的相对位置关系对拍摄目标进行拍摄,每次拍摄后得到一个二维图像,多次拍摄后得到多个二维图像。
以自拍时拍摄目标为人脸举例说明,可以从人脸的正前方、人脸的两侧、人脸的上方、人脸的下方等多个角度分别对人脸进行拍摄,得到人脸的多个二维图像。
在执行步骤S202时,获取拍摄目标的多个第二图像的实现方式可以包括多种,例如:
示例性的,可以在首次按照本申请图1所示的方法生成图像之前,预先从拍摄目标的多个角度拍摄所述拍摄目标,即多次改变终端设备的摄像装置与拍摄目标之间的相对位置,每次改变相对位置后拍摄得到一个第二图像,多次改变相对位置后得到多个第二图像,然后将得到的多个第二图像存储于终端设备中,之后在执行步骤S202时,可以直接从终端设备中读取预先存储的多个第二图像,例如,可以预先从人脸的正前方、正上方、由人脸的正前方向人脸的左侧偏转任意角度、由人脸的正前方向人脸的右侧偏转任意角度、由人脸的正前方向人脸的上方偏转任意角度、以及由人脸的正前方向人脸的下方偏转任意角度等,自拍获得人脸的多个二维图像,然后将该人脸的多个二维图像作为第二图像存储于终端设备中,在执行步骤S202时,可以直接从终端设备中读取预先存储的人脸的多个二维图像。
示例性的,还可以在执行步骤S202时,首先输出一个提示信息,该提示信息用于提示用户使用终端设备的摄像装置从拍摄目标的多个角度拍摄拍摄目标,得到多个拍摄目标的二维图像;然后,将用户使用终端设备的摄像装置从拍摄目标的多个角度拍摄拍摄目标得到的多个二维图像确定为第二图像,从而获取到多个第二图像。
其中,提示信息可以为文字信息,也可以为语音信息等,本申请对此不进行限定。
示例性的,还可以在首次按照本申请图1所示的方法生成图像时,在需要获取拍摄目标的多个第二图像时,首先输出一个提示信息,提示用户使用终端设备的摄像装置从拍摄目标的多个角度拍摄拍摄目标,得到所述拍摄目标的多个第二图像。然后,可以将首次得到的多个第二图像存储于终端设备中,之后,每次需要获取所述拍摄目标的多个第二图像时,直接从终端设备中读取预先存储的所述多个第二图像即可。
步骤S203、根据所述深度信息和所述多个第二图像融合生成第二3D模型。
终端设备获取到拍摄目标的深度信息和多个第二图像后,可以通过融合处理技术,根据所述深度信息和所述多个第二图像融合生成拍摄目标的第二3D模型。
步骤S204、获取第一打光信息。
其中,第一打光信息是指打光光源从第一光照角度对待打光目标打光的信息。所述打光光源可以为实体光源,也可以为根据3D打光技术设置的虚拟光源,待打光目标可以为拍摄目标,例如人脸,也可以为虚拟目标,例如所述第二3D模型。
第一打光信息可以包括第一光照角度、第一光强,还可以包括第一色温等。所述第一光照角度等于所述第一环境光角度。所述第一光强对应的亮度大于等于所述预设亮度阈值,可以弥补暗光环境中照射光源亮度不足的缺陷,避免拍摄得到的图像出现立体感和细节缺失的现象。所述第一色温为打光光源的色温,可以等于拍摄环境中照射光源的色温。
获取第一打光信息的方式可以包括多种,例如:
示例性的,可以在首次按照本申请图1所示的方法生成图像之前,预先基于多个第二光照角度,使用3D打光技术,计算得到从每个第二光照角度对所述第二3D模型模拟打光的第二打光信息,从而得到多个第二打光信息,每一个第二打光信息对应一个不同的第二光照角度,然后将每一个第二打光信息与其对应的第二光照角度对应存储于终端设备中。之后在执行步骤S204时,可以从预先存储的多个第二打光信息中读取与第一环境光角度相同的第二光照角度对应的第二打光信息,将该第二打光信息确定为第一打光信息。
示例性的,还可以在执行步骤S204时,将所述第一环境光角度确定为第一光照角度,使用3D打光技术,计算得到从第一光照角度对所述第二3D模型模拟打光的打光信息,将该打光信息确定为第一打光信息。
示例性的,还可以在首次按照本申请图1所示的方法生成图像之前,提示用户预先使用外置打光光源从多个第二光照角度对拍摄目标打光,利用摄像装置获取从每个第二光照角度打光的第二打光信息,得到多个第二打光信息,每一个第二打光信息对应一个不同的第二光照角度,然后将每一个第二打光信息与其对应的第二光照角度对应存储于终端设备中。之后在执行步骤S204时,可以从预先存储的多个第二打光信息中读取与第一环境光角度相同的第二光照角度对应的第二打光信息,将该第二打光信息确定为第一打光信息。其中,外置打光光源的亮度大于等于所述预设亮度阈值,可以弥补暗光环境中照射光源亮度不足的缺陷,避免拍摄得到的图像出现立体感和细节缺失的现象。
步骤S205、根据所述第一打光信息和所述第二3D模型融合生成第一3D模型。
获取到第一打光信息和第二3D模型后,可以结合3D打光技术,根据所述第一打光信息和所述第二3D模型,融合生成第一3D模型。
需要说明的是,还可以在首次按照本申请图1所示的方法生成图像之前,预先按照图2所示实施例中步骤S201至步骤S203所示的方法,生成所述第二3D模型,并将所述第二3D模型存储于终端设备中,然后在每次按照本申请图2所示的方法获取第一3D模型时,不再执行步骤S201至步骤S203,而是直接从终端设备中读取预先存储的所述第二3D模型,这样,可以更加快速便捷的获取到第一3D模型,用户体验更好。
或者,还可以在首次按照本申请图1所示的方法生成图像时,按照图2所示实施例中的步骤S201至步骤S203所示的方法,生成所述第二3D模型,然后将所述第二3D模型存储于终端设备中,之后每次按照本申请图2所示的方法获取第一3D模型时,不再执行步骤S201至步骤S203,而是直接从终端设备中读取存储的所述第二3D模型。
可选的,获取第一3D模型的实现方式还可以参见图3,图3为本申请提供的获取第一3D模型的方法的另一种实施方式的流程示意图。结合图3可知,该方法可以包括以下步骤:
步骤S301、获取拍摄目标的深度信息。
步骤S302、获取所述拍摄目标的多个第二图像。
步骤S303、根据所述深度信息和所述多个第二图像融合生成第二3D模型。
步骤S301至步骤S303的实现方式可以参考图2所示实施例中步骤S201至步骤S203的实现方式,此处不再赘述。
步骤S304、获取多个第二打光信息。
其中,所述多个第二打光信息是指所述打光光源从多个第二光照角度对待打光目标打光得到的多个打光信息,每一个第二打光信息对应一个不同的第二光照角度。
每一个第二打光信息可以包括第二光照角度、第二光强和第二色温,其中,第二光强对应的亮度大于等于所述预设亮度阈值,可以弥补暗光环境中照射光源亮度不足的缺陷,避免拍摄得到的图像出现立体感和细节缺失的现象,第二色温等于拍摄环境中照射光源的色温。
获取多个第二打光信息的实现方式可以包括多种,例如:
示例性的,可以在首次按照本申请图1所示的方法生成图像之前,预先基于多个第二光照角度,使用3D打光技术,计算得到从每个第二光照角度对所述第二3D模型模拟打光的第二打光信息,得到多个第二打光信息,每一个第二打光信息对应一个第二光照角度,然后将每一个第二打光信息与其对应的第二光照角度对应存储于终端设备中。之后在执行步骤S304时,可以直接从终端设备中读取预先存储的所述多个第二打光信息。
示例性的,还可以在首次按照本申请图1所示的方法生成图像之前,提示用户预先使用外置打光光源从多个第二光照角度对拍摄目标打光,利用摄像装置获取从每一个第二光照角度打光的打光信息,然后将该打光信息作为第二打光信息与其对应的第二光照角度对应存储于终端设备中。在执行步骤S304时,可以直接从终端设备中读取预先存储的所述多个第二打光信息。
示例性的,还可以在执行步骤S304时,基于多个第二光照角度,使用3D打光技术,计算得到从每个第二光照角度对所述第二3D模型模拟打光的第二打光信息,得到多个第二打光信息。
步骤S305、根据所述多个第二打光信息和所述第二3D模型融合生成多个第三3D模型。
获取到多个第二打光信息和第二3D模型后,可以结合3D打光技术,根据每一个第二打光信息和所述第二3D模型,融合生成一个第三3D模型。基于每一个第二打光信息对应一个不同的第二光照角度,每一个第三3D模型也对应一个不同的第二光照角度。
步骤S306、从所述多个第三3D模型中获取第一3D模型。
获取到多个第三3D模型后,可以从得到的多个第三3D模型中挑选出与第一环境光角度相同的第二光照角度对应的第三3D模型,将该第三3D模型确定为第一3D模型。
需要说明的是,还可以在首次按照本申请图1所示的方法生成图像之前,预先按照图3所示实施例中步骤S301至步骤S303所示的方法,生成所述第二3D模型,并将所述第二3D模型存储于终端设备中,然后在每次按照本申请图3所示的方法获取第一3D模型时,不再执行步骤S301至步骤S303,而是直接从终端设备中读取预先存储的所述第二3D模型,这样,可以更加快速便捷的获取到第一3D模型,用户体验更好。
或者,还可以在首次按照本申请图1所示的方法生成图像时,按照图3所示实施例中 的步骤S301至步骤S303所示的方法,生成所述第二3D模型,然后将所述第二3D模型存储于终端设备中,之后每次按照本申请图3所示的方法获取第一3D模型时,不再执行步骤S301至步骤S303,而是直接从终端设备中读取存储的所述第二3D模型。
同理,还可以在首次按照本申请图1所示的方法生成图像之前,预先按照图3所示实施例中步骤S301至步骤S305所示的方法,生成所述多个第三3D模型,并将每一个所述第三3D模型与其对应的第二光照角度对应存储于终端设备中,然后在每次按照本申请图3所示的方法获取第一3D模型时,不再执行步骤S301至步骤S305,而是直接从终端设备中读取预先存储的所述多个第三3D模型,这样,可以更加快速便捷的获取到第一3D模型,用户体验更好。
或者,还可以在首次按照本申请图1所示的方法生成图像时,按照图3所示实施例中的步骤S301至步骤S305所示的方法,生成所述多个第三3D模型,然后将每一个所述第三3D模型与其对应的第二光照角度对应存储于终端设备中,之后每次按照本申请图3所示的方法获取第一3D模型时,不再执行步骤S301至步骤S305,而是直接从终端设备中读取存储的所述多个第三3D模型。
可选的,获取第一3D模型的实现方式还可以参见图4,图4为本申请提供的获取第一3D模型的方法的另一种实施方式的流程示意图。结合图4可知,该方法可以包括以下步骤:
步骤S401、获取拍摄目标的深度信息。
步骤S402、获取所述拍摄目标的多个第二图像。
步骤S401至步骤S402的实现方式可以参考图2所示实施例中步骤S201至步骤S202的实现方式,此处不再赘述。
步骤S403、获取第一打光信息。
步骤S403的实现方式可以参考图2所示实施例中步骤S204的实现方式,此处不再赘述。
步骤S404、根据所述深度信息、所述多个第二图像和所述第一打光信息,融合生成第一3D模型。
获取到拍摄目标的深度信息、多个第二图像和第一打光信息后,可以结合3D打光技术和融合处理技术,根据所述深度信息、所述多个第二图像和所述第一打光信息,融合生成第一3D模型。
步骤S103、根据所述第一图像和所述第一3D模型,融合生成所述拍摄目标的第三图像。
获取到第一图像和第一3D模型后,可以根据第一图像和第一3D模型,融合生成拍摄目标的第三图像。第三图像为三维图像,第三图像对应的二维图像可以作为用户实际使用的拍摄图像,可以在终端设备的显示屏上显示第三图像对应的二维图像。
需要说明的是,如果拍摄环境中照射光源的亮度大于等于所述预设亮度阈值,可以不使用本申请图1所示的方法获取二维图像,直接将用户拍摄得到的二维图像显示在显示屏上,也可以使用本申请图1所示的方法,首先生成第三图像,然后将第三图像对应的二维图像显示在显示屏上,本申请对此不进行限定。
可选的,还可以在拍摄目标的第一3D模型、第二3D模型和第三3D模型上标定关键点,例如,可以在人脸模型上标定出眼睛、鼻子、嘴巴等关键点。同样,还可以在第一图 像上标定关键点,这样,在融合生成第三图像之前,可以先对第一图像与第一3D模型进行关键点的匹配,然后根据匹配后的第一图像和第一3D模型融合生成拍摄目标的第三图像。
本申请实施例提供的图像生成方法,可以获取拍摄目标的第一图像和第一图像对应的第一环境光角度,并且可以获取根据拍摄目标的深度信息、多个第二图像和根据亮度足够的打光光源从所述第一环境光角度对所述拍摄目标打光的打光信息生成的第一3D模型,然后可以根据第一图像和第一3D模型生成拍摄目标的第三图像。采用该方法,用户使用终端设备在暗光的环境中自拍时,终端设备可以使用根据拍摄目标的深度信息、多个第二图像和根据亮度足够的打光光源从第一环境光角度对所述拍摄目标打光的打光信息生成的3D模型,与实际拍摄的三维图像进行融合,从而使得实际得到的自拍图像立体感和细节效果更好,不会产生立体感和皮肤细节缺失的现象,用户体验更好。
本文中描述的各个方法实施例可以为独立的方案,也可以根据内在逻辑进行组合,这些方案都落入本申请的保护范围中。
可以理解的是,上述各个方法实施例中,由终端设备实现的方法和操作,也可以由可用于终端设备的部件(例如芯片或者电路)实现。
上述主要从每一个网元之间交互的角度对本申请实施例提供的方案进行了介绍。可以理解的是,每一个网元,例如终端设备,为了实现上述功能,其包含了执行每一个功能相应的硬件结构或软件模块,或两者结合。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例可以根据上述方法示例对终端设备进行功能模块的划分,例如,可以对应每一个功能划分每一个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。下面以采用对应每一个功能划分每一个功能模块为例进行说明。
以上,结合图1至图4详细说明了本申请实施例提供的方法。以下,结合图5和图6详细说明本申请实施例提供的装置。应理解,装置实施例的描述与方法实施例的描述相互对应,因此,未详细描述的内容可以参见上文方法实施例,为了简洁,这里不再赘述。
参见图5,图5为本申请提供的图像生成装置的一种实施方式的结构框图。如图5所示,该装置500可以包括:第一获取模块501、第二获取模块502和融合模块503。该装置500可以用于执行上文方法实施例中终端设备所执行的动作。
例如:第一获取模块501,用于获取拍摄目标的第一图像和第一环境光角度;所述第一图像为所述拍摄目标的三维图像;所述第一环境光角度用于指示拍摄所述第一图像时拍摄环境中的照射光源与所述拍摄目标之间的相对位置关系;
第二获取模块502,用于获取第一3D模型;所述第一3D模型为根据所述拍摄目标的深度信息、多个第二图像和第一打光信息,融合生成的所述拍摄目标的3D模型;所述多 个第二图像是指从所述拍摄目标的多个角度拍摄所述拍摄目标得到的多个二维图像;所述第一打光信息包括第一光照角度和第一光强,所述第一光照角度等于所述第一环境光角度,所述第一光强对应的亮度大于等于预设亮度阈值;
融合模块503,用于根据所述第一图像和所述第一3D模型,融合生成所述拍摄目标的第三图像。
可选的,所述第二获取模块502具体用于:获取预置的第二3D模型;所述第二3D模型为根据所述深度信息和所述多个第二图像融合生成的所述拍摄目标的3D模型;获取所述第一打光信息;根据所述第一打光信息和所述第二3D模型融合生成所述第一3D模型。
可选的,所述融合模块503还用于:获取所述深度信息;获取所述多个第二图像;根据所述深度信息和所述多个第二图像融合生成所述第二3D模型。
可选的,所述第二获取模块502具体用于:从预置的多个第三3D模型中获取所述第一3D模型;所述多个第三3D模型为根据所述深度信息、所述多个第二图像和多个第二打光信息生成的多个所述拍摄目标的3D模型;每一个所述第二打光信息包含一个不同的第二光照角度;所述第一3D模型为与所述第一环境光角度相同的第二光照角度对应的第三3D模型。
可选的,所述第一获取模块501具体用于:获取所述拍摄目标的环境光亮度;如果所述环境光亮度小于所述预设亮度阈值,获取所述拍摄目标的第一图像和第一环境光角度。
可选的,所述装置500还可以包括:标定模块,用于在所述第一3D模型上标定关键点;所述融合模块503具体用于:对所述第一图像与所述第一3D模型进行所述关键点的匹配;根据匹配后的所述第一图像与所述第一3D模型融合生成所述第三图像。
也就是说,该装置500可实现对应于根据本申请实施例的图1、图2、图3或图4所示方法中的终端设备执行的步骤或者流程,该装置500可以包括用于执行图1、图2、图3或图4所示方法中的终端设备执行的方法的模块。并且,该装置500中的各模块和上述其他操作和/或功能分别为了实现图1、图2、图3或图4所示方法的相应步骤。例如,一种可能的设计中,该装置500中的第一获取模块501可以用于执行图1所示方法中的步骤S101,第二获取模块502可以用于执行图1所示方法中的步骤S102,融合模块503可以用于执行图1所示方法中的步骤S103。另一种可能的设计中,该装置500中的第二获取模块502还可以用于执行图2所示方法中的步骤S201至步骤S205。另一种可能的设计中,该装置500中的第二获取模块502还可以用于执行图3所示方法中的步骤S301至步骤S306。另一种可能的设计中,该装置500中的第二获取模块502还可以用于执行图4所示方法中的步骤S401至步骤S404。
应理解,各模块执行上述相应步骤的具体过程在上述方法实施例中已经详细说明,为了简洁,在此不再赘述。
此外,该装置500可以为终端设备,该终端设备可以执行上述方法实施例中终端设备的功能,或者实现上述方法实施例中终端设备执行的步骤或者流程。
该终端设备可以包括处理器和收发器。可选的,该终端设备还可以包括存储器。其中,处理器、收发器和存储器之间可以通过内部连接通路互相通信,传递控制和/或数据信号,该存储器用于存储计算机程序或指令,该处理器用于从该存储器中调用并运行该计算机程序或指令,以控制该收发器接收信号和/或发送信号。可选的,终端设备还可以包括天线, 用于将收发器输出的上行数据或上行控制信令通过无线信号发送出去。
上述处理器可以和存储器合成一个处理装置,处理器用于执行存储器中存储的计算机程序或指令来实现上述功能。具体实现时,该存储器也可以集成在处理器中,或者独立于处理器。该处理器可以与图5中的融合模块对应。
上述收发器也可以称为收发单元。收发器可以包括接收器(或称接收机、接收电路)和/或发射器(或称发射机、发射电路)。其中,接收器用于接收信号,发射器用于发送信号。
应理解,上述终端设备能够实现上文所示方法实施例中涉及终端设备的各个过程。终端设备中的各个模块的操作和/或功能,分别为了实现上述方法实施例中的相应流程。具体可参见上述方法实施例中的描述,为避免重复,此处适当省略详述描述。
可选的,上述终端设备还可以包括电源,用于给终端设备中的各种器件或电路提供电源。
除此之外,为了使得上述终端设备的功能更加完善,该终端设备还可以包括输入单元、显示单元、音频电路、摄像头和传感器等中的一个或多个,所述音频电路还可以包括扬声器、麦克风等。
本申请实施例还提供了一种处理装置,包括处理器和接口。所述处理器可用于执行上述方法实施例中的方法。
应理解,上述处理装置可以是一个芯片。例如,参见图6,图6为本申请提供的芯片的一种实施方式的结构框图。图6所示的芯片可以为通用处理器,也可以为专用处理器。该芯片600包括处理器601。其中,处理器601可以用于支持图5所示的装置执行图1、图2、图3或图4所示的技术方案。
可选的,该芯片600还可以包括收发器602,收发器602用于接受处理器601的控制,用于支持图5所示的装置执行图1、图2、图3或图4所示的技术方案。可选的,图6所示的芯片600还可以包括:存储介质603。
需要说明的是,图6所示的芯片可以使用下述电路或者器件来实现:一个或多个现场可编程门阵列(field programmable gate array,FPGA)、可编程逻辑器件(programmable logic device,PLD)、专用集成芯片(application specific integrated circuit,ASIC)、系统芯片(system on chip,SoC)、中央处理器(central processor unit,CPU)、网络处理器(network processor,NP)、数字信号处理电路(digital signal processor,DSP)、微控制器(micro controller unit,MCU),控制器、状态机、门逻辑、分立硬件部件、任何其他适合的电路、或者能够执行本申请通篇所描述的各种功能的电路的任意组合。
在实现过程中,上述方法的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。为避免重复,这里不再详细描述。
应注意,本申请实施例中的处理器可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路或者软 件形式的指令完成。上述的处理器可以是通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现场可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。
可以理解,本申请实施例中的存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic RAM,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。应注意,本文描述的系统和方法的存储器旨在包括但不限于这些和任意其它适合类型的存储器。
根据本申请实施例提供的方法,本申请实施例还提供一种计算机程序产品,该计算机程序产品包括:计算机程序或指令,当该计算机程序或指令在计算机上运行时,使得该计算机执行图1、图2、图3或图4所示实施例中任意一个实施例的方法。
根据本申请实施例提供的方法,本申请实施例还提供一种计算机存储介质,该计算机存储介质存储有计算机程序或指令,当该计算机程序或指令在计算机上运行时,使得该计算机执行图1、图2、图3或图4所示实施例中任意一个实施例的方法。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机程序或指令。在计算机上加载和执行所述计算机程序或指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机程序或指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机程序或指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,高密度数字视频光盘(digital video disc,DVD))、或者半导体介质(例如,固态硬盘(solid state disc,SSD))等。
在本说明书中使用的术语“部件”、“模块”、“系统”等用于表示计算机相关的实体、硬件、固件、硬件和软件的组合、软件、或执行中的软件。例如,部件可以是但不限于,在处理器上运行的进程、处理器、对象、可执行文件、执行线程、程序和/或计算机。通过图示,在计算设备上运行的应用和计算设备都可以是部件。一个或多个部件可驻留在进程和/或执行线程中,部件可位于一个计算机上和/或分布在两个或更多个计算机之间。此外,这些部件可从在上面存储有各种数据结构的各种计算机可读介质执行。部件可例如根据具有一个或多个数据分组(例如来自与本地系统、分布式系统和/或网络间的另一部件交互的二个部件的数据,例如通过信号与其它系统交互的互联网)的信号通过本地和/或远程进程来通信。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各种说明性逻辑块(illustrative logical block)和步骤(step),能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个模块或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能模块可以集成在一个处理单元中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
上述本申请实施例提供的图像生成装置、终端设备、计算机存储介质、计算机程序产品、芯片均用于执行上文所提供的方法,因此,其所能达到的有益效果可参考上文所提供的方法对应的有益效果,在此不再赘述。
应理解,在本申请的各个实施例中,各步骤的执行顺序应以其功能和内在逻辑确定,各步骤序号的大小并不意味着执行顺序的先后,不对实施例的实施过程构成限定。
本说明书的各个部分均采用递进的方式进行描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点介绍的都是与其他实施例不同之处。尤其,图像生成装置、终端设备、计算机存储介质、计算机程序产品、芯片的实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例中的说明即可。
尽管已描述了本申请的优选实施例,但本领域内的技术人员一旦得知了基本创造性概念,则可对这些实施例作出另外的变更和修改。所以,所附权利要求意欲解释为包括优选实施例以及落入本申请范围的所有变更和修改。
以上所述的本申请实施方式并不构成对本申请保护范围的限定。

Claims (17)

  1. 一种图像生成方法,其特征在于,包括:
    获取拍摄目标的第一图像和第一环境光角度;所述第一图像为所述拍摄目标的三维图像;所述第一环境光角度用于指示拍摄所述第一图像时拍摄环境中的照射光源与所述拍摄目标之间的相对位置关系;
    获取第一3D模型;所述第一3D模型为根据所述拍摄目标的深度信息、多个第二图像和第一打光信息,融合生成的所述拍摄目标的3D模型;所述多个第二图像是指从所述拍摄目标的多个角度拍摄所述拍摄目标得到的多个二维图像;所述第一打光信息包括第一光照角度和第一光强,所述第一光照角度等于所述第一环境光角度,所述第一光强对应的亮度大于等于预设亮度阈值;
    根据所述第一图像和所述第一3D模型,融合生成所述拍摄目标的第三图像。
  2. 根据权利要求1所述的图像生成方法,其特征在于,所述获取第一3D模型,包括:
    获取预置的第二3D模型;所述第二3D模型为根据所述深度信息和所述多个第二图像融合生成的所述拍摄目标的3D模型;
    获取所述第一打光信息;
    根据所述第一打光信息和所述第二3D模型融合生成所述第一3D模型。
  3. 根据权利要求2所述的图像生成方法,其特征在于,所述方法还包括:
    获取所述深度信息;
    获取所述多个第二图像;
    根据所述深度信息和所述多个第二图像融合生成所述第二3D模型。
  4. 根据权利要求1所述的图像生成方法,其特征在于,所述获取第一3D模型,包括:
    从预置的多个第三3D模型中获取所述第一3D模型;所述多个第三3D模型为根据所述深度信息、所述多个第二图像和多个第二打光信息生成的多个所述拍摄目标的3D模型;每一个所述第二打光信息包含一个不同的第二光照角度;所述第一3D模型为与所述第一环境光角度相同的第二光照角度对应的第三3D模型。
  5. 根据权利要求1至4任意一项所述的图像生成方法,其特征在于,所述获取拍摄目标的第一图像和第一环境光角度,包括:
    获取所述拍摄目标的环境光亮度;
    如果所述环境光亮度小于所述预设亮度阈值,获取所述拍摄目标的第一图像和第一环境光角度。
  6. 根据权利要求1至5任意一项所述的图像生成方法,其特征在于,所述方法还包括:
    在所述第一3D模型上标定关键点;
    根据所述第一图像和所述第一3D模型,融合生成所述拍摄目标的第三图像,包括:
    对所述第一图像与所述第一3D模型进行所述关键点的匹配;
    根据匹配后的所述第一图像与所述第一3D模型融合生成所述第三图像。
  7. 一种图像生成装置,其特征在于,包括:
    第一获取模块,用于获取拍摄目标的第一图像和第一环境光角度;所述第一图像为所述拍摄目标的三维图像;所述第一环境光角度用于指示拍摄所述第一图像时拍摄环境中的照射光源与所述拍摄目标之间的相对位置关系;
    第二获取模块,用于获取第一3D模型;所述第一3D模型为根据所述拍摄目标的深度信息、多个第二图像和第一打光信息,融合生成的所述拍摄目标的3D模型;所述多个第二图像是指从所述拍摄目标的多个角度拍摄所述拍摄目标得到的多个二维图像;所述第一打光信息包括第一光照角度和第一光强,所述第一光照角度等于所述第一环境光角度,所述第一光强对应的亮度大于等于预设亮度阈值;
    融合模块,用于根据所述第一图像和所述第一3D模型,融合生成所述拍摄目标的第三图像。
  8. 根据权利要求7所述的图像生成装置,其特征在于,所述第二获取模块具体用于:
    获取预置的第二3D模型;所述第二3D模型为根据所述深度信息和所述多个第二图像融合生成的所述拍摄目标的3D模型;
    获取所述第一打光信息;
    根据所述第一打光信息和所述第二3D模型融合生成所述第一3D模型。
  9. 根据权利要求8所述的图像生成装置,其特征在于,所述融合模块还用于:
    获取所述深度信息;
    获取所述多个第二图像;
    根据所述深度信息和所述多个第二图像融合生成所述第二3D模型。
  10. 根据权利要求7所述的图像生成装置,其特征在于,所述第二获取模块具体用于:
    从预置的多个第三3D模型中获取所述第一3D模型;所述多个第三3D模型为根据所述深度信息、所述多个第二图像和多个第二打光信息生成的多个所述拍摄目标的3D模型;每一个所述第二打光信息包含一个不同的第二光照角度;所述第一3D模型为与所述第一环境光角度相同的第二光照角度对应的第三3D模型。
  11. 根据权利要求7至10任意一项所述的图像生成装置,其特征在于,所述第一获取模块具体用于:
    获取所述拍摄目标的环境光亮度;
    如果所述环境光亮度小于所述预设亮度阈值,获取所述拍摄目标的第一图像和第一环境光角度。
  12. 根据权利要求7至11任意一项所述的图像生成装置,其特征在于,所述装置还包括:
    标定模块,用于在所述第一3D模型上标定关键点;
    所述融合模块具体用于:
    对所述第一图像与所述第一3D模型进行所述关键点的匹配;
    根据匹配后的所述第一图像与所述第一3D模型融合生成所述第三图像。
  13. 一种装置,其特征在于,包括处理器和存储器;
    所述处理器,用于执行所述存储器中存储的计算机程序或指令,当所述计算机程 序或指令被执行时,如权利要求1至6中任意一项所述的方法被执行。
  14. 一种装置,其特征在于,包括处理器、收发器和存储器;
    所述收发器,用于接收信号或者发送信号;所述处理器,用于执行所述存储器中存储的计算机程序或指令,当所述计算机程序或指令被执行时,使得所述装置实现权利要求1至6中任意一项所述的方法。
  15. 一种计算机存储介质,其特征在于,包括计算机程序或指令,当所述计算机程序或指令被执行时,如权利要求1至6中任意一项所述的方法被执行。
  16. 一种计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得计算机执行如权利要求1至6中任意一项所述的方法。
  17. 一种芯片,其特征在于,包括处理器,所述处理器与存储器耦合,用于执行所述存储器中存储的计算机程序或指令,当所述计算机程序或指令被执行时,如权利要求1至6中任意一项所述的方法被执行。
PCT/CN2021/087574 2020-04-30 2021-04-15 图像生成方法及装置 WO2021218649A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21796787.6A EP4131933A4 (en) 2020-04-30 2021-04-15 IMAGE PRODUCTION METHOD AND APPARATUS
US17/922,246 US20230177768A1 (en) 2020-04-30 2021-04-15 Image Generation Method and Apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010364283.3 2020-04-30
CN202010364283.3A CN111556255B (zh) 2020-04-30 2020-04-30 图像生成方法及装置

Publications (1)

Publication Number Publication Date
WO2021218649A1 true WO2021218649A1 (zh) 2021-11-04

Family

ID=72004301

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/087574 WO2021218649A1 (zh) 2020-04-30 2021-04-15 图像生成方法及装置

Country Status (4)

Country Link
US (1) US20230177768A1 (zh)
EP (1) EP4131933A4 (zh)
CN (1) CN111556255B (zh)
WO (1) WO2021218649A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111556255B (zh) * 2020-04-30 2021-10-01 华为技术有限公司 图像生成方法及装置
CN112788244B (zh) * 2021-02-09 2022-08-09 维沃移动通信(杭州)有限公司 拍摄方法、拍摄装置和电子设备
CN112967201B (zh) * 2021-03-05 2024-06-25 厦门美图之家科技有限公司 图像光照调节方法、装置、电子设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107566751A (zh) * 2017-09-30 2018-01-09 北京金山安全软件有限公司 图像处理方法、装置、电子设备及介质
US10102654B1 (en) * 2015-07-28 2018-10-16 Cascade Technologies, Inc. System and method for a scalable interactive image-based visualization environment of computational model surfaces
CN108682050A (zh) * 2018-08-16 2018-10-19 Oppo广东移动通信有限公司 基于三维模型的美颜方法和装置
CN109242947A (zh) * 2017-07-11 2019-01-18 中慧医学成像有限公司 三维超声图像显示方法
CN111556255A (zh) * 2020-04-30 2020-08-18 华为技术有限公司 图像生成方法及装置

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8686981B2 (en) * 2010-07-26 2014-04-01 Apple Inc. Display brightness control based on ambient light angles
US11632520B2 (en) * 2011-11-14 2023-04-18 Aaron Chien LED light has built-in camera-assembly to capture colorful digital-data under dark environment
CN104994371B (zh) * 2015-06-25 2017-03-01 苏州佳世达光电有限公司 影像获取装置及影像调整方法
CN107438165A (zh) * 2016-05-25 2017-12-05 鸿富锦精密电子(郑州)有限公司 具有自拍辅助功能的电子装置及自拍辅助方法
CN106910247B (zh) * 2017-03-20 2020-10-02 厦门黑镜科技有限公司 用于生成三维头像模型的方法和装置
IT201700033593A1 (it) * 2017-03-27 2018-09-27 3Dflow Srl Metodo di generazione di un modello 3D basato su elaborazione Structure from Motion e stereo fotometrico di immagini 2D sparse
US10210664B1 (en) * 2017-05-03 2019-02-19 A9.Com, Inc. Capture and apply light information for augmented reality
CN107506714B (zh) * 2017-08-16 2021-04-02 成都品果科技有限公司 一种人脸图像重光照的方法
KR102370763B1 (ko) * 2017-09-26 2022-03-04 삼성전자주식회사 외부 광에 기반하여 카메라를 제어하는 전자 장치 및 제어 방법
CN107580209B (zh) * 2017-10-24 2020-04-21 维沃移动通信有限公司 一种移动终端的拍照成像方法及装置
US10643375B2 (en) * 2018-02-26 2020-05-05 Qualcomm Incorporated Dynamic lighting for objects in images
CN108537870B (zh) * 2018-04-16 2019-09-03 太平洋未来科技(深圳)有限公司 图像处理方法、装置及电子设备
CN108573480B (zh) * 2018-04-20 2020-02-11 太平洋未来科技(深圳)有限公司 基于图像处理的环境光补偿方法、装置及电子设备
CN109304866A (zh) * 2018-09-11 2019-02-05 魏帅 使用3d摄像头自助拍照打印3d人像的一体设备及方法
CN109325437B (zh) * 2018-09-17 2021-06-22 北京旷视科技有限公司 图像处理方法、装置和系统
CN109447931B (zh) * 2018-10-26 2022-03-15 深圳市商汤科技有限公司 图像处理方法及装置
CN113206921B (zh) * 2020-01-31 2024-02-27 株式会社美迪特 外部光干扰去除方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10102654B1 (en) * 2015-07-28 2018-10-16 Cascade Technologies, Inc. System and method for a scalable interactive image-based visualization environment of computational model surfaces
CN109242947A (zh) * 2017-07-11 2019-01-18 中慧医学成像有限公司 三维超声图像显示方法
CN107566751A (zh) * 2017-09-30 2018-01-09 北京金山安全软件有限公司 图像处理方法、装置、电子设备及介质
CN108682050A (zh) * 2018-08-16 2018-10-19 Oppo广东移动通信有限公司 基于三维模型的美颜方法和装置
CN111556255A (zh) * 2020-04-30 2020-08-18 华为技术有限公司 图像生成方法及装置

Also Published As

Publication number Publication date
CN111556255A (zh) 2020-08-18
US20230177768A1 (en) 2023-06-08
CN111556255B (zh) 2021-10-01
EP4131933A4 (en) 2023-10-25
EP4131933A1 (en) 2023-02-08

Similar Documents

Publication Publication Date Title
WO2021218649A1 (zh) 图像生成方法及装置
US11132837B2 (en) Immersive content production system with multiple targets
WO2020192458A1 (zh) 一种图像处理的方法及头戴式显示设备
US11861797B2 (en) Method and apparatus for transmitting 3D XR media data
WO2018014766A1 (zh) 增强现实模块的生成方法及装置、生成系统和存储介质
JP6560740B2 (ja) バーチャルリアリティヘッドマウントディスプレイ機器ソフトウェアをテストする方法、装置、プログラム、及び記録媒体
CN107113415A (zh) 用于多技术深度图获取和融合的方法和设备
CN110335307B (zh) 标定方法、装置、计算机存储介质和终端设备
CN109582122B (zh) 增强现实信息提供方法、装置及电子设备
WO2021098358A1 (zh) 一种虚拟现实系统
US20140168358A1 (en) Multi-device alignment for collaborative media capture
CN111311757B (zh) 一种场景合成方法、装置、存储介质及移动终端
CN103985157A (zh) 一种结构光三维扫描方法及系统
CN109544458B (zh) 鱼眼图像校正方法、装置及其存储介质
WO2023207379A1 (zh) 图像处理方法、装置、设备及存储介质
CN113724309A (zh) 图像生成方法、装置、设备及存储介质
KR20170107137A (ko) 복수의 영상 데이터를 이용하는 헤드 마운트 디스플레이 장치 및 복수의 영상 데이터를 송수신하기 위한 시스템
WO2023216619A1 (zh) 3d 显示方法和 3d 显示设备
KR20230043741A (ko) 물체 스캐닝을 위한 커버리지를 사용한 피드백
US11538214B2 (en) Systems and methods for displaying stereoscopic rendered image data captured from multiple perspectives
CN116194792A (zh) 连接评估系统
US8755819B1 (en) Device location determination using images
RU2782312C1 (ru) Способ обработки изображения и устройство отображения, устанавливаемое на голове
US20240078743A1 (en) Stereo Depth Markers
US20230368475A1 (en) Multi-Device Content Handoff Based on Source Device Position

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21796787

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021796787

Country of ref document: EP

Effective date: 20221031

NENP Non-entry into the national phase

Ref country code: DE