CN107580209B - Photographing imaging method and device of mobile terminal - Google Patents


Info

Publication number: CN107580209B
Application number: CN201711000224.2A
Authority: CN (China)
Prior art keywords: image, target face, light source, mobile terminal, camera
Legal status: Active (granted)
Inventor: 万伟 (Wan Wei)
Current assignee: Vivo Mobile Communication Co Ltd
Original assignee: Vivo Mobile Communication Co Ltd
Other languages: Chinese (zh)
Other versions: CN107580209A
Application filed by Vivo Mobile Communication Co Ltd; priority to CN201711000224.2A; published as CN107580209A; application granted; published as CN107580209B.

Landscapes

  • Processing Or Creating Images (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the invention provide a photographing and imaging method and device for a mobile terminal equipped with a first camera and a second camera. The method comprises the following steps: first, photographing a target face with the first camera and the second camera respectively to obtain a first image and a second image; next, generating a three-dimensional model of the target face from the first image and the second image, and processing the first image or the second image to obtain a third image; and finally, generating a fourth image from the three-dimensional model and the third image. Because the fourth image is rendered from the three-dimensional model, it appears more three-dimensional, so the displayed photo is both attractive and stereoscopic, and the photographing effect is improved.

Description

Photographing imaging method and device of mobile terminal
Technical Field
The present invention relates to the field of mobile terminals, and in particular to a photographing and imaging method and device for a mobile terminal.
Background
With the development of mobile communication technology, mobile communication devices such as mobile phones are used ever more frequently, in all kinds of places and occasions. As the photographing quality of mobile phones keeps improving, application software with beautification ("beauty") functions has proliferated. A phone lets users take photos with good beautification effects without carrying a separate camera when going out, so the mobile phone has become the main photographing tool for many people. In addition, to improve shooting quality, phones with a first camera and a second camera are becoming common, where the first camera and the second camera are both front-facing cameras or both rear-facing cameras of the mobile terminal; that is, the dual-front-camera and/or dual-rear-camera phones now popular on the market.
In such a phone, take as an example one whose dual-camera module combines a color camera and a black-and-white camera: when shooting, the two cameras can capture images simultaneously. Because the black-and-white camera omits the color filter, it admits more light than the color camera and captures clearer image detail. Fusing the two images in software yields a picture sharper than an ordinary color camera can produce; this sharpness gain is especially significant in night-scene shooting.
In implementing the invention, the inventor found at least the following problem in the related art: although phones with a first camera and a second camera have certain advantages in night-scene shooting, they cannot solve the problem that face beautification flattens the image, leaving the final photo with a poor stereoscopic effect and greatly degrading the user experience.
Disclosure of Invention
The embodiments of the invention aim to provide a photographing and imaging method and device for a mobile terminal, to solve the prior-art problem that the final image has a poor stereoscopic effect, which greatly degrades the user experience.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides a photographing and imaging method for a mobile terminal, where the mobile terminal has a first camera and a second camera, and the method includes: shooting a target face through the first camera and the second camera respectively to obtain a first image and a second image;
generating a three-dimensional model of the target face according to the first image and the second image, and processing the first image or the second image to obtain a third image;
and generating a fourth image according to the three-dimensional model and the third image.
In a second aspect, an embodiment of the present invention provides a photographing and imaging apparatus for a mobile terminal, where the mobile terminal has a first camera and a second camera, and the apparatus includes: the face image acquisition module is used for shooting a target face through the first camera and the second camera respectively to obtain a first image and a second image;
the stereoscopic model generating module is used for generating a stereoscopic model of the target face according to the first image and the second image;
the third image generation module is used for processing the first image or the second image to obtain a third image;
and the fourth image generation module is used for generating a fourth image according to the three-dimensional model and the third image.
In a third aspect, an embodiment of the present invention provides a mobile terminal, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the photo imaging method of the mobile terminal according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the photo imaging method of a mobile terminal according to the first aspect.
According to the photographing and imaging method and device of the mobile terminal in the embodiments of the invention, a target face is first photographed with a first camera and a second camera respectively to obtain a first image and a second image; a three-dimensional model of the target face is then generated from the first image and the second image, and the first image or the second image is processed to obtain a third image; finally, a fourth image is generated from the three-dimensional model and the third image. Because the fourth image is rendered from the three-dimensional model, it appears more three-dimensional, so the displayed photo is both attractive and stereoscopic, and the photographing effect is improved.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings described below cover only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a first flowchart of a photographing imaging method of a mobile terminal according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a second flowchart of a photographing imaging method of a mobile terminal according to an embodiment of the present invention;
fig. 3 is a third flowchart illustrating a photographing imaging method of a mobile terminal according to an embodiment of the present invention;
fig. 4 is a fourth flowchart illustrating a photographing imaging method of a mobile terminal according to an embodiment of the present invention;
fig. 5 is a schematic flowchart of a fifth flowchart of a photographing imaging method of a mobile terminal according to an embodiment of the present invention;
fig. 6 is a schematic diagram illustrating a first module composition of a photographing imaging apparatus of a mobile terminal according to an embodiment of the present invention;
fig. 7 is a schematic diagram illustrating a second module composition of a photographing and imaging device of a mobile terminal according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
Detailed Description
To help those skilled in the art better understand the technical solution of the present invention, the technical solutions in the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments that a person skilled in the art can derive from them without creative effort fall within the protection scope of the invention.
Embodiments of the invention provide a photographing and imaging method and device for a mobile terminal that has a first camera and a second camera. A face is photographed with both cameras, and the first and second images so obtained are used to give the processed image a three-dimensional quality, so that the finally generated fourth image is more stereoscopic. The displayed picture is thus both attractive and three-dimensional, which improves the photographing effect.
Fig. 1 is a schematic flowchart of a photographing imaging method of a mobile terminal according to an embodiment of the present invention, and as shown in fig. 1, the method at least includes the following steps:
s101, shooting a target face through a first camera and a second camera respectively to obtain a first image and a second image; when a user uses the mobile terminal to take a picture, two images of a certain target area are obtained simultaneously through the two rear cameras, or two images of a certain target area are obtained simultaneously through the two front cameras.
S102, generating a three-dimensional model of the target face from the first image and the second image, and processing the first image or the second image to obtain a third image. When generating the third image, it is preferable to process the image captured by the main camera (the camera whose images are of higher quality), for example by applying beautification to it.
And S103, generating a fourth image according to the three-dimensional model and the third image.
In the embodiment of the invention, the first and second images obtained by photographing the face with the first and second cameras are used to make the processed image three-dimensional, so that the finally generated fourth image is more stereoscopic. The displayed picture is thus both attractive and three-dimensional, and the shooting effect is improved.
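The three-step flow S101 to S103 can be sketched in Python. Everything below is an illustrative stand-in: the difference-based "model", box-blur "beautification", and additive "rendering" are hypothetical placeholders, not the patented algorithms.

```python
import numpy as np

def build_stereo_model(first_image, second_image):
    # Stand-in for S102's stereo model: a per-pixel map from image difference.
    return np.abs(first_image.astype(float) - second_image.astype(float))

def beautify(image):
    # Stand-in "beauty" processing: mild smoothing via a 3x3 box blur.
    h, w = image.shape
    padded = np.pad(image.astype(float), 1, mode="edge")
    return sum(padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0

def render_fourth_image(model, third_image):
    # Stand-in for S103: brighten the processed image using the model.
    return np.clip(third_image + 0.1 * model, 0, 255)

first = np.full((4, 4), 100.0)     # S101: image from the first camera
second = np.full((4, 4), 110.0)    # S101: image from the second camera
model = build_stereo_model(first, second)   # S102: stereo model
third = beautify(first)                     # S102: processed (third) image
fourth = render_fourth_image(model, third)  # S103: final (fourth) image
```

The point is only the data flow: two captures feed both the model and the processed image, and the final image combines the two.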
Because the first camera and the second camera have different shooting focal lengths, the depth information of each pixel can be obtained from the first image and the second image, and from it the stereoscopic model of the target face. As shown in fig. 2, step S102 (generating the stereoscopic model of the target face from the first image and the second image, and processing the first image or the second image to obtain a third image) specifically includes:
s1021, determining depth information of the target face according to the first image and the second image;
s1022, according to the depth information of the target face, a three-dimensional model of the target face is created;
and S1023, processing the first image or the second image to obtain a third image.
It should be noted that the order of S1023 relative to S1021 and S1022 is not limited; that is, generating the stereoscopic model of the target face and generating the third image may be performed in either order.
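The patent does not spell out how depth is computed from the two images; a common approach for a dual-camera pair is the pinhole stereo model, where per-pixel depth follows from disparity as Z = f * B / d. A minimal numpy sketch with made-up focal length, baseline, and disparities:

```python
import numpy as np

focal_px = 800.0      # focal length in pixels (assumed value)
baseline_m = 0.012    # distance between the two cameras in meters (assumed)

# disparity: horizontal pixel shift of each point between the two images
disparity = np.array([[16.0, 32.0],
                      [8.0, 64.0]])

# pinhole stereo relation: depth Z = f * B / d, in meters per pixel
depth = focal_px * baseline_m / disparity
```

Larger disparity means the point is closer; the resulting depth map is the raw material for the stereoscopic model of S1022.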
As shown in fig. 3, the step S103 of generating a fourth image according to the three-dimensional model and the third image specifically includes:
s1031, attaching the third image serving as a texture to the three-dimensional model to generate a first three-dimensional scene of the target human face;
s1032, setting a light supplementing light source in the first three-dimensional scene to obtain a second three-dimensional scene of the target face; the method comprises the steps that a light supplementing light source is arranged to supplement light for a first three-dimensional scene, so that a 3D shadow is enhanced on the basis of a three-dimensional face image, and the three-dimensional effect of a target face in a third image is enhanced;
and 1033, rendering the second stereo scene to generate a fourth image of the target face, where the second stereo scene is a three-dimensional stereo image and a two-dimensional photo of the corresponding target face needs to be obtained according to the shooting angle of the user.
Further, the placement of the fill light source affects how strong the stereoscopic effect of the finally obtained fourth image is. To enhance the stereoscopic effect of the fourth image displayed to the user, S1032, arranging a fill light source in the first stereoscopic scene to obtain the second stereoscopic scene of the target face, specifically includes:
step one, determining parameter information of the fill light source, where the parameter information includes at least one of: position information, fill-light brightness, and fill-light angle;
and step two, arranging the fill light source in the first stereoscopic scene according to the parameter information to obtain the second stereoscopic scene of the target face.
In addition, because parallel light yields a natural effect similar to a daylight light source, it is preferable to choose a parallel (directional) light source as the fill light arranged in the first stereoscopic scene, so that the finally displayed fourth image looks more natural.
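A parallel (directional) light has the same direction at every point of the scene, so its contribution to each surface point is commonly modeled with the Lambertian term max(0, n . l). The normals and light direction below are made-up sample values, not taken from the patent:

```python
import numpy as np

light_dir = np.array([0.0, 0.0, 1.0])  # parallel fill light shining along +z

# three sample unit surface normals on the face model
normals = np.array([
    [0.0, 0.0, 1.0],    # facing the light: fully lit
    [1.0, 0.0, 0.0],    # perpendicular to the light: unlit
    [0.0, 0.0, -1.0],   # facing away: clamped to zero
])

# Lambertian shading term max(0, n . l) for each normal
shade = np.clip(normals @ light_dir, 0.0, None)
```

Surfaces tilted away from the light stay dark, which is exactly the 3D shadowing the fill light is meant to accentuate.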
Next, the processes for determining each item of parameter information of the fill light source are given. Determining the parameter information in step one specifically includes:
(A) determining the position information of the fill light source according to the shadow position of a designated organ in the third image. From the shadow position of the designated organ (for example, the nose) in the third image, the position of the original light source when the first or second image was captured can be determined, and the fill light source is then placed at the corresponding position in the first stereoscopic scene.
(B) determining the fill-light brightness of the fill light source according to the image brightness of the first image or the second image. Preferably, the fill-light brightness is determined from the brightness of the face image captured by the main camera (the camera whose images are of higher quality). Taking the first image as the image of the target face captured by the main camera as an example, this specifically includes:
converting the first image to the HSL color space to obtain the lightness component value of each pixel;
determining the average image brightness of the first image from the lightness component values of the pixels and the brightness calculation formula (1) below:
$$\mathrm{Lum}_{ave} = \exp\!\left(\frac{1}{n}\sum_{(i,j)} \ln\bigl(\delta + \mathrm{Lum}(i,j)\bigr)\right) \qquad (1)$$
determining the fill-light brightness of the fill light source from the computed average image brightness of the first image and formula (2) below:
$$\mathrm{Lum}_{light} = \frac{\mathrm{Lum}_{lightdefault}}{\mathrm{Lum}_{ave}} \qquad (2)$$
where Lum_ave denotes the average image brightness; n denotes the number of pixels; δ is an empirical value (0.0001 may be used, to keep the logarithm from approaching negative infinity); (i, j) denotes pixel coordinates; Lum(i, j) denotes the lightness component value of the pixel; Lum_light denotes the fill-light brightness; and Lum_lightdefault denotes the base brightness of the fill light source.
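Reading formula (1) as the usual log-average luminance and formula (2) as scaling a base brightness inversely with it (both forms are reconstructions of the unreproduced equations, and the inverse relation in (2) is an assumption: darker scenes get a brighter fill), the computation looks like:

```python
import numpy as np

delta = 1e-4  # empirical term preventing log(0), per the description above
lum = np.array([[0.2, 0.4],
                [0.6, 0.8]])  # lightness (L) channel values in [0, 1]

# formula (1): log-average image brightness
lum_ave = float(np.exp(np.mean(np.log(delta + lum))))

# formula (2), assumed inverse relation between scene and fill brightness
lum_light_default = 0.5  # base fill-light brightness (assumed value)
lum_light = lum_light_default / lum_ave
```

The log-average is less sensitive to a few very bright pixels than a plain mean, which is why it is the standard "key" measure in tone mapping.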
(C) determining the fill-light angle of the fill light source according to the image brightness of each pixel on the shadow side of the designated organ. Taking the nose as the designated organ, this specifically includes:
determining the position of the nose and its center position in the first image using face recognition;
determining the average image brightness on each side of the center position (i.e., the image brightness on the two sides of the nose, computable with formula (1)), and taking the side with the lower average brightness as the shadow side of the nose;
determining the brightness gradient of each pixel from the lightness component values of the pixels on the shadow side of the nose and formula (3) below:
$$G_k(i,j) = \arctan\!\left(\frac{\mathrm{Lum}(i,j+1)-\mathrm{Lum}(i,j)}{\mathrm{Lum}(i+1,j)-\mathrm{Lum}(i,j)}\right) \qquad (3)$$
determining the fill-light angle of the fill light source from the brightness gradients of the pixels and formula (4) below:
$$G_{avg} = \frac{1}{n}\sum_{k=1}^{n} G_k(i,j) \qquad (4)$$
where G_k(i, j) denotes the brightness gradient of the k-th pixel; Lum(i+1, j), Lum(i, j), and Lum(i, j+1) denote the lightness component values of the pixels at coordinates (i+1, j), (i, j), and (i, j+1), respectively; n denotes the number of pixels; and G_avg denotes the fill-light angle of the fill light source.
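With formulas (3) and (4) read as a per-pixel gradient angle that is then averaged (a reconstruction of the unreproduced equations; atan2 stands in for the arctangent of the ratio so a zero denominator is tolerated), the fill angle can be computed as:

```python
import math

# lightness component values on the shadow side of the nose (made-up values)
lum = [[0.2, 0.3, 0.4],
       [0.4, 0.5, 0.6],
       [0.6, 0.7, 0.8]]

angles = []
for i in range(len(lum) - 1):          # stop one short so that (i+1, j)
    for j in range(len(lum[0]) - 1):   # and (i, j+1) stay in range
        d_j = lum[i][j + 1] - lum[i][j]      # brightness change along j
        d_i = lum[i + 1][j] - lum[i][j]      # brightness change along i
        angles.append(math.atan2(d_j, d_i))  # formula (3): per-pixel angle

fill_angle = sum(angles) / len(angles)  # formula (4): average angle, radians
```

On this sample grid the brightness rises twice as fast along i as along j, so every per-pixel angle, and hence the average, is atan(0.5).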
Further, a given position in the stereoscopic model created from the first image and the second image may deviate somewhat from the actual target face. To improve the fidelity of the finally displayed image and correct deformation of the facial features, as shown in fig. 4, after the stereoscopic model of the target face is generated in step S102, the method further includes:
and S104, performing five-sense organ stereo adjustment on the generated stereo model to obtain an adjusted stereo model, and obtaining a fourth image of the target face based on the adjusted stereo model aiming at the step S103.
As shown in fig. 5, step S104, performing stereoscopic adjustment of the facial features on the generated stereoscopic model to obtain the adjusted stereoscopic model, specifically includes:
S1041, determining facial-feature adjustment parameters according to the ratio of the depth information of the designated organ in the third image to the face width of the face image;
and S1042, performing stereoscopic adjustment of the facial features on the stereoscopic model based on the adjustment parameters to obtain the adjusted stereoscopic model.
The facial-feature adjustment parameter may be a preset default value, or a parameter value determined in real time for the target face. In a specific implementation it is preferable to use the real-time value, which makes the stereoscopic adjustment of the facial features in the model more targeted and further improves the realism of the finally displayed image.
Taking the nose as the designated organ, the process of determining the adjustment parameter of the nose in the target face specifically includes:
determining the height of the nose from the depth information of the target face, and determining the face width of the face image using face recognition;
calculating the stereoscopic adjustment parameter value of the nose with formula (5) below:
$$s_{nose} = \frac{H_{nose}}{W_{face}} \qquad (5)$$
where s_nose denotes the stereoscopic adjustment parameter value of the nose, H_nose denotes the height of the nose, and W_face denotes the face width of the face image.
The nose of the stereoscopic model is then adjusted based on the determined stereoscopic adjustment parameter value.
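Formula (5) is a plain ratio, so the adjustment parameter reduces to one line; the height and width values below are made-up (the patent does not state units):

```python
nose_height = 18.0   # H_nose: nose height from the depth information (assumed)
face_width = 140.0   # W_face: face width from face recognition (assumed)

# formula (5): stereoscopic adjustment parameter of the nose
s_nose = nose_height / face_width
```

Because it is a ratio, the parameter is independent of the units and of how large the face appears in the frame, which is what makes it usable as a per-face adjustment value.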
In the photographing and imaging method of the mobile terminal in the embodiments of the invention, a target face is first photographed with a first camera and a second camera respectively to obtain a first image and a second image; a three-dimensional model of the target face is then generated from the first image and the second image, and the first image or the second image is processed to obtain a third image; finally, a fourth image is generated from the three-dimensional model and the third image. Because the fourth image is rendered from the three-dimensional model, it appears more three-dimensional, so the displayed photo is both attractive and stereoscopic, and the photographing effect is improved.
Based on the same technical concept, an embodiment of the present invention further provides a photographing and imaging device of a mobile terminal. Fig. 6 is a schematic diagram of a first module composition of the device, which is configured to execute the photographing and imaging method described with reference to figs. 1 to 5. As shown in fig. 6, the mobile terminal has a first camera and a second camera, and the device includes:
the face image acquisition module 601 is configured to capture a target face through a first camera and a second camera, respectively, to obtain a first image and a second image;
a stereo model generating module 602, configured to generate a stereo model of the target face according to the first image and the second image;
a third image generating module 603, configured to process the first image or the second image to obtain a third image;
a fourth image generating module 604, configured to generate a fourth image according to the stereo model and the third image.
Optionally, the fourth image generating module 604 is specifically configured to:
attaching the third image as a texture to the three-dimensional model to generate a first three-dimensional scene of the target face;
arranging a fill light source in the first stereoscopic scene to obtain a second stereoscopic scene of the target face;
rendering the second three-dimensional scene to generate a fourth image of the target face.
Optionally, the fourth image generating module 604 is further specifically configured to:
determining parameter information of the fill light source, where the parameter information includes at least one of: position information, fill-light brightness, and fill-light angle;
and arranging the fill light source in the first stereoscopic scene according to the parameter information to obtain the second stereoscopic scene of the target face.
Optionally, the fourth image generating module 604 is further specifically configured to:
determining the position information of the fill light source according to the shadow position of the designated organ in the third image;
determining the fill-light brightness of the fill light source according to the image brightness of the first image or the second image;
and determining the fill-light angle of the fill light source according to the image brightness of each pixel on the shadow side of the designated organ.
Optionally, the stereo model generating module 602 is configured to:
determining the depth information of the target face according to the first image and the second image;
and creating a three-dimensional model of the target face according to the depth information of the target face.
Optionally, as shown in fig. 7, the apparatus further includes:
a stereo model adjusting module 605, configured to perform stereoscopic adjustment of the facial features on the generated stereoscopic model, after the stereoscopic model of the target face is generated from the first image and the second image, to obtain an adjusted stereoscopic model.
Optionally, the stereo model adjusting module 605 is specifically configured to:
determining facial-feature adjustment parameters according to the ratio of the depth information of the designated organ in the third image to the face width of the face image;
and performing stereoscopic adjustment of the facial features on the stereoscopic model based on the adjustment parameters to obtain the adjusted stereoscopic model.
The photographing and imaging device of the mobile terminal in the embodiments of the invention first photographs a target face with the first camera and the second camera respectively to obtain a first image and a second image; it then generates a three-dimensional model of the target face from the two images and processes the first or second image to obtain a third image; finally, it generates a fourth image from the three-dimensional model and the third image. Because the fourth image is rendered from the three-dimensional model, it appears more three-dimensional, so the displayed photo is both attractive and stereoscopic, and the photographing effect is improved.
The photographing imaging device of the mobile terminal provided by the embodiment of the invention can realize each process in the embodiment corresponding to the photographing imaging method of the mobile terminal, and in order to avoid repetition, the details are not repeated here.
Based on the same technical concept, an embodiment of the present invention further provides a mobile terminal configured to execute the above photographing and imaging method. Fig. 8 is a schematic diagram of the hardware structure of a mobile terminal implementing the embodiments of the present invention. The mobile terminal 100 shown in fig. 8 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the architecture shown in fig. 8 does not limit the mobile terminal, which may include more or fewer components than illustrated, combine some components, or arrange components differently. In the embodiments of the invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 110 is configured to capture a target face through a first camera and a second camera, respectively, to obtain a first image and a second image;
generating a three-dimensional model of the target face according to the first image and the second image, and processing the first image or the second image to obtain a third image;
and generating a fourth image according to the three-dimensional model and the third image.
Optionally, the processor 110 is further configured to: generating a fourth image according to the three-dimensional model and the third image, wherein the fourth image comprises:
attaching the third image as a texture to the three-dimensional model to generate a first three-dimensional scene of the target face;
arranging a fill light source in the first stereoscopic scene to obtain a second stereoscopic scene of the target face;
rendering the second three-dimensional scene to generate a fourth image of the target face.
Optionally, the processor 110 is further configured such that arranging a fill light source in the first stereoscopic scene to obtain the second stereoscopic scene of the target face includes:
determining parameter information of the fill light source, where the parameter information includes at least one of: position information, fill-light brightness, and fill-light angle;
and arranging the fill light source in the first stereoscopic scene according to the parameter information to obtain the second stereoscopic scene of the target face.
Optionally, the processor 110 is further configured such that determining the parameter information of the fill light source includes:
determining the position information of the fill light source according to the shadow position of the designated organ in the third image;
determining the fill-light brightness of the fill light source according to the image brightness of the first image or the second image;
and determining the fill-light angle of the fill light source according to the image brightness of each pixel on the shadow side of the designated organ.
Optionally, the processor 110 is further configured such that generating a three-dimensional model of the target face according to the first image and the second image comprises:
determining depth information of the target face according to the first image and the second image;
and creating a three-dimensional model of the target face according to the depth information of the target face.
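The text does not specify how depth is recovered from the two camera images; a common approach for a calibrated dual-camera pair is block-matching stereo followed by the pinhole relation depth = f·B/disparity. The brute-force SAD matcher below, and the focal length and baseline values, are illustrative assumptions:

```python
import numpy as np

def disparity_map(left, right, max_disp=8, block=3):
    """Brute-force SAD block matching along scanlines (toy stereo matcher)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1]
            costs = [np.abs(patch - right[y-half:y+half+1,
                                          x-d-half:x-d+half+1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

def depth_from_disparity(disp, focal_px, baseline_m):
    """Pinhole stereo: depth = f * B / disparity (infinite where disparity=0)."""
    return np.where(disp > 0,
                    focal_px * baseline_m / np.maximum(disp, 1e-9),
                    np.inf)

# Synthetic pair: the right view sees the scene shifted by a known disparity.
rng = np.random.default_rng(0)
left = rng.random((12, 24))
true_disp = 4
right = np.roll(left, -true_disp, axis=1)   # right[x] = left[x + d]
disp = disparity_map(left, right)
depth = depth_from_disparity(disp, focal_px=800.0, baseline_m=0.02)  # hypothetical calibration
```

On this synthetic pair the matcher recovers the planted disparity exactly in the valid interior; with a real dual-camera pair, rectification and a robust matcher (e.g. semi-global matching) would be needed before building the face model from the depth map.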
Optionally, the processor 110 is further configured to: after generating the three-dimensional model of the target face according to the first image and the second image,
perform three-dimensional adjustment of the facial features on the generated three-dimensional model to obtain an adjusted three-dimensional model.
Optionally, the processor 110 is further configured such that performing three-dimensional adjustment of the facial features on the generated three-dimensional model comprises:
determining facial feature adjustment parameters according to the ratio of the depth information of a specified facial feature in the third image to the face width of the face image;
and performing three-dimensional adjustment of the facial features on the three-dimensional model based on the facial feature adjustment parameters to obtain the adjusted three-dimensional model.
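The mapping from the depth-to-face-width ratio to a concrete adjustment parameter is not given in the text. The sketch below assumes a hypothetical "ideal" ratio and a clamped corrective scale applied to the feature's depth coordinates; both the ideal ratio and the clamp limit are invented for illustration:

```python
import numpy as np

def feature_adjustment(feature_depth, face_width, ideal_ratio=0.12, max_scale=0.15):
    """Scale factor for one facial feature's depth on the 3D model.

    The ratio-to-ideal mapping and the clamp are assumptions; the source only
    states that the parameter derives from depth / face width."""
    ratio = feature_depth / face_width
    # Move the feature's ratio toward the ideal, limited to +/- max_scale.
    scale = np.clip(ideal_ratio / ratio - 1.0, -max_scale, max_scale)
    return float(scale)

def adjust_feature_depth(vertices_z, scale):
    """Apply the depth scale to the z-coordinates of the feature's vertices."""
    return vertices_z * (1.0 + scale)

# A relatively flat nose (depth 0.10 of face width) gets pushed outward,
# clamped to the maximum 15% adjustment.
scale = feature_adjustment(feature_depth=0.10, face_width=1.0)
nose_z = np.array([0.08, 0.10, 0.09])
adjusted = adjust_feature_depth(nose_z, scale)
```

A feature already at the ideal ratio is left untouched (scale 0), so repeated adjustment is stable.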
In the mobile terminal 100 of the embodiment of the present invention, a target face is first photographed by a first camera and a second camera, respectively, to obtain a first image and a second image; then a three-dimensional model of the target face is generated according to the first image and the second image, and the first image or the second image is processed to obtain a third image; finally, a fourth image is generated according to the three-dimensional model and the third image. The embodiment of the invention makes the finally generated fourth image more three-dimensional, thereby ensuring both the attractiveness and the stereoscopic effect of the displayed photo and improving the photographing effect.
It should be noted that, the mobile terminal 100 provided in the embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 to fig. 5, and for avoiding repetition, details are not described here again.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during message transmission or a call; specifically, downlink data received from a base station is forwarded to the processor 110 for processing, and uplink data is transmitted to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 and output.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. Touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on or near touch panel 1071 using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 8, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 is an interface through which an external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
In addition, the mobile terminal 100 includes some functional modules that are not shown, and thus, the detailed description thereof is omitted.
Further, corresponding to the photographing imaging method of the mobile terminal provided in the foregoing embodiment, an embodiment of the present invention further provides a mobile terminal, including: a processor 110, a memory 109 and a computer program stored on the memory 109 and operable on the processor 110, the computer program, when executed by the processor 110, implementing the steps of the photographing imaging method of the mobile terminal described above.
Further, corresponding to the photographing and imaging method of the mobile terminal provided in the foregoing embodiment, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by the processor 110, the steps of the photographing and imaging method of the mobile terminal are implemented, and the same technical effects can be achieved, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
With the computer-readable storage medium in the embodiment of the present invention, a target face is first photographed by a first camera and a second camera, respectively, to obtain a first image and a second image; then a three-dimensional model of the target face is generated according to the first image and the second image, and the first image or the second image is processed to obtain a third image; finally, a fourth image is generated according to the three-dimensional model and the third image. The embodiment of the invention makes the finally generated fourth image more three-dimensional, thereby ensuring both the attractiveness and the stereoscopic effect of the displayed photo and improving the photographing effect.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described in this disclosure may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described in this disclosure. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
It should also be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made herein without departing from the spirit and scope of the invention as defined in the appended claims. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (8)

1. A photographing and imaging method of a mobile terminal is characterized in that the mobile terminal is provided with a first camera and a second camera, and the method comprises the following steps:
shooting a target face through the first camera and the second camera respectively to obtain a first image and a second image;
generating a three-dimensional model of the target face according to the first image and the second image, and performing beautification processing on the first image or the second image to obtain a third image;
generating a fourth image according to the three-dimensional model and the third image,
wherein after generating the three-dimensional model of the target face according to the first image and the second image, the method further comprises:
determining facial feature adjustment parameters according to the ratio of the depth information of a specified facial feature in the third image to the face width of the face image; and performing three-dimensional adjustment of the facial features on the three-dimensional model based on the facial feature adjustment parameters to obtain an adjusted three-dimensional model;
wherein generating a fourth image according to the three-dimensional model and the third image comprises:
attaching the third image as a texture to the adjusted three-dimensional model to generate a first three-dimensional scene of the target face, setting a fill light source in the first three-dimensional scene to obtain a second three-dimensional scene of the target face, and rendering the second three-dimensional scene to generate a fourth image of the target face;
wherein setting a fill light source in the first three-dimensional scene to obtain a second three-dimensional scene of the target face comprises: determining parameter information of the fill light source, wherein the parameter information comprises at least one of position information, fill light brightness, and fill light angle; and setting the fill light source in the first three-dimensional scene according to the parameter information to obtain the second three-dimensional scene of the target face.
2. The method of claim 1, wherein determining parameter information of the fill light source comprises:
determining position information of the fill light source according to the shadow position of a specified facial feature in the third image;
determining the fill light brightness of the fill light source according to the image brightness of the first image or the second image;
and determining the fill light angle of the fill light source according to the image brightness of each pixel point on the shadow side of the specified facial feature.
3. The method of claim 1, wherein generating a three-dimensional model of the target face according to the first image and the second image comprises:
determining the depth information of the target face according to the first image and the second image;
and creating a three-dimensional model of the target face according to the depth information of the target face.
4. A photographing and imaging apparatus of a mobile terminal, wherein the mobile terminal has a first camera and a second camera, the apparatus comprising:
the face image acquisition module is used for shooting a target face through the first camera and the second camera respectively to obtain a first image and a second image;
the three-dimensional model generation module is used for generating a three-dimensional model of the target face according to the first image and the second image;
the third image generation module is used for performing beautification processing on the first image or the second image to obtain a third image;
a fourth image generation module, configured to generate a fourth image according to the three-dimensional model and the third image;
the three-dimensional model generation module is further used for determining facial feature adjustment parameters according to the ratio of the depth information of a specified facial feature in the third image to the face width of the face image, and performing three-dimensional adjustment of the facial features on the three-dimensional model based on the facial feature adjustment parameters to obtain an adjusted three-dimensional model;
the fourth image generation module is specifically configured to: attach the third image as a texture to the adjusted three-dimensional model to generate a first three-dimensional scene of the target face, set a fill light source in the first three-dimensional scene to obtain a second three-dimensional scene of the target face, and render the second three-dimensional scene to generate a fourth image of the target face;
the fourth image generation module is further configured to: determine parameter information of the fill light source, wherein the parameter information comprises at least one of position information, fill light brightness, and fill light angle; and set the fill light source in the first three-dimensional scene according to the parameter information to obtain the second three-dimensional scene of the target face.
5. The apparatus of claim 4, wherein the fourth image generation module is further specifically configured to:
determine position information of the fill light source according to the shadow position of a specified facial feature in the third image;
determine the fill light brightness of the fill light source according to the image brightness of the first image or the second image;
and determine the fill light angle of the fill light source according to the image brightness of each pixel point on the shadow side of the specified facial feature.
6. The apparatus of claim 4, wherein the three-dimensional model generation module is configured to:
determining the depth information of the target face according to the first image and the second image;
and creating a three-dimensional model of the target face according to the depth information of the target face.
7. A mobile terminal, comprising: a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the photographing and imaging method of a mobile terminal according to any one of claims 1 to 3.
8. A computer-readable storage medium, wherein the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the photographing and imaging method of a mobile terminal according to any one of claims 1 to 3.
CN201711000224.2A 2017-10-24 2017-10-24 Photographing imaging method and device of mobile terminal Active CN107580209B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711000224.2A CN107580209B (en) 2017-10-24 2017-10-24 Photographing imaging method and device of mobile terminal

Publications (2)

Publication Number Publication Date
CN107580209A CN107580209A (en) 2018-01-12
CN107580209B true CN107580209B (en) 2020-04-21

Family

ID=61038407

Country Status (1)

Country Link
CN (1) CN107580209B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573480B (en) * 2018-04-20 2020-02-11 太平洋未来科技(深圳)有限公司 Ambient light compensation method and device based on image processing and electronic equipment
CN108921815A (en) * 2018-05-16 2018-11-30 Oppo广东移动通信有限公司 It takes pictures exchange method, device, storage medium and terminal device
CN108846807B (en) * 2018-05-23 2021-03-02 Oppo广东移动通信有限公司 Light effect processing method and device, terminal and computer-readable storage medium
CN108833791B (en) * 2018-08-17 2021-08-06 维沃移动通信有限公司 Shooting method and device
CN111698494B (en) 2018-08-22 2022-10-28 Oppo广东移动通信有限公司 Electronic device
CN109447942B (en) * 2018-09-14 2024-04-23 平安科技(深圳)有限公司 Image ambiguity determining method, apparatus, computer device and storage medium
CN109785226B (en) * 2018-12-28 2023-11-17 维沃移动通信有限公司 Image processing method and device and terminal equipment
CN111491154A (en) * 2019-01-25 2020-08-04 比特安尼梅特有限公司 Detection and ranging based on one or more monoscopic frames
CN111107281B (en) * 2019-12-30 2022-04-12 维沃移动通信有限公司 Image processing method, image processing apparatus, electronic device, and medium
CN111556255B (en) * 2020-04-30 2021-10-01 华为技术有限公司 Image generation method and device
CN115484386B (en) * 2021-06-16 2023-10-31 荣耀终端有限公司 Video shooting method and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103853475A (en) * 2012-12-03 2014-06-11 联想(北京)有限公司 Method and device for converting two-dimensional interface into three-dimensional interface
CN104123748A (en) * 2014-07-18 2014-10-29 无锡梵天信息技术股份有限公司 Screen space point light source based method for achieving real-time dynamic shadows
CN105741343A (en) * 2016-01-28 2016-07-06 联想(北京)有限公司 Information processing method and electronic equipment
WO2017034447A1 (en) * 2015-08-26 2017-03-02 Telefonaktiebolaget Lm Ericsson (Publ) Image capturing device and method thereof
CN106791775A (en) * 2016-11-15 2017-05-31 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN107194963A (en) * 2017-04-28 2017-09-22 努比亚技术有限公司 A kind of dual camera image processing method and terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105931288A (en) * 2016-04-12 2016-09-07 广州凡拓数字创意科技股份有限公司 Construction method and system of digital exhibition hall


Similar Documents

Publication Publication Date Title
CN107580209B (en) Photographing imaging method and device of mobile terminal
CN110502954B (en) Video analysis method and device
US11436779B2 (en) Image processing method, electronic device, and storage medium
CN109427083B (en) Method, device, terminal and storage medium for displaying three-dimensional virtual image
CN108594997B (en) Gesture skeleton construction method, device, equipment and storage medium
CN107172364B (en) Image exposure compensation method and device and computer readable storage medium
CN109688322B (en) Method and device for generating high dynamic range image and mobile terminal
CN108038825B (en) Image processing method and mobile terminal
CN108055402B (en) Shooting method and mobile terminal
CN109361867B (en) Filter processing method and mobile terminal
CN110930335B (en) Image processing method and electronic equipment
CN107730460B (en) Image processing method and mobile terminal
CN108513067B (en) Shooting control method and mobile terminal
CN107644396B (en) Lip color adjusting method and device
CN112287852B (en) Face image processing method, face image display method, face image processing device and face image display equipment
CN107248137B (en) Method for realizing image processing and mobile terminal
CN111028144B (en) Video face changing method and device and storage medium
CN108449541B (en) Panoramic image shooting method and mobile terminal
CN108280817B (en) Image processing method and mobile terminal
CN110149517B (en) Video processing method and device, electronic equipment and computer storage medium
JP7467667B2 (en) Detection result output method, electronic device and medium
CN109104578B (en) Image processing method and mobile terminal
CN110807769B (en) Image display control method and device
CN109727212B (en) Image processing method and mobile terminal
CN109639981B (en) Image shooting method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant