CN111327814A - Image processing method and electronic equipment - Google Patents

Info

Publication number
CN111327814A
Authority
CN
China
Prior art keywords: image, face, region, camera, electronic device
Legal status
Pending
Application number
CN201811544123.6A
Other languages
Chinese (zh)
Inventor
陈拓
吴磊
孙雨生
李阳
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Priority to CN201811544123.6A
Priority to PCT/CN2019/122837 (published as WO2020125410A1)
Publication of CN111327814A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • G06T3/04
    • G06T5/90
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

An image processing method and an electronic device relate to the field of communications technology. By changing the brightness of specific areas of a human face in an image, they can visually improve the stereoscopic impression of the face, achieve a face-slimming effect, and improve the user experience. The method includes the following steps: detecting a first operation, whereupon the electronic device opens a camera and displays a photographing interface; detecting a second operation and enabling a first function; displaying a first image in a viewfinder frame and determining a first region and a second region in the face of a target photographic subject; and detecting a third operation and taking a picture to generate a second image, wherein the brightness value of the first region of the face in the second image is greater than that of the first region of the face in the original image, the brightness value of the second region of the face in the second image is smaller than that of the second region of the face in the original image, and the brightness value of the third region of the face in the second image remains unchanged relative to that of the third region of the face in the original image.

Description

Image processing method and electronic equipment
Technical Field
The present application relates to the field of communications technologies, and in particular, to an image processing method and an electronic device.
Background
With the development of electronic technology, the shooting functions of cameras on electronic devices have become increasingly rich, and users' expectations for intelligent portrait-beautification functions have grown accordingly. Among these, the face-slimming function is one that users care about most.
At present, an electronic device can determine the lower contour line of a face by identifying facial feature points in an image, and then move the pixels of this lower contour inward, so that the lower face contour in the image is narrowed and a face-slimming effect is achieved. For example, image 1 shown in (1) of fig. 1 is an original image that has not undergone face-slimming processing; modifying the contour lines of the faces in image 1 with this prior-art technique yields the face-slimmed image 2 shown in (2) of fig. 1.
However, the face-slimming effect achievable by modifying the contour lines of the face is limited, because excessive modification of the face contour easily makes the facial curves look unnatural. Moreover, the modified face shape differs noticeably from the real person, so the image looks unrealistic and the user experience suffers.
Disclosure of Invention
According to the image processing method and the electronic device provided in the present application, the stereoscopic impression of the face can be visually improved by changing the brightness of specific areas of the face in the image, thereby achieving a face-slimming effect and improving the user experience.
In a first aspect, an embodiment of the present application provides an image processing method, including:
detecting a first operation input by a user, responding to the first operation, opening a camera by the electronic equipment, and displaying a photographing interface; the photographing interface comprises a viewing frame; detecting a second operation input by the user, and responding to the second operation, starting the first function by the electronic equipment; displaying a first image in a view frame according to an image acquired by a camera, wherein the first image comprises a face of a target shooting object; determining a first area and a second area in the face of a target shooting object; detecting a third operation input by a user, and in response to the third operation, photographing by the electronic equipment according to an image acquired by the camera to generate a second image, wherein the second image comprises a face of a target photographic object; the second image is an image generated by processing an original image according to the first function by the electronic device, the original image is an image generated by photographing a face of a target photographic object when the first function is not turned on by the electronic device, wherein a luminance value of a first region of the face in the second image is larger than a luminance value of the first region of the face in the original image, a luminance value of a second region of the face in the second image is smaller than a luminance value of the second region of the face in the original image, and a luminance value of a third region of the face in the second image is kept unchanged relative to a luminance value of the third region of the face in the original image.
In the embodiment of the present application, a principle similar to the highlighting and shading people use when applying facial makeup in daily life is applied: the brightness of the skin in these specific regions is adjusted so as to visually improve the stereoscopic impression of the face and achieve a face-slimming effect. Moreover, because this face-slimming processing does not change the shape of the face in the original image, the real contour line of the face is retained, so that the captured image is more real and natural.
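As an illustration only, the following is a minimal sketch in Python (with OpenCV and NumPy) of the kind of region-based brightness adjustment described above; it is not the implementation of this application. It assumes the first-region and second-region masks have already been obtained elsewhere, for example from a 3D face model, as floating-point arrays in [0, 1] with the same size as the image.

```python
import cv2
import numpy as np

def adjust_face_regions(image_bgr, brighten_mask, darken_mask,
                        brighten_gain=1.15, darken_gain=0.85):
    """Raise luminance inside brighten_mask, lower it inside darken_mask,
    and leave every other pixel (the 'third region') unchanged."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    y = ycrcb[:, :, 0].astype(np.float32)

    # Feather the masks so the adjusted regions blend smoothly into the surrounding skin.
    b = cv2.GaussianBlur(brighten_mask.astype(np.float32), (31, 31), 0)
    d = cv2.GaussianBlur(darken_mask.astype(np.float32), (31, 31), 0)

    # Per-pixel gain: above 1 in the first region, below 1 in the second region, 1 elsewhere.
    gain = 1.0 + b * (brighten_gain - 1.0) + d * (darken_gain - 1.0)
    ycrcb[:, :, 0] = np.clip(y * gain, 0, 255).astype(np.uint8)
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```

Working in the YCrCb color space means only the luminance channel is scaled, so the hue of the skin is preserved while its brightness changes.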
In one possible implementation, the first region is at least one of a nasal bridge region and a forehead region, and the second region is at least one of a nasal wing region and an outer contour region.
The first area is an area of the target photographic subject's face that needs to be brightened, and the second area is an area that needs to be darkened. This provides a scheme for specifically adjusting the brightness of these particular areas.
In one possible implementation, the processing of the original image by the electronic device according to the first function to generate the second image includes: modifying the contour line of the face in the original image.
The face-slimming method of adjusting the skin brightness in specific areas provided in the embodiment of the present application can be combined with the face-slimming method of modifying the contour line of the face in the image, so that the two together process the face in the image and achieve a stronger face-slimming effect. This avoids the prior-art situation in which the contour line of the face in the image is modified excessively to achieve a slimming effect, making the contour unnatural and noticeably different from the real person.
In one possible implementation, during the process of displaying the first image in the view finder according to the image acquired by the camera, the contour line of the face in the first image is modified.
During preview, the face-slimming effect is shown in the viewfinder frame, so the user can see the effect, or is prompted that this function is the face-slimming function, which helps improve the user experience.
In a possible implementation, in the process in which the third operation input by the user is detected and, in response to the third operation, the electronic device takes a picture according to the image acquired by the camera to generate the second image, the method further includes:
if the image acquired by the camera comprises the faces of at least two shooting objects, the electronic equipment automatically determines the face of the target shooting object, or determines the face of the target shooting object according to a fourth operation input by the user.
Therefore, the embodiment of the application provides a method for determining the face of a target shooting object when an acquired image contains the faces of a plurality of shooting objects.
In one possible implementation manner, the automatically determining, by an electronic device, a face of a target photographic object includes:
the electronic equipment determines the face of the target shooting object according to the area or the position of each face in the faces of the at least two shooting objects.
When the electronic device automatically determines the face of the target photographic subject, user operations are reduced, which helps improve the user experience.
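A minimal sketch, under assumed inputs, of how the "area or position" rule above could be implemented is shown below: each detected face bounding box (x, y, w, h) is scored by its normalized area and by how close its center is to the image center, and the highest-scoring face is taken as the target. This is only one possible rule, not necessarily the one used by the electronic device.

```python
def pick_target_face(face_boxes, image_width, image_height):
    """face_boxes: list of (x, y, w, h) bounding boxes from any face detector."""
    def score(box):
        x, y, w, h = box
        # Prefer larger faces (normalized area).
        norm_area = (w * h) / float(image_width * image_height)
        # Prefer faces whose center is close to the image center (normalized distance).
        cx, cy = x + w / 2.0, y + h / 2.0
        dist = ((cx - image_width / 2.0) ** 2 + (cy - image_height / 2.0) ** 2) ** 0.5
        norm_dist = dist / ((image_width ** 2 + image_height ** 2) ** 0.5)
        return norm_area - norm_dist
    return max(face_boxes, key=score)
```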
In a second aspect, an embodiment of the present application provides an image processing method, including:
detecting a first operation input by a user, responding to the first operation, opening a camera by the electronic equipment, and displaying a photographing interface; the photographing interface comprises a viewing frame; detecting a second operation input by the user, and responding to the second operation, starting the first function by the electronic equipment; displaying a first image in a view frame according to an image acquired by a camera, wherein the first image comprises a face of a target shooting object; determining a first area and a second area in the face of a target shooting object; detecting a third operation input by a user, and in response to the third operation, photographing by the electronic equipment according to an image acquired by the camera to generate a second image, wherein the second image comprises a face of a target photographic object; the second image is an image generated by the electronic device after processing the image acquired by the camera according to the first function, wherein the brightness value of the first region of the face in the second image is increased, the brightness value of the second region of the face in the second image is decreased, and the brightness value of the third region of the face in the second image is kept unchanged.
In the embodiment of the present application, a principle similar to the highlighting and shading people use when applying facial makeup in daily life is applied: the brightness of the skin in these specific regions is adjusted so as to visually improve the stereoscopic impression of the face and achieve a face-slimming effect. Moreover, because this face-slimming processing does not change the shape of the face in the original image, the real contour line of the face is retained, so that the captured image is more real and natural.
In one possible implementation, the first region is at least one of a nasal bridge region and a forehead region, and the second region is at least one of a nasal wing region and an outer contour region.
In a possible implementation manner, in the process that the electronic device generates the second image after processing the image acquired by the camera according to the first function, the contour line of the face in the second image is modified.
In one possible implementation, during the process of displaying the first image in the view finder according to the image acquired by the camera, the contour line of the face in the first image is modified.
In a possible implementation, in the process in which the third operation input by the user is detected and, in response to the third operation, the electronic device takes a picture according to the image acquired by the camera to generate the second image, the method further includes:
if the image acquired by the camera comprises the faces of at least two shooting objects, the electronic equipment automatically determines the face of the target shooting object, or determines the face of the target shooting object according to a fourth operation input by the user.
In one possible implementation manner, the automatically determining, by an electronic device, a face of a target photographic object includes:
the electronic equipment determines the face of the target shooting object according to the area or the position of each face in the faces of the at least two shooting objects.
In a third aspect, the present application provides an electronic device, comprising: at least one camera, a processor, a memory, and a touch screen, the at least one camera, the memory, and the touch screen coupled to the processor, the memory for storing computer program code, the computer program code comprising computer instructions that, when read from the memory by the processor, cause the electronic device to perform the following:
detecting a first operation input by a user, responding to the first operation, opening a camera by the electronic equipment, and displaying a photographing interface on the touch screen; the photographing interface comprises a viewing frame; detecting a second operation input by the user, and responding to the second operation, starting the first function by the electronic equipment; displaying a first image in a view frame according to an image acquired by at least one camera, wherein the first image comprises a face of a target shooting object; determining a first area and a second area in the face of a target shooting object; detecting a third operation input by a user, and in response to the third operation, photographing by the electronic equipment according to an image acquired by at least one camera to generate a second image, wherein the second image comprises a face of a target photographic object; the second image is an image generated by processing an original image according to the first function by the electronic device, the original image is an image generated by photographing a face of a target photographic object when the first function is not turned on by the electronic device, wherein a luminance value of a first region of the face in the second image is larger than a luminance value of the first region of the face in the original image, a luminance value of a second region of the face in the second image is smaller than a luminance value of the second region of the face in the original image, and a luminance value of a third region of the face in the second image is kept unchanged relative to a luminance value of the third region of the face in the original image.
In the embodiment of the present application, a principle similar to the highlighting and shading people use when applying facial makeup in daily life is applied: the brightness of the skin in these specific regions is adjusted so as to visually improve the stereoscopic impression of the face and achieve a face-slimming effect. Moreover, because this face-slimming processing does not change the shape of the face in the original image, the real contour line of the face is retained, so that the captured image is more real and natural.
In one possible implementation, the first region is at least one of a nasal bridge region and a forehead region, and the second region is at least one of a nasal wing region and an outer contour region.
In one possible implementation, the processing of the original image by the electronic device according to the first function to generate the second image includes: modifying the contour line of the face in the original image.
In one possible implementation manner, in the process that the electronic device displays the first image in the view finder according to the image acquired by the at least one camera, the contour line of the face in the first image is modified.
In a possible implementation manner, when a third operation input by a user is detected, in response to the third operation, the electronic device performs photographing according to an image acquired by the camera to generate a second image, and if the image acquired by the camera includes faces of at least two photographic objects, the electronic device automatically determines the face of the target photographic object, or determines the face of the target photographic object according to a fourth operation input by the user.
In one possible implementation manner, the automatically determining, by an electronic device, a face of a target photographic object includes: the electronic equipment determines the face of the target shooting object according to the area or the position of each face in the faces of the at least two shooting objects.
In a fourth aspect, the present application provides an electronic device, comprising: at least one camera, a processor, a memory, and a touch screen, the memory and the touch screen coupled to the processor, the memory for storing computer program code, the computer program code comprising computer instructions that, when read from the memory by the processor, cause the electronic device to perform the following:
detecting a first operation input by a user, responding to the first operation, opening a camera by the electronic equipment, and displaying a photographing interface on the touch screen; the photographing interface comprises a viewing frame; detecting a second operation input by the user, and responding to the second operation, starting the first function by the electronic equipment; displaying a first image in a view frame according to an image acquired by at least one camera, wherein the first image comprises a face of a target shooting object; determining a first area and a second area in the face of a target shooting object; detecting a third operation input by a user, and in response to the third operation, photographing by the electronic equipment according to an image acquired by at least one camera to generate a second image, wherein the second image comprises a face of a target photographic object; the second image is generated by the electronic device after processing the image acquired by the at least one camera according to the first function, wherein the brightness value of the first region of the face in the second image is increased, the brightness value of the second region of the face in the second image is decreased, and the brightness value of the third region of the face in the second image is kept unchanged.
In one possible implementation, the first region is at least one of a nasal bridge region and a forehead region, and the second region is at least one of a nasal wing region and an outer contour region.
In a possible implementation manner, in a process that the electronic device generates a second image after processing an image acquired by at least one camera according to the first function, an outline of a face in the second image is modified.
In one possible implementation manner, in the process that the electronic device displays the first image in the view finder according to the image acquired by the at least one camera, the contour line of the face in the first image is modified.
In a possible implementation manner, when a third operation input by a user is detected, in response to the third operation, in the process that the electronic device generates a second image by photographing according to an image acquired by at least one camera, if the image acquired by at least one camera includes faces of at least two photographic objects, the electronic device automatically determines the face of the target photographic object, or determines the face of the target photographic object according to a fourth operation input by the user.
In one possible implementation manner, the automatically determining, by an electronic device, a face of a target photographic object includes: the electronic equipment determines the face of the target shooting object according to the area or the position of each face in the faces of the at least two shooting objects.
In a fifth aspect, a computer storage medium is provided, comprising computer instructions which, when run on a terminal, cause the terminal to perform the image processing method according to the first aspect and any one of its possible implementations.
In a sixth aspect, a computer storage medium is provided, comprising computer instructions which, when run on a terminal, cause the terminal to perform the image processing method according to the second aspect and any one of its possible implementations.
In a seventh aspect, a computer program product is provided which, when run on a computer, causes the computer to perform the image processing method according to the first aspect and any one of its possible implementations.
In an eighth aspect, a computer program product is provided which, when run on a computer, causes the computer to perform the image processing method according to the second aspect and any one of its possible implementations.
Drawings
Fig. 1 is a schematic diagram of a face slimming method in the prior art;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a software architecture of an electronic device according to an embodiment of the present application;
FIG. 4 is a schematic view of a user interface of some of the electronic devices provided by embodiments of the present application;
FIG. 5 is a schematic view of a user interface of further electronic devices according to an embodiment of the present application;
FIG. 6 is a schematic view of a user interface of further electronic devices according to an embodiment of the present application;
FIG. 7 is a schematic view of a user interface of further electronic devices according to an embodiment of the present application;
FIG. 8 is a schematic view of a user interface of further electronic devices according to an embodiment of the present application;
FIG. 9 is a schematic view of a user interface of further electronic devices according to an embodiment of the present application;
FIG. 10 is a schematic view of a user interface of further electronic devices according to an embodiment of the present application;
FIG. 11 is a schematic view of a user interface of further electronic devices according to an embodiment of the present application;
fig. 12 is a schematic user interface diagram of further electronic devices according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments herein, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, "a plurality" means two or more unless otherwise specified.
In some embodiments of the present application, an image processing method is provided, which can be applied to an electronic device. When a portrait is captured (taking a photo or recording a video), RGB image data and depth image data of the photographic subject can be acquired so as to build a 3D model of the human face. One or more specific regions of the face are then determined from the 3D face model, for example the nasal bridge region, the nasal wing region, and the outer contour region. Similar to the highlighting and shading people use when applying makeup in daily life, the brightness of the skin in these specific regions is adjusted to visually improve the stereoscopic impression of the face and achieve a face-slimming effect. Moreover, because this face-slimming processing does not change the shape of the face in the original image, the real contour line of the face is retained, so that the captured image is more real and natural.
The brightness of the skin color can be understood as the brightness of the image. Generally, the color of an image is represented by chrominance and luminance: chrominance is the property of a color apart from its luminance and reflects the hue and saturation of the color, while luminance refers to how bright the color is. Therefore, adjusting the brightness of the skin color in a specific region includes processing the luminance and/or the chrominance.
In other embodiments of the present application, the face-slimming method of adjusting the skin brightness in specific areas may be combined with the face-slimming method of modifying the contour line of the face in the image, so that the two together process the face in the image and achieve a stronger face-slimming effect, as sketched below. This avoids the prior-art situation in which the contour line of the face in the image is modified excessively to achieve a slimming effect, making the contour unnatural and noticeably different from the real person.
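For illustration, the following hypothetical sketch shows one mild contour modification that could be combined with the brightness adjustment: pixels near the lower face contour are pulled slightly toward the face center with a Gaussian-weighted displacement field and cv2.remap. The jaw_points and face_center arguments are assumed landmark coordinates; the actual contour processing used by the electronic device is not specified at this level of detail.

```python
import cv2
import numpy as np

def mild_contour_warp(image_bgr, jaw_points, face_center, strength=3.0, radius=40.0):
    """Slightly narrow the lower face contour without drastically re-shaping it."""
    h, w = image_bgr.shape[:2]
    map_x, map_y = np.meshgrid(np.arange(w, dtype=np.float32),
                               np.arange(h, dtype=np.float32))
    for (px, py) in jaw_points:
        dx, dy = face_center[0] - px, face_center[1] - py
        norm = (dx * dx + dy * dy) ** 0.5 + 1e-6
        ux, uy = dx / norm, dy / norm            # unit vector toward the face center
        weight = np.exp(-((map_x - px) ** 2 + (map_y - py) ** 2) / (2 * radius * radius))
        # Sampling slightly away from the face center makes the output appear pulled inward.
        map_x -= weight * ux * strength
        map_y -= weight * uy * strength
    return cv2.remap(image_bgr, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```

Because the displacement is small and smoothly weighted, the contour is only gently narrowed, which is consistent with keeping the face looking natural.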
For example, the electronic device in the present application may be a mobile phone, a tablet computer, a Personal Computer (PC), a Personal Digital Assistant (PDA), a netbook, a wearable electronic device, an Augmented Reality (AR) device, a Virtual Reality (VR) device, an in-vehicle device, a smart car, a robot, and the like, and the present application does not particularly limit the specific form of the electronic device.
Fig. 2 shows a schematic structural diagram of the electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a display screen serial interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. And the earphone can also be used for connecting an earphone and playing audio through the earphone. The interface may also be used to connect other electronic devices, such as AR devices and the like.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160 so that electronic device 100 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), Long Term Evolution (LTE), LTE, BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou satellite navigation system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
In the embodiment of the present application, the mobile phone includes a first camera and a second camera, and the first camera and the second camera are located on the same side of the mobile phone, either the front side or the back side. That is, the first camera and the second camera are both front-facing cameras, or both rear-facing cameras. The first camera and the second camera are used to acquire an RGB image and a depth image, respectively. The RGB image contains the color information captured by the camera, and its pixel values may be R (red), G (green), and B (blue) values. A depth image is similar to a grayscale image, except that each of its pixel values is the actual distance from the camera to the object. Usually, the RGB image and the depth image are registered, so that there is a one-to-one correspondence between their pixels. For example, the mobile phone includes two or more front-facing cameras, where the first camera may be a CMOS camera among the front-facing cameras and the second camera may be a structured-light device or a time-of-flight (TOF) camera among the front-facing cameras; or the first camera and the second camera are both CMOS cameras.
As another example, the mobile phone includes two or more rear-facing cameras, where the first camera may be a CMOS camera among the rear-facing cameras and the second camera may be a structured-light device or a TOF device among the rear-facing cameras; or the first camera and the second camera are both CMOS cameras.
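As a small illustration of the registration described above: because the RGB and depth images are pixel-aligned, the depth of any RGB pixel can be read directly at the same coordinates, and the pixel can be back-projected to a 3D point with the standard pinhole camera model. The intrinsics (fx, fy, cx, cy) below are placeholder example values, not real device parameters.

```python
import numpy as np

def pixel_to_3d(u, v, depth_image, fx=1000.0, fy=1000.0, cx=320.0, cy=240.0):
    """Back-project pixel (u, v) using the depth value at the same location."""
    z = float(depth_image[v, u])        # distance from the camera, e.g. in millimetres
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```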
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
In the embodiment of the present application, the NPU can recognize the facial feature points in the RGB image, input the recognized feature points into a standard 3D face model, and then, according to the depth image, establish a 3D face model corresponding to the target face. Further, the specific regions of the face can be determined according to the established 3D face model corresponding to the target face, and the skin color of those specific regions can be highlighted or shaded accordingly.
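The following sketch illustrates, under assumptions, how detected facial feature points could be turned into the region masks used for highlighting and shading. The named landmark groups ("nose_bridge", "forehead", "nose_wings", "outer_contour") are placeholders: which points belong to which region depends on the landmark and 3D face models actually used.

```python
import cv2
import numpy as np

def region_mask(image_shape, points):
    """Fill the polygon spanned by `points` (a list of (x, y) pixel coordinates)
    as a float mask in [0, 1]."""
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [np.asarray(points, dtype=np.int32)], 255)
    return mask.astype(np.float32) / 255.0

def build_face_masks(image_shape, landmarks):
    """`landmarks` is assumed to be a dict mapping region names to point lists."""
    brighten = np.clip(region_mask(image_shape, landmarks["nose_bridge"])
                       + region_mask(image_shape, landmarks["forehead"]), 0, 1)
    darken = np.clip(region_mask(image_shape, landmarks["nose_wings"])
                     + region_mask(image_shape, landmarks["outer_contour"]), 0, 1)
    return brighten, darken
```

Masks built this way can then be fed into a region-based brightness adjustment such as the one sketched in the summary above.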
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person. The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can input a voice signal to the microphone 170C by speaking the user's mouth near the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on. The headphone interface 170D is used to connect a wired headphone. The headset interface 170D may be the USB interface 130, or may be a 3.5mm open mobile electronic device platform (OMTP) standard interface, a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic apparatus 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 100, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180C.
The magnetic sensor 180D includes a hall sensor. The electronic device 100 may detect the opening and closing of the flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor 180D. And then according to the opening and closing state of the leather sheath or the opening and closing state of the flip cover, the automatic unlocking of the flip cover is set.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The method can also be used for recognizing the posture of the electronic equipment, and is applied to horizontal and vertical screen switching, pedometers and other applications.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, taking a picture of a scene, electronic device 100 may utilize range sensor 180F to range for fast focus.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 100 emits infrared light outward through the light-emitting diode and detects infrared light reflected from nearby objects using the photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear during a call, so as to automatically turn off the screen and save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, electronic device 100 implements a temperature processing strategy using the temperature detected by temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 performs a reduction in performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold to avoid the low temperature causing the electronic device 100 to shut down abnormally. In other embodiments, when the temperature is lower than a further threshold, the electronic device 100 performs boosting on the output voltage of the battery 142 to avoid abnormal shutdown due to low temperature.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the bone mass vibrated by the human vocal part. The bone conduction sensor 180M may also contact the human pulse to receive a blood pressure pulse signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset to form a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vocal-part bone mass acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor may parse out heart rate information based on the blood pressure pulse signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive a key input and generate a key signal input related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming-call vibration cues as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also provide different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminders, received messages, alarm clocks, games, and the like) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be connected to or disconnected from the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calling and data communication. In some embodiments, the electronic device 100 employs an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present invention uses an Android system with a layered architecture as an example to exemplarily illustrate a software structure of the electronic device 100.
Fig. 3 is a block diagram of the software configuration of the electronic apparatus 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 3, the application layer may include application packages such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 3, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide the communication functions of the electronic device 100, for example, management of the call status (including connected, hung up, and the like).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar and can be used to convey notification-type messages; such a notification can disappear automatically after a short stay without requiring user interaction. For example, the notification manager is used to notify of download completion, message alerts, and the like. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or present notifications on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is sounded, the electronic device vibrates, or an indicator light flashes.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part contains the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, and the like.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, and a sensor driver.
The following describes exemplary workflow of the software and hardware of the electronic device 100 in connection with capturing a photo scene.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into an original input event (including information such as the touch coordinates and the timestamp of the touch operation). The original input event is stored at the kernel layer. The application framework layer acquires the original input event from the kernel layer and identifies the control corresponding to the input event. For example, assuming the touch operation is a tap operation and the control corresponding to the tap is the control of the camera application icon, the camera application calls the interface of the application framework layer to start the camera application, then starts the camera driver by calling the kernel layer, and captures a still image or a video through the camera 193.
The technical solutions in the following embodiments can be implemented in the electronic device 100 having the above hardware structure and software architecture. For convenience of description, in the following embodiments, an electronic device is taken as a mobile phone, and a specific implementation of an image processing method provided in the embodiments of the present application is described in detail with reference to the drawings.
As shown in (1) in fig. 4, which is a schematic diagram of a desktop 401 of a mobile phone, the desktop 401 displays a status bar, time and weather controls, application icons (e.g., an icon 402 of a camera application), and a dock bar. The status bar may include the name of the operator (e.g., china mobile), time, WiFi icon, signal strength, current remaining power, and the like.
When the mobile phone detects that the user clicks the icon 402 of the camera application in the desktop 401, the mobile phone starts the camera function and enters a main interface of the camera application, which may be called a photographing interface of the mobile phone. It should be noted that the user may also enter the photo interface of the mobile phone through other manners, for example, by clicking a shortcut icon corresponding to the corresponding application in the screen locking interface, which is not limited in the embodiment of the present application.
As shown in (2) of fig. 4, a schematic diagram of the mobile phone photographing interface 403 is shown. In the photographing interface 403, a finder frame 404, a control 405 for instructing to enter a photographing mode, a control 406 for instructing to enter a video recording mode, a control 407 for instructing to enter a portrait mode, a control 408 for instructing to switch between the front and rear cameras, a shooting control 409, a photo thumbnail 410, a setting control 411, a High-Dynamic Range (HDR) control 412, a prompt control 413, and the like are displayed.
At this time, the view finder 404 can preview the image captured by the mobile phone camera in real time. In general, after entering the photographing interface 403, the mobile phone enters the photographing mode by default (or the mode in which the mobile phone was last exited from the camera application). When detecting that the user clicks the control 406 indicating to enter the video recording mode, the mobile phone enters the video recording mode. Or, when a gesture that the user slides to the left on the viewfinder 404 is detected, the mobile phone enters a video recording mode. Still alternatively, after detecting that the user clicks the control 407 indicating to enter the portrait mode, or after detecting that the user slides right on the viewfinder frame 404, the mobile phone enters the portrait mode, and displays the portrait mode photographing interface 414 shown in (3) in fig. 4.
The "portrait mode" refers to a photographing function that can bring about background blurring similar to a single lens reflex when a front camera or a rear camera of a mobile phone has two (or more) cameras. Under the people's image mode is taken a picture, it can make the focus main part that the camera was taken more outstanding, and the background image can be blurred, brings more beautiful effect. This is because one of the high-pixel cameras can be responsible for photographing the person/object subject, and the other camera can be responsible for blurring and blurring photographing, thereby combining a photograph or video with blurred background.
Generally, a mobile phone has a front camera and a rear camera. The front camera is positioned on the front side of the mobile phone, and when the front camera is used, the camera acquires images positioned on the front side of the mobile phone, so that the mobile phone can be used for self-shooting, face unlocking and the like. The rear camera is positioned on the back of the mobile phone, and when the rear camera is used, the camera acquires images positioned on the back of the mobile phone, so that the mobile phone can be used for shooting people, food, scenery and the like. When the mobile phone uses the front camera to collect images, when it is detected that the user clicks the front and rear camera control 408 for indicating switching, the mobile phone uses the rear camera to collect images. When the mobile phone uses the rear camera to collect images, when it is detected that the user clicks the front and rear camera control 408 for indicating switching, the mobile phone uses the front camera to collect images.
In the photographing mode or the portrait mode, when the mobile phone detects that the user clicks the photographing control 409, the mobile phone executes the photographing operation; in the video recording mode, after the mobile phone detects that the user clicks the shooting control 409, the mobile phone executes the video shooting operation.
The functions of the other controls will be described below where they are used, and are not described here.
When the mobile phone enters the portrait mode, a portrait mode photographing interface 414 shown in (3) in fig. 4 is displayed. On the portrait mode photographing interface 414, a light spot function control 415, a light effect function control 416, and a skin beautifying function control 417 are displayed.
The light spot function is to adopt light spots of different shapes to blur the background (non-portrait part) of the image, so that the portrait part in the image can be more highlighted, and the picture is more beautiful. When it is detected that the user clicks on the spot function control 415, the handset displays options for different spot effects, for example: circular, heart-shaped, straight, etc., the user can select different spot effects. After the light spot function is turned on, the mobile phone performs light spot effect processing on the image acquired by the viewfinder 404 in real time, and displays the processed image (image with light spot effect) in the viewfinder 404 in real time.
The light effect function adds vivid light and shadow effects to the image, such as natural light, studio light, contour light, stage light, and the like. When it is detected that the user clicks the light effect function control 416, the mobile phone displays options of different light and shadow effects, and the user can select different light and shadow effects. After the light effect function is turned on, the mobile phone performs light and shadow effect processing on the image acquired by the viewfinder 404 in real time, and displays the processed image (the image with the light and shadow effect) in the viewfinder 404 in real time.
The skin beautifying function is used for processing the face part in the image and includes a smoothing function, a face-thinning function, and a skin-tone function. The smoothing function can smooth the skin of the human face in the image. The skin-tone function can adjust the skin color of the face in the image. The face-thinning function can perform face-thinning processing on the face in the image.
It should be noted that the light spot function, the light effect function, and the skin beautifying function may be used in a superimposed manner, that is, a user may simultaneously turn on several of the light spot, light effect, and skin beautifying functions. Then, when the mobile phone processes the image, the processed image can have the effects of these functions. For example: if the user turns on the light spot function and the skin beautifying function, the mobile phone performs blurring processing on the background in the acquired image and performs skin beautifying processing (including any one or more of smoothing, face thinning, and skin tone) on the person in the acquired image. The processed image has both the effect of the blurred background and the effect of the portrait processing. Another example: if the user turns on the light spot function, the light effect function, and the skin beautifying function, the mobile phone performs light spot processing, light effect processing, and skin beautifying processing on the acquired image. The human face in the processed image has the effect of the blurred background, the light and shadow effect, and the effect of the portrait processing.
When it is detected that the user clicks the skin beautifying function control 417, the mobile phone displays a sub-menu of the skin beautifying function, that is, displays an interface 418 as shown in (4) in fig. 4. The interface 418 may include a smoothing function control 419, a face-thinning function control 420, a skin-tone function control 421, and an intensity (or level) control 422. In some examples, when the mobile phone displays the sub-menu of the skin beautifying function, a control for closing the sub-menu may also be displayed in the interface, so that the user can quickly close the sub-menu of the skin beautifying function; or a return control may be displayed, so that the user can quickly return to the portrait mode interface and set the light spot function and the light effect function.
The mobile phone may, by default, not turn on any function, or turn on one or more functions by default, or turn on by default the functions that the user had selected when the camera application was last exited. For example: when the mobile phone just enters the interface 418, the smoothing function, the face-thinning function, and the skin-tone function are not turned on, and the user needs to manually turn on the desired functions. Another example: when the mobile phone just enters the interface 418, the face-thinning function is turned on by default, and the user can manually turn off the face-thinning function or turn on other functions. Another example: when the user last exited the camera application, the smoothing function and the face-thinning function were turned on; when the camera application is opened again and the interface 418 is entered, the smoothing function and the face-thinning function may be turned on by default. The embodiment of the present application does not specifically limit this.
In some examples, the user may set the intensity of the processing applied to the face in the image by sliding the slider in the intensity (or level) control 422. For example: sliding to the left reduces the intensity or level of the processing (smoothing, face thinning, skin tone processing, etc.), and sliding to the right increases it. In other examples, the user may also set the processing intensity by setting or inputting a value (e.g., 1 to 10, or 1 to 100, etc.). For example: the larger the value, the greater the processing intensity; the smaller the value, the smaller the processing intensity; and when the value is zero, no image processing is performed. For example: in the interface 423 shown in (5) in fig. 4, the face-thinning level is zero, that is, no face-thinning processing is performed on the face in the acquired image, so the image in the finder frame of the interface 423 is an image without face-thinning processing. When it is detected that the user sets the face-thinning level to a non-zero value (for example, 5), the mobile phone performs face-thinning processing on the face in the acquired image, and the image in the finder frame is as shown in the interface 425 in (6) in fig. 4. The embodiment of the present application does not limit the specific form of the control for setting the processing intensity.
The mobile phone can also hide the intensity control 422 in some scenarios, so as to prevent the intensity control 422 from blocking part of the image in the viewfinder. The embodiment of the present application does not specifically limit this.

For example: if the user does not operate the mobile phone within a preset time (e.g., 5 seconds), for instance because the user is adjusting the shooting position, the mobile phone may hide the intensity control 422 to prevent the image in the finder frame from being blocked. At this stage the mobile phone still retains the settings of the intensity control 422 before it was hidden. If the user needs to reset the intensity, the user can click the corresponding function control, for example, the face-thinning function control 420; the mobile phone then displays the intensity control 422, and the user can set the face-thinning intensity by sliding the intensity control 422.

Another example: if it is detected that the user clicks the control 408 for instructing to switch between the front and rear cameras, the mobile phone switches the currently used camera, and at this time the mobile phone may also hide the intensity control 422 so as to avoid blocking the image in the viewfinder. Similarly, the user can make the mobile phone display the intensity control 422 again by clicking the corresponding function control, such as the face-thinning function control 420.
In some embodiments of the present application, after the user clicks the face-thinning function control 420, the user can slide the intensity control 422 to set the intensity or level of the face-thinning processing performed on the image by the mobile phone. When it is detected that the user slides the intensity control 422 so that the intensity of the face-thinning processing is not zero, the mobile phone starts the face-thinning function.
Similarly, after the user clicks the smooth function control 419, the intensity control 422 is slid again, and the intensity or level of the smoothing of the image by the mobile phone may be set. When the user is detected to slide the strength control 422, so that the intensity of the smoothing processing is not zero, the mobile phone starts the smoothing function.
Similarly, after the user clicks the skin color function control 421, the intensity control 422 is slid again, and the intensity or level of the skin color processing performed on the image by the mobile phone may be set. And when the user is detected to slide the strength control 422, so that the strength of skin color processing is not zero, starting the skin color function by the mobile phone. The skin color level is similar to the face thinning level and will not be described again.
It should be noted that the smoothing function, the face-thinning function, and the skin-tone function can also be used in a superimposed manner. That is, the user can simultaneously turn on several skin beautifying functions, and when the mobile phone processes the image, the processed image can have the effects of these skin beautifying functions. For example: if the user has turned on the smoothing function and the face-thinning function, the mobile phone will both smooth the acquired image and perform face-thinning processing on it. The face in the processed image has both the smoothing effect and the face-thinning effect. Another example: if the user has turned on the smoothing function, the face-thinning function, and the skin-tone function, the mobile phone will perform smoothing, face thinning, and skin-tone adjustment on the acquired image. The face in the processed image has the smoothing effect, the face-thinning effect, and the skin-tone change.
Hereinafter, a method of performing face thinning processing on an image by a mobile phone after a user turns on a face thinning function among skin beautifying functions will be described.
After the face thinning function is started, the mobile phone uses the first camera to collect RGB images, and the second camera collects depth images. The mobile phone performs face recognition according to the RGB image, determines a target face according to the recognized face, and performs face thinning processing on the target face.
The number of the target faces can be one or more. In some examples, the number of target faces to be subjected to face thinning processing may be set in advance to be one in consideration of the processing efficiency of the mobile phone on the image.
Specifically, the mobile phone performs face recognition according to the RGB image acquired by the first camera in real time; for the specific implementation of face recognition, reference may be made to the prior art, and details are not described here. In some embodiments, when the mobile phone recognizes a face, the mobile phone may mark the recognized face, for example: the recognized face is marked with a face box and displayed in the finder frame 404. Referring to the interface 423 shown in (5) of fig. 4, in the interface 423 a face box 424 is used to mark the recognized face. In still other embodiments, when the mobile phone does not recognize a face, the user may be prompted in the form of text or voice, etc., that no face is currently recognized, so that the user can know this or make a corresponding adjustment. For example: the mobile phone may display a text prompt such as "no face is currently recognized" through the prompt control 413.
It can be understood that after the camera application is opened, the mobile phone enters a photographing mode or a video recording mode, and then performs face recognition according to the RGB images acquired by the first camera. The mobile phone may also start face recognition according to the RGB image acquired by the first camera after entering the portrait mode, which is not limited in the embodiment of the present application.
In some embodiments of the present application, when the mobile phone recognizes a face in the RGB image, it may further determine whether the recognized face meets a preset condition. Only after the preset condition is met can the face be determined to be the target face and face-thinning processing be performed on it. For example: if the face in the RGB image is too large or too small, some face detail data may be missing, so that the mobile phone subsequently cannot acquire the face feature points or establish the 3D face model, and face-thinning processing cannot be performed or the face-thinning effect is poor. Thus, the mobile phone may determine whether the size of the recognized face satisfies a preset condition, such as the recognized face having an area greater than a first threshold (e.g., 20% of the area of the finder frame 404) and less than a second threshold (e.g., 90% of the area of the finder frame 404).
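The size check described above can be sketched as follows; this is only an illustration, and the threshold ratios (20% and 90% of the finder frame area) and the (x, y, width, height) face-box representation are assumptions.

```python
# Illustrative sketch only: checking whether a recognized face satisfies the
# size precondition before it is treated as a candidate target face.
def face_meets_size_condition(face_box, frame_width, frame_height,
                              lower_ratio=0.20, upper_ratio=0.90):
    _, _, w, h = face_box                      # face box assumed as (x, y, w, h)
    face_area = w * h
    frame_area = frame_width * frame_height
    # The face must be neither too small (detail lost) nor too large
    # (feature points / 3D face model cannot be established reliably).
    return lower_ratio * frame_area < face_area < upper_ratio * frame_area
```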
It can be understood that the mobile phone may start to determine whether the recognized face meets the preset condition after entering the portrait mode. The mobile phone can also start to determine whether the recognized face meets the preset condition after the face thinning function is started. The embodiments of the present application do not limit this.
In other embodiments of the present application, when the mobile phone recognizes that there is a face in the image, and the face meets a preset condition, the face may be determined to be a target face.
In still other embodiments of the present application, when the mobile phone recognizes that there are a plurality of faces in the image and the plurality of faces all satisfy the preset condition, the mobile phone may automatically determine one face, or a predetermined number of faces, as the target face(s). For example: the face with the largest area may be determined as the target face according to the areas of the recognized faces, or a predetermined number of faces with larger areas may be determined as target faces. Another example: a face located at the middle position may be determined as the target face, or a predetermined number of faces located at or near the middle position may be determined as target faces, based on the recognized face positions. Another example: the face with the largest area among those located at the middle position may be determined as the target face according to the areas and positions of the recognized faces. Corresponding face-thinning processing is then performed on the determined target face. In some examples, the mobile phone may further mark the automatically determined target face, or use a text prompt or other manner, to inform the user that the mobile phone will subsequently perform face-thinning processing on that face.
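A minimal sketch of the automatic selection strategies mentioned above (largest area, or nearest the middle position) is given below; the face-box representation and the strategy names are assumptions for illustration only.

```python
# Illustrative sketch only: automatically choosing a target face among several
# faces that satisfy the preset condition. Faces are assumed to be given as
# (x, y, w, h) boxes in pixel coordinates.
def pick_target_face(face_boxes, frame_width, frame_height, strategy="largest"):
    if not face_boxes:
        return None
    if strategy == "largest":
        # Face with the largest area becomes the target face.
        return max(face_boxes, key=lambda b: b[2] * b[3])
    # Otherwise: face whose centre is closest to the middle of the frame.
    cx, cy = frame_width / 2.0, frame_height / 2.0
    def distance_to_centre(b):
        x, y, w, h = b
        return ((x + w / 2.0) - cx) ** 2 + ((y + h / 2.0) - cy) ** 2
    return min(face_boxes, key=distance_to_centre)
```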
For example: as shown in (1) in fig. 5, an interface 501 is displayed after the face thinning function is turned on for the mobile phone, and a preview image is displayed in a finder frame of the interface 501. The mobile phone recognizes that there are a face 1 and a face 2 in the preview image. The mobile phone can automatically confirm the face 2 with the largest face area as the target face. The mobile phone can also identify the determined target face, for example: the identifier 502 identifies the face 2 as a target face to prompt the user that the face is the target face.
In still other examples, the cell phone may also prompt the user to manually select a target face when multiple faces are identified in the image.
For example: as shown in the picture taking interface 503 in (2) of fig. 5, the mobile phone may prompt the user to manually select the target face through a text prompt, for example, displaying text 504. The user can select the target face by clicking the area where the target face is located, or by selecting the area where the target face is located in a frame and the like. After detecting the selection operation of the user, the mobile phone determines that the corresponding face is the target face.
For another example: as shown in (3) of fig. 5, the mobile phone may also pop up a list box, such as: a list box 506 prompts the user to select a target face. Alternatively, as the photographing interface 507 shown in fig. 5 (4), the mobile phone may also pop up a list box, for example: a list box 508 prompts the user to select a target face. The user may select the target face by selecting it in the list box or selection box. After detecting the selection operation of the user, the mobile phone determines that the corresponding face is the target face.
In addition, the mobile phone may also prompt the user through voice prompt, animation and other manners, which are not described herein.
In some embodiments of the application, after the target face is determined, and the user does not click the shooting control 409, the mobile phone may modify the contour line of the target face according to the RGB image acquired by the first camera in real time, and display the modified image in the view finder. That is, the user can preview the image after the face thinning process in real time through the finder frame. For example: in the interface 425 shown in (6) in fig. 4, the contour of the target face in the finder frame has been modified. In this way, the user can see the face-thinning effect from the viewfinder before clicking the shooting control 409.
Specifically, face feature point detection is performed on the target face in the RGB image acquired by the first camera in real time; for example, the dlib open-source algorithm may be used to detect a specific number (for example, 68) of facial feature points. The contour line of the target face is determined from the detected feature points, and the contour line is modified to narrow the contour of the target face, achieving the face-thinning effect.
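A minimal sketch of the feature-point step is shown below, assuming the dlib Python bindings and the standard 68-point predictor file are available; in the standard 68-point model, points 0 to 16 form the jawline that delimits the face contour. The actual inward modification of the contour is device-specific and is only indicated here by returning the contour points.

```python
# Minimal sketch: detect 68 facial feature points with dlib and extract the
# contour (jawline) points. The predictor file path is an assumption.
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_contour_points(rgb_image):
    """rgb_image: HxWx3 uint8 array. Returns (all 68 points, contour points) or None."""
    faces = detector(rgb_image, 1)
    if not faces:
        return None
    shape = predictor(rgb_image, faces[0])
    points = np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)])
    contour = points[0:17]   # jawline points delimiting the face contour
    return points, contour
```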
In other embodiments of the application, after the target face is determined, and when the user does not click the shooting control 409, the mobile phone may further perform face thinning processing according to the RGB image acquired by the first camera in real time and the depth image acquired by the second camera in real time. Namely, a 3D face model corresponding to a target face is constructed, a specific area in the face in an image is further determined according to the 3D face model, and the skin color of the specific area is shaded according to the principle that people make up the face, so that the face thinning effect is achieved. And displaying the image after the face thinning processing in a view frame in real time.
That is, the specific region of the face in the image displayed in the finder frame at this time is shaded. In this way, the user can see the face-thinning effect from the viewfinder before clicking the shooting control 409. That is, the user can preview the image after the face thinning process in real time through the finder frame.
Illustratively, the specific region may include a region 1 and a region 2. Wherein, the area 1 is an area needing skin color brightening, such as: nasal bridge region, forehead region, etc. Region 2 is a region where skin tone brightness needs to be reduced, such as: the outer contour region and the alar region, etc. Shading the skin tone of the particular region may be, for example: and fusing the original image of the area 1 with the highlight template to increase the transparency of the area 1 and brighten the skin color. The original image of the area 2 is fused with the shadow template to reduce the transparency of the area 2 and reduce the skin color brightness. The shaded image is the face-thinned image.
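The exact fusion with the highlight and shadow templates is not specified here, so the following is only a simplified sketch of the idea: the brightness inside the region-1 mask is raised and the brightness inside the region-2 mask is lowered by a weighted blend; the mask format and blend weights are assumptions.

```python
# Illustrative sketch only: brighten region 1 and darken region 2 of the face
# by blending the original image with bright / dark layers inside region masks.
import numpy as np

def shade_regions(rgb, mask_region1, mask_region2, strength=0.3):
    """rgb: HxWx3 uint8 image; masks: HxW float arrays in [0, 1]."""
    img = rgb.astype(np.float32)
    highlight = np.full_like(img, 255.0)        # bright layer for region 1
    shadow = np.zeros_like(img)                 # dark layer for region 2
    a1 = (strength * mask_region1)[..., None]
    a2 = (strength * mask_region2)[..., None]
    out = img * (1.0 - a1) + highlight * a1     # raise brightness in region 1
    out = out * (1.0 - a2) + shadow * a2        # lower brightness in region 2
    return np.clip(out, 0, 255).astype(np.uint8)
```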
For example: the example is illustrated with region 2 comprising an outer contour region and region 1 comprising a nasal bridge region. Then, the interface 601 shown in (1) in fig. 6 is a photographing interface when the face-thinning function is not turned on (or the face-thinning strength is zero). The interface 601 displays an image 3 in the view box, and the image 3 can be understood as an original image captured by a camera. On the interface 601, the mobile phone detects that the user slides the slider in the face-thinning level control, and sets the face-thinning level to be non-zero, for example: the face-thinning level is 5, the mobile phone starts the face-thinning function, and an interface 602 shown in (2) in fig. 6 is displayed. At this time, an image 4 is displayed in the view frame of the interface 602, and the image 4 is processed by the face thinning method provided by the present application. For convenience of explanation, the part for adjusting the skin color of the human face in the original image is identified by shading in the figure.
Comparing image 4 with image 3, it is apparent that the face shape (face contour line) of the human face in image 4 is the same as that in image 3, that is, the face shape in image 4 is the same as that of the real person. However, the skin color of some regions of the human face in image 4 differs from that in image 3. Specifically, the skin color of region 2 is darker than the corresponding skin color in image 3, and the skin color of region 1 is brighter than the corresponding skin color in image 3. Therefore, the face in image 4 looks more three-dimensional than the face in image 3 and has a face-thinning effect. Moreover, the real contour line of the human face is retained, so that the captured image is more real and natural.
The specific area may be area 1 and area 2 preset in the mobile phone. That is, the mobile phone automatically processes the preset area 1 and area 2.
The specific area may also be an area that is automatically selected and determined by the mobile phone from the preset region 1 and region 2. For example: region 1 includes the forehead region and the nasal bridge region. When the mobile phone detects that the brightness of the nasal bridge region reaches a threshold, the nasal bridge region can be considered bright enough and need not be processed. In this case, the mobile phone may process only the forehead region in region 1, as well as region 2.
The specific area may also be an area set by a user.
For example: referring to an interface 701 shown in (1) in fig. 7, a user may select an area to be processed when the mobile phone is used for face thinning processing through an option box 702. Another example is: referring to the interface 703 shown in (2) in fig. 7, the user can select or click on an area of the face to be processed, for example, by a frame. The embodiment of the application does not limit the way in which the user sets the specific area.
In still other embodiments of the present application, after the target face is determined, and the user does not click the shooting control 409, the mobile phone processes the image by using both a face-thinning method for adjusting the brightness of the specific area and a face-thinning method for modifying the contour line of the face, and displays the processed image in the view frame in real time. That is, a specific region of the face in the image displayed in the finder frame at this time is shaded, and the contour of the face is also modified. In this way, the user can see the face-thinning effect from the viewfinder before clicking the shooting control 409. That is, the user can preview the image after the face thinning process in real time through the finder frame.
In still other embodiments of the present application, after the target face is determined, and when the user does not click the shooting control 409, the mobile phone may not perform face thinning on the RGB image acquired by the first camera in real time, that is, the image that is not subjected to face thinning is displayed in the finder frame. That is, the user may not preview the face-thinned image in real time through the finder frame. For example: in the interface 423 shown in (5) in fig. 4, the contour line of the target face in the finder frame is not modified.
After the mobile phone turns on the face-thinning function, the user can also set the degree of face thinning applied to the image through the face-thinning level control, and the mobile phone performs face-thinning processing of different degrees on the image according to the face-thinning level set by the user.
The face-thinning level may represent the degree or intensity of face thinning, or may be understood as the degree of difference between the processed image and the original image. Specifically, in the face thinning method for adjusting the brightness of the skin color in the specific area of the face, the face thinning level may reflect the adjustment strength of the brightness of the skin color in the specific area of the face. The higher the face thinning level is, the greater the adjustment intensity of the brightness of the skin color of the specific area in the face is, that is, the greater the difference between the skin color of the image after face thinning processing and the skin color of the original image is. For example: the brighter the skin tone in the nasal bridge region or forehead, the darker the skin tone in the alar region or outer contour region. The lower the face thinning level is, the smaller the adjustment intensity of the brightness of the skin color of the specific area in the human face is, that is, the smaller the difference between the skin color of the image after face thinning processing and the skin color of the original image is. In the face thinning method for modifying the contour line in the human face, the face thinning level may reflect the inward contraction strength of the contour line in the human face. The higher the face thinning level is, the more the contour lines in the face shrink inward, i.e., the smaller the contour of the face-thinned image is compared with the contour of the original image.
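For illustration only, the relationship between the face-thinning level and the adjustment intensity could be expressed as a simple mapping such as the one below; the linear form, the level range of 0 to 10, and the maximum strength are assumptions, with level 0 meaning no processing.

```python
# Illustrative sketch only: map the user-selected face-thinning level to the
# brightness-adjustment strength used when shading the face regions.
def level_to_strength(level, max_level=10, max_strength=0.6):
    level = max(0, min(level, max_level))
    return (level / max_level) * max_strength

# Example: a higher level gives a larger difference from the original image.
# shade_regions(rgb, mask_region1, mask_region2, strength=level_to_strength(9))
```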
Taking a face thinning method for adjusting the brightness of skin color in a specific area in a human face as an example, different image effects processed at different face thinning levels are explained.
For example: the example is illustrated with region 2 comprising an outer contour region and region 1 comprising a nasal bridge region.
As already indicated above, the interface 601 shown in (1) in fig. 6 is an interface displayed when the face thinning function is not turned on, i.e., the face thinning level is set to zero. The image 3 displayed in the view finder in the interface 601 is an image without face thinning processing, and can be understood as an original image collected by a camera. As shown in (2) of fig. 6, the interface 602 is an interface when the face-thinning function is turned on for the mobile phone and the face-thinning level is a lower level (for example, the face-thinning level is set to 5). The image 4 displayed in the view-finding frame in the interface 602 is an image after the face-thinning processing is performed on the mobile phone, and the strength of the face-thinning processing of the mobile phone is low.
The interface 603 shown in (3) in fig. 6 is the interface when the face-thinning level is set to a higher level on the mobile phone (for example, the face-thinning level is set to 9). The image 5 displayed in the finder frame in the interface 603 is an image after the mobile phone performs face-thinning processing, and the intensity of the face-thinning processing is high.
For convenience of explanation, the part for adjusting the skin color of the human face in the original image is identified by shading, and the difference from the original image is reflected by the density of oblique lines in the shading.
Comparing images 3, 4, and 5 shows that the face shapes (face contour lines) of the faces in these three images are the same. That is, the faces in image 4 and image 5 both have the same shape as the face of the real person. Only the skin tone brightness of some regions of the human face differs among the three images.
Contrast image 3 and image 4: the skin tone of region 1 in image 4 is lighter than the skin tone of region 1 in image 3 and the skin tone of region 2 in image 4 is darker than the skin tone of region 2 in image 3. Thus, image 4 appears more stereoscopic than the face in image 3 visually, and the face appears thinner.
Comparison of image 4 and image 5: the skin tone of region 1 in image 5 is lighter than the skin tone of region 1 in image 4 and the skin tone of region 2 in image 5 is darker than the skin tone of region 2 in image 4. Thus, the image 5 appears more stereoscopic than the face in the image 4 visually, and the face appears thinner.
Therefore, the higher the face thinning level is, the stronger the intensity of image processing by the mobile phone is, and the face looks thinner in the processed image.
When the mobile phone detects, during preview, an operation of the user switching between the front and rear cameras, the image presented in the viewfinder may change. The operation of switching the front and rear cameras may be, for example, the user clicking the front/rear camera switching control 408, or the user switching by voice, which is not limited in the embodiment of the present application.
For example: the camera (for example, the front camera) currently used by the mobile phone comprises a second camera, so that the mobile phone can acquire a depth image, and the mobile phone can adopt a face thinning method for adjusting the brightness of a specific area in a human face. Then, the mobile phone may process the image acquired in real time by using a face thinning method for adjusting the brightness of a specific area in the face, and display the processed image in the view finder.
After the camera is switched by the mobile phone, if no second camera is arranged in the switched camera (for example, a rear camera), the mobile phone cannot acquire a depth image and cannot adopt a face thinning method for adjusting the brightness of a specific area in a human face. Namely, the mobile phone can only use a face thinning method for modifying the contour line of the face to process the image acquired in real time and display the processed image in the view-finding frame.
In some embodiments of the present application, after detecting the operation of the user switching between the front and rear cameras, the mobile phone may display corresponding prompt information. For example: the user is prompted that the switched-to camera has no second camera, so the face-thinning method of adjusting the brightness of a specific area in the face cannot be used. Another example: the user is prompted that the switched-to camera has a second camera, so the face-thinning method of adjusting the brightness of a specific area in the face can be used.
In other embodiments of the present application, after detecting that the user switches the operation of the front and rear cameras, the mobile phone may also automatically determine which face-thinning method to use for processing according to whether the currently used camera includes the second camera.
For example: the camera (for example, a front camera) currently used by the mobile phone includes a second camera, so that the mobile phone can acquire a depth image, and the mobile phone can determine to process the image acquired in real time by using a face thinning method for adjusting the brightness of a specific area in a human face and/or a face thinning method for modifying the contour line of the human face, and display the processed image in a view finder.
After the mobile phone switches the cameras, if no second camera is arranged in the switched cameras (for example, the rear camera), the mobile phone cannot acquire depth images, and the mobile phone can determine to process the images acquired in real time by using a face thinning method for modifying the contour lines of human faces and display the processed images in a view frame.
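The selection logic described in the last two paragraphs can be sketched as follows; the function names are hypothetical placeholders for the two face-thinning methods, and the check simply depends on whether a depth image is available from the camera currently in use.

```python
# Illustrative sketch only: choose the face-thinning method according to
# whether the currently used camera can provide a depth image.
def thin_by_region_brightness(rgb_image, depth_image):
    raise NotImplementedError("method that adjusts brightness of specific face regions")

def thin_by_contour_modification(rgb_image):
    raise NotImplementedError("method that modifies the face contour line")

def thin_face(rgb_image, depth_image=None):
    if depth_image is not None:
        # Second camera present: depth is available, so the region-brightness
        # method (optionally combined with contour modification) can be used.
        return thin_by_region_brightness(rgb_image, depth_image)
    # No second camera: fall back to contour-line modification only.
    return thin_by_contour_modification(rgb_image)
```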
When the mobile phone detects a shooting instruction, the mobile phone executes a shooting operation, the current RGB image is collected through the first camera, the current depth image is collected through the second camera, and face thinning processing is carried out on the RGB image and the depth image collected at the moment.
The shooting instruction may be a shooting instruction operation of the mobile phone detecting the user, and the shooting instruction operation of the user may be, for example, clicking the shooting control 409, or issuing a voice command for shooting, or pressing a volume key, other preset operations, and the like. The mobile phone can also automatically execute the photographing operation after a preset time period. In the embodiment of the application, the manner of triggering the photographing operation by the mobile phone is not limited.
It should be noted that, when the mobile phone starts the photographing function, that is, enters the photographing mode, the video recording mode or the portrait mode, only the first camera may be started, and the first camera acquires RGB images in real time. After the face thinning function is started, the second camera is started by the mobile phone, and the depth image is collected by the second camera, so that the electric quantity of the mobile phone is saved. Optionally, the mobile phone may also open the first camera and the second camera when entering the portrait mode, that is, before the mobile phone opens the face slimming function, the first camera and the second camera are already opened. The embodiment of the present application does not limit this.
In some examples, the mobile phone may perform image processing by using a face-thinning method for adjusting the brightness of a specific region in a face, so as to visually enhance the stereoscopic impression of the face in the image and achieve the face-thinning effect. For a specific face thinning method for adjusting the brightness of a specific area in a human face, reference may be made to the above description, which is not repeated herein.
In other examples, the mobile phone may process the image by using a face thinning method for adjusting the brightness of a specific area in the face and combining with a method for modifying a face contour line, so as to further improve the face thinning effect of the face in the image.
It should be noted that the face-thinning method adopted by the mobile phone when performing the photographing operation may be the same as or different from the face-thinning method adopted by the mobile phone when previewing.
For example: when the mobile phone previews, a face thinning method for modifying the face contour line in the image is adopted for the image acquired in real time. When the mobile phone performs the photographing operation, two face thinning methods are combined for the currently acquired image, namely, a face thinning method for adjusting the brightness of a specific area in a human face is combined with a face thinning method for modifying the contour line of the human face.
Another example is: when the mobile phone previews, a face thinning method for modifying the face contour line in the image is adopted for the image acquired in real time. When the mobile phone performs the photographing operation, a face thinning method for adjusting the brightness of a specific area in a human face is adopted for the currently acquired image.
After the face-thinning processing is performed on the target face in the image, the mobile phone can store the image after the face-thinning processing. In some examples, the face-thinned images may be marked, or stored in an album, by default. For example: the interface 801 shown in (1) in fig. 8 is a photo browsing interface, and the picture 802 is an image obtained after the mobile phone performs face-thinning processing. Another example: the interface 803 shown in (2) in fig. 8 is an album browsing interface, and the album 804 can be used to store images after the mobile phone performs face-thinning processing.
In other examples, the mobile phone may also save the image without face thinning, i.e., save the original image. That is, after photographing, two images are obtained, one image being an image that has not been subjected to face thinning processing, and the other image being an image that has been subjected to face thinning processing. The embodiment of the present application does not limit this.
In some embodiments of the present application, after the mobile phone performs the photographing operation, the mobile phone may return to the photographing interface of the portrait mode. For example: see the interface 426 shown in (7) of fig. 4. In this interface, a thumbnail of the image last shot by the mobile phone is displayed in the photo thumbnail 410. At this time, the mobile phone continues to acquire the RGB image and the depth image in real time. The settings of the mobile phone still keep the settings of the last shot, namely the settings of the light spot function, the light effect function, and the skin beautifying function, where the settings of the skin beautifying function include the smoothing function, the face-thinning function, the skin-tone function, and the like. The user can directly shoot again using the settings of the last shot, or can change the corresponding settings again through the corresponding controls. The embodiments of the present application are not limited in this respect.
When the mobile phone is in other modes such as continuous shooting, slow-motion shooting, or video recording, the face-thinning method provided in this application can also be used to perform face-thinning processing on the faces contained in the images. For example: during continuous shooting, the camera acquires a plurality of images, and the mobile phone can perform face-thinning processing on the face in each image to obtain a plurality of face-thinned images. Another example: when the mobile phone shoots slow motion or records a video, the slow motion and the video are composed of individual frames of images; face-thinning processing can be performed on the face contained in each frame, and the processed frames are then composed into new slow-motion footage or a new video.
If the image stored in the mobile phone or the image received from other devices contains depth data, the face thinning method provided by the application can be adopted to perform face thinning processing on the human face in the image.
The above embodiment is described by taking, as an example, turning on the face-thinning function through the sub-menu of the skin beautifying function in the portrait mode. There may be other ways of turning on the face-thinning function, several of which are described below by way of example.
In an embodiment of the present application, after the mobile phone enters the photographing interface, as shown in (1) in fig. 9, the photographing interface 901 of the mobile phone includes a control indicating to enter the portrait mode. When it is detected that the user clicks a control indicating entering the portrait mode, or after it is detected that the user slides right on the finder frame, the mobile phone enters the portrait mode, and a portrait mode photographing interface 902 shown in (2) in fig. 9 is displayed.
A control 903 for instructing to start the face thinning function may be displayed in the portrait mode photographing interface 902. When the mobile phone detects that the user clicks the control 903 indicating to start the face thinning function on the portrait mode photographing interface 902, the mobile phone starts the face thinning function.
In another embodiment of the present application, as shown in (3) of fig. 9, a schematic diagram of a photographing interface 904 of another mobile phone is shown. A control 905 for starting a face-thinning function (which may also be referred to as indicating to enter a face-thinning mode) is displayed in the photographing interface 904. Similarly, when the mobile phone is in the photographing mode or the video recording mode, after it is detected that the user clicks the control 905 for opening the face slimming function, or after it is detected that the user slides leftwards (or rightwards) on the view finder, the mobile phone opens the face slimming function, and displays the photographing interface 906 in the face slimming mode as shown in (4) in fig. 9.
In another embodiment of the present application, as shown in (1) of fig. 10, a schematic diagram of a photographing interface 1001 of another mobile phone is shown. A photographing setting control 1002 is displayed in the photographing interface 1001. When it is detected that the user clicks the photographing setting control 1002, the cellular phone displays the photographing setting interface 1002 as shown in (2) in fig. 10. The photographing setting interface 1002 includes a control 1003 for turning on a face thinning function. When the user is detected to click the control 1003 for opening the face slimming function, the mobile phone opens the face slimming function, that is, displays a photographing interface 1004 shown in (3) in fig. 10.
In another embodiment of the present application, as shown in (4) in fig. 10, a schematic diagram of a photo setting interface 1005 for another mobile phone is provided. The photo settings interface 1005 includes a control 1006 that indicates entry into portrait mode. When detecting that the user clicks the control 1006 indicating to enter the portrait mode, the mobile phone enters the portrait mode photographing mode, that is, a portrait mode photographing interface 1007 shown in (5) in fig. 10 is displayed. In the portrait mode photographing interface 1007, a control 1008 for turning on a face thinning function is displayed. When the mobile phone detects that the user clicks the control 1008 for starting the face slimming function on the portrait mode photographing interface 1007, the mobile phone starts the face slimming function.
In another embodiment of the present application, the mobile phone may turn on the face thinning function after detecting a predefined gesture. For example: when a predefined gesture of the user on the photographing interface 1101 shown in (1) in fig. 11 is detected (the predefined gesture may be, for example, a gesture of sliding upwards in the viewfinder frame), the mobile phone enters the portrait mode photographing interface 1102 shown in (2) in fig. 11, or the mobile phone directly turns on the face thinning function and enters the photographing interface 1103 shown in (3) in fig. 11. The portrait mode interface 1102 displays a control for turning on the face thinning function, and the face thinning function can be turned on through this control.
In another embodiment of the present application, when the mobile phone is in the photographing mode, the portrait mode, or the video mode, the first camera of the mobile phone may collect RGB images in real time, and display the collected RGB images on the view finder. The mobile phone can carry out face recognition on the RGB image collected by the first camera, and can prompt a user to start a face thinning function when the fact that a face exists in the collected RGB image is determined. In some examples, the mobile phone may prompt the user to start the face slimming function in a text or voice manner, and then the user may start the face slimming function in any of the above manners. In other examples, the mobile phone may display a control for turning on the face thinning function in the current interface (the photographing interface in the photographing mode, or the portrait mode or the video recording mode). When detecting that the user clicks the control for opening the face slimming function, the mobile phone opens the face slimming function.
For example: after a human face is detected, the displayed control for turning on the face thinning function may be the control 1202 in the photographing interface 1201 shown in (1) in fig. 12, or may be the control 1204 at the bottom of the photographing interface 1203 shown in (2) in fig. 12.
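A minimal sketch of the face-detection step that triggers the prompt described above is given below, assuming OpenCV's bundled Haar cascade as the face detector (the detector choice and function names are illustrative assumptions, not the implementation of this application):

```python
# Detect whether the current preview frame contains a face, and only then
# prompt the user (or show the control) to turn on the face-thinning function.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def should_offer_face_thinning(preview_bgr):
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0   # show the control / prompt only when a face is found
```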
Hereinafter, a detailed description will be made of the above-mentioned method of constructing a 3D face model of a target face from an RGB image and a depth image, and determining a specific region from the 3D face model.
It should be noted that the time stamps of the captured RGB image and the depth image need to be synchronized, that is, the captured RGB image and the depth image should be taken at the same time or within a very short interval. The reason is that, during shooting, the images acquired by the first camera and the second camera change in real time; only if the acquired RGB image and depth image are shot at the same time or within a very short interval can the two images be considered to depict the same scene, and only then is it accurate to construct a 3D face model from the time-stamp-synchronized RGB image and depth image.
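The following is an illustrative sketch (not taken from the application) of how an RGB frame could be paired with the depth frame whose timestamp is closest, rejecting pairs that are too far apart in time. The frame representation and the 30 ms threshold are assumptions made for illustration only.

```python
# Pick the depth frame closest in time to a given RGB frame.
def pick_synchronized_depth(rgb_ts_ms, depth_frames, max_skew_ms=30):
    """depth_frames: list of (timestamp_ms, depth_image) tuples."""
    ts, depth = min(depth_frames, key=lambda f: abs(f[0] - rgb_ts_ms))
    if abs(ts - rgb_ts_ms) > max_skew_ms:
        return None          # no depth frame close enough in time
    return depth
```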
When a 3D face model of a target face is constructed, the detection of face feature points may be performed on RGB images first, that is, key feature points of the face, such as eyes, nose tip, mouth corner points, eyebrows, and contour points of each part of the face, are automatically located according to the RGB images. The method for detecting the human face feature point can refer to the prior art, and is not described herein again.
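As one example of such a prior-art landmark detection method (not necessarily the one used in this application), the sketch below uses dlib's 68-point facial landmark predictor; it assumes the publicly available model file "shape_predictor_68_face_landmarks.dat" is present.

```python
# Detect face feature points (landmarks) in an RGB image with dlib.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_landmarks(rgb_image):
    faces = detector(rgb_image, 1)              # upsample once to find small faces
    landmarks = []
    for rect in faces:
        shape = predictor(rgb_image, rect)
        # 68 (x, y) points: eyes, nose tip, mouth corners, eyebrows, face contour
        landmarks.append([(p.x, p.y) for p in shape.parts()])
    return landmarks
```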
Then, the human face feature points determined from the RGB image are input into a standard human face 3D model, for example a three-dimensional deformable model (3D Morphable Model, 3DMM), to obtain a rough face 3D model corresponding to the target face.
The standard human face 3D model is a face 3D model obtained by inputting a large number of face pictures, serving as a training set, into a machine learning model for training. The standard face 3D model represents the average face; the model includes a plurality of points (each with three-dimensional coordinates) and the lines formed between these points. In the embodiments of the present application, some bone points can be defined among the points in the standard face 3D model, and the points and lines corresponding to a specific region can be determined according to these bone points.
For example: in the application, points and lines where the nose bones are located can be defined in a standard human face 3D model in advance, and further points and lines corresponding to the nose bridge region and the nose wing region are defined according to the points and lines where the nose bones are located.
For another example: in the present application, the points of the eyebrow arch, the cheekbones and the chin can be defined in the standard face 3D model in advance. The inner contour line can be defined as the line segment extending vertically downward from the eyebrow peak of the eyebrow arch to the horizontal level of the cheekbone. The mandible line is the line segment from the end of the inner contour line to the depression of the chin, at the same horizontal level. The outer contour line is the outer contour line of the standard human face 3D model. The area enclosed by the left inner contour line, the left mandible line and the left outer contour line, and the area enclosed by the right inner contour line, the right mandible line and the right outer contour line, can be defined as the outer contour regions, and the points and lines corresponding to the outer contour regions are further determined.
Another example is: in the present application, a point of the forehead mound (frontal eminence) can be defined in the standard human face 3D model in advance. Further, the points and lines corresponding to the forehead region are defined according to the points of the forehead mound.
Note that the face feature points are included among the points of the standard face 3D model. Therefore, inputting the face feature points determined from the RGB image of the target face into the standard face 3D model can be understood as transforming the standard face 3D model according to the determined face feature points, so as to obtain a rough face 3D model corresponding to the target face. For example: if the distance between two corners of the face determined according to the RGB image is 15 cm, while the distance between the corresponding two corners in the standard face 3D model is 12 cm, the standard face 3D model needs to be deformed so that, in the face 3D model corresponding to the target face, the distance between the two corners is 15 cm.
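A simplified sketch of this deformation idea is given below: the standard model's vertices are scaled so that a chosen inter-point distance matches the distance measured from the RGB image. Real 3DMM fitting solves for many shape coefficients; this sketch only illustrates the principle, and the vertex indices are assumptions.

```python
# Scale a standard face 3D model so one measured distance matches the target face.
import numpy as np

def scale_model_to_face(model_points, idx_a, idx_b, measured_distance_cm):
    """model_points: (N, 3) array of the standard model's vertices."""
    model_distance = np.linalg.norm(model_points[idx_a] - model_points[idx_b])
    scale = measured_distance_cm / model_distance   # e.g. 15 cm / 12 cm = 1.25
    return model_points * scale
```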
Further, since the human face feature points are obtained from an RGB image, they contain only two-dimensional data, whereas each point in the standard human face 3D model is three-dimensional. Therefore, the rough face 3D model corresponding to the target face needs to be further deformed according to the depth data in the depth image.
Then, filtering, hole filling and other operations are performed on the depth image, which is then fused with the rough face 3D model corresponding to the target face obtained in the previous step, so as to obtain an accurate face 3D model corresponding to the target face.
When the depth image is collected, due to changes in illumination, the infrared reflection characteristics of the material on the surface of the object and other factors, the obtained depth image may contain holes and noise, so filtering and hole-filling operations are required before the depth image is used.
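A minimal sketch of such preprocessing is shown below, assuming OpenCV: the depth map is median-filtered to suppress noise, and zero-depth pixels are treated as holes and filled by inpainting. The exact filtering and hole-filling operations used by the device are not specified in this application.

```python
# Filter a raw depth map and fill its holes before using it for 3D face modeling.
import cv2
import numpy as np

def clean_depth(depth_u16):
    depth = cv2.medianBlur(depth_u16.astype(np.float32), 5)     # suppress speckle noise
    holes = (depth == 0).astype(np.uint8)                        # 0 marks missing depth
    depth_8u = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    filled = cv2.inpaint(depth_8u, holes, 3, cv2.INPAINT_TELEA)  # fill holes from neighbors
    return filled
```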
After obtaining the face 3D model corresponding to the target face, a specific region in the target face may be determined according to predefined bone points and regions, for example: nasal bridge region, alar region, outer contour region, forehead region, and the like. Further, these specific regions may be processed.
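The sketch below illustrates how such predefined regions could be looked up once the face 3D model is fitted: each region is tied to a set of model vertex indices (the indices and the simple pinhole projection below are hypothetical placeholders, not the actual definitions in this application), and the region's vertices are projected into the image to obtain per-region pixel coordinates.

```python
# Look up a predefined face region on the fitted 3D model and project it to 2D.
import numpy as np

REGION_VERTEX_IDS = {              # hypothetical vertex index sets tied to bone points
    "nose_bridge":   [101, 102, 103, 104],
    "forehead":      [5, 6, 7, 8, 9],
    "nose_wing":     [120, 121, 122],
    "outer_contour": [200, 201, 202, 203, 204],
}

def region_points_2d(model_vertices, camera_matrix, region):
    """Project the region's 3D vertices (N, 3) into the image plane."""
    pts = model_vertices[REGION_VERTEX_IDS[region]]
    uvw = (camera_matrix @ pts.T).T                # simple pinhole projection
    return uvw[:, :2] / uvw[:, 2:3]
```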
Hereinafter, a method of shading a skin color in a specific region will be described in detail.
The specific areas include an area 1 and an area 2, where the area 1 is an area whose skin color needs to be brightened, such as the nasal bridge region and the forehead region, and the area 2 is an area whose skin color brightness needs to be reduced, such as the outer contour region and the nose wing (alar) region.
The brightness of the skin color can be understood as the brightness of the image. Generally, the color of an image is represented by luminance and chrominance: chrominance is the property of a color excluding luminance and reflects the hue and saturation of the color, while luminance refers to how bright the color is. Therefore, adjusting the brightness of the skin color in a specific area includes processing the luminance and/or the chrominance.
For example: when the image adopts an RGB format, the color of each pixel is obtained by superposing an R value, a G value and a B value. Adjusting the brightness of a specific area can be understood as transforming the original image. Assume that the value of a pixel in the original image is f(i, j), where (i, j) represents the spatial position of the pixel, and the value of the transformed pixel is g(i, j). A transformation formula may be employed: g(i, j) = a × f(i, j) + b, where the coefficient a affects the contrast of the image and the coefficient b affects the brightness of the image.
When a = 1, the original image is unchanged. When a > 1, the contrast is enhanced and the image appears sharper. When a < 1, the contrast decreases and the image appears darker. Therefore, in some embodiments of the present application, the shading of the image may be adjusted by adjusting the value of a.
The coefficient b (typically greater than 0) affects the brightness of the image. Thus, in other embodiments of the present application, the image may be brightened by increasing the value of b, or darkened by decreasing the value of b.
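A minimal sketch of this linear transform g(i, j) = a × f(i, j) + b is given below, using OpenCV's convertScaleAbs (a is the contrast gain, b the brightness offset); results are clipped to the valid 0-255 range. This is an illustration of the formula, not the application's implementation.

```python
# Apply g(i, j) = a * f(i, j) + b to an image.
import cv2

def adjust_contrast_brightness(image_bgr, a=1.0, b=0.0):
    return cv2.convertScaleAbs(image_bgr, alpha=a, beta=b)

# brighter = adjust_contrast_brightness(img, a=1.0, b=30)   # raise brightness
# darker   = adjust_contrast_brightness(img, a=1.0, b=-30)  # lower brightness
```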
It can be understood that the overall brightness of an image is the combined effect of its luminance, chrominance and other parameters; therefore, when adjusting the brightness of the image, it is not excluded that other parameters of the image are adjusted as well.
In general, in an RGB image, the higher the R value, G value, and B value, the higher the luminance of the image. The lower the R value, G value, and B value, the lower the brightness of the image.
For another example: when the image adopts the HSL format, the color of each pixel is obtained by superposing a hue (Hue, H) value, a saturation (Saturation, S) value and a lightness (Lightness, L) value. The L value is typically used to reflect the brightness of the image. In some embodiments of the present application, the brightness of the image may be adjusted by adjusting the magnitude of the L value: as the L value increases, the image becomes brighter; as the L value decreases, the image becomes darker.
It can be understood that the overall brightness of an image is the combined effect of its luminance, chrominance and other parameters; therefore, when adjusting the brightness of the image, it is not excluded that other parameters of the image are adjusted as well.
It should be noted that, since the RGB color model and the HSL color model can be converted into each other, if the image uses the RGB color model, the image can also be converted into the HSL color model, its brightness adjusted by the HSL adjustment method described above, and then converted back to the RGB color model. This also achieves the brightness adjustment effect on the RGB image, which is not limited in the embodiments of the present application.
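The round-trip just described is sketched below, assuming OpenCV (whose default channel order is BGR): the image is converted to HLS, the L channel is scaled, and the result is converted back.

```python
# Adjust brightness via the HSL/HLS model and convert back to BGR.
import cv2
import numpy as np

def adjust_lightness(image_bgr, factor=1.2):
    hls = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HLS).astype(np.float32)
    hls[:, :, 1] = np.clip(hls[:, :, 1] * factor, 0, 255)   # channel 1 is L
    return cv2.cvtColor(hls.astype(np.uint8), cv2.COLOR_HLS2BGR)
```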
As another example: when the image adopts the YUV format, the color of each pixel is obtained by superposing a Y value, a U value and a V value, where Y represents luminance (Luma) and U and V represent chrominance (Chroma). The Y value is typically used to reflect the brightness of the image. In some embodiments of the present application, the brightness of the image may be adjusted by adjusting the magnitude of the Y value: as the Y value increases, the image becomes brighter; as the Y value decreases, the image becomes darker.
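The same idea in the YUV model is sketched below (again assuming OpenCV): only the Y (luma) channel is scaled, leaving the chrominance channels U and V untouched.

```python
# Adjust brightness via the Y channel of the YUV model.
import cv2
import numpy as np

def adjust_luma(image_bgr, factor=1.2):
    yuv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YUV).astype(np.float32)
    yuv[:, :, 0] = np.clip(yuv[:, :, 0] * factor, 0, 255)   # channel 0 is Y
    return cv2.cvtColor(yuv.astype(np.uint8), cv2.COLOR_YUV2BGR)
```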
It can be understood that the overall brightness of an image is the combined effect of its luminance, chrominance and other parameters; therefore, when adjusting the brightness of the image, it is not excluded that other parameters of the image are adjusted as well.
For example, when adjusting the brightness of a specific area, image processing techniques such as a filter or an alpha fusion technique may be used. The embodiment of the present application does not limit a specific processing method.
A filter can adjust the chrominance, luminance, hue and the like, and can also superimpose textures. By adjusting the chrominance and hue, a particular color family can be deepened, lightened or shifted in hue in a targeted manner while other color families remain unchanged. A filter can also be understood as a pixel-to-pixel mapping: the pixel value of the input image is mapped to a target pixel value through a preset mapping table, thereby realizing a special effect. It can be understood that the color-related parameters in the filter may be set according to the adjustment methods mentioned above.
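The "pixel-to-pixel mapping table" view of a filter is sketched below: a 256-entry lookup table is applied to every pixel value. A simple gamma curve is used here as the preset mapping purely for illustration; any preset table could be substituted.

```python
# Apply a preset per-pixel mapping table (a lookup-table filter) to an image.
import cv2
import numpy as np

def apply_lut_filter(image_bgr, gamma=0.8):
    table = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)],
                     dtype=np.uint8)              # the preset mapping table
    return cv2.LUT(image_bgr, table)              # map input pixels to target pixels
```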
Hereinafter, a specific implementation of shading a skin color in a specific region will be described by taking an alpha fusion technique as an example.
It has been mentioned above that in an RGB image, the higher the R value, G value, and B value, the higher the luminance of the image. The lower the R value, G value, and B value, the lower the brightness of the image. Therefore, the brightness value of the specific area can be increased or decreased by changing the RGB values (R value, G value and B value).
In some embodiments of the present application, the original image of the area 1 may be fused with a highlight template to increase the luminance of the area 1 and lighten the skin color, that is, to increase the brightness value of each pixel in the area 1. Correspondingly, the original image of the area 2 is fused with a shadow template to reduce the luminance of the area 2 and darken the skin color, that is, to decrease the brightness value of each pixel in the area 2.
The color values (i.e., RGB values, including R values, G values, and B values) of the pixels in the fused image can be obtained according to formula 1, as follows:
OutPutColor = (RGBsrc × Ksrc) + (RGBdst × Kdst)    (formula 1)
Where OutPutColor is the color value of the fused image, RGBsrc is the color value of the original image (RGB image), Ksrc is the fusion coefficient of the original image, RGBdst is the color value of the highlight or shadow template used, and Kdst is the fusion coefficient of the highlight or shadow template.
Ksrc and Kdst are inversely related, and both are related to the set face-thinning level.
For example: when RGBdst is the color value of the highlight template used, the color value of the highlight template is generally larger, that is, its luminance value is larger, than that of the original image. Therefore, after formula 1 is applied, the color value of each pixel in the fused image increases, its brightness value increases, and the fused image becomes brighter.
In addition, Kdst is proportional to the set face-thinning level, and Ksrc is inversely proportional to the set face-thinning level. That is, the higher the set face-thinning level, the brighter the fused image.
When RGBdst is the color value of the shadow template used, the color value of the shadow template is generally smaller, that is, its luminance value is smaller, than that of the original image. Therefore, after formula 1 is applied, the color value of each pixel in the fused image decreases, its brightness value decreases, and the fused image becomes darker.
In addition, Kdst is proportional to the set face-thinning level, and Ksrc is inversely proportional to the set face-thinning level. That is, the higher the set face-thinning level, the darker the fused image.
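An illustrative sketch of formula 1 follows, assuming that a region mask and the highlight/shadow templates are already available. The mapping from the face-thinning level to Kdst (a 0-10 level scale with Ksrc = 1 - Kdst) is an assumption for illustration, not a definition from this application.

```python
# Alpha-fuse the original image with a highlight or shadow template inside one region.
import numpy as np

def alpha_fuse(original_rgb, template_rgb, region_mask, face_thinning_level):
    """original_rgb, template_rgb: float arrays in [0, 255]; region_mask: 0/1 mask."""
    kdst = np.clip(face_thinning_level / 10.0, 0.0, 1.0)    # assumed 0-10 level scale
    ksrc = 1.0 - kdst
    fused = original_rgb * ksrc + template_rgb * kdst         # OutPutColor of formula 1
    mask = region_mask[..., None]                             # apply only inside the region
    return original_rgb * (1 - mask) + fused * mask
```

Using a brighter-than-original highlight template raises the RGB values (and hence the brightness) of area 1; a darker shadow template lowers them in area 2.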
For another example, in combination with a practical case: after the mobile phone enters the portrait photographing mode and before the face thinning function is turned on, a photograph is taken to obtain an image 6, which can be regarded as an original image without face thinning processing. Then the face thinning function is turned on; with the other settings of the mobile phone unchanged and the position and environment of the photographed object unchanged, the mobile phone takes another photograph to obtain an image 7, which is an image processed by the face thinning method provided in the embodiments of the present application. For example, the above alpha fusion technique may be used to process the original image collected during photographing. Specifically, the brightness value of the area 1 (e.g., the nose bridge region and the forehead region) in the image can be increased by increasing its RGB values, so as to brighten the area 1. Similarly, the brightness value of the area 2 (e.g., the nose wing region and the outer contour region) in the image can be decreased by decreasing its RGB values, so as to darken the area 2.
Comparing image 7 with image 6 reveals that: the luminance value of the area 1 in image 7 is greater than that of the area 1 in image 6; the luminance value of the area 2 in image 7 is less than that of the area 2 in image 6; and the luminance values of the other regions in image 7 are equal to (or differ only slightly from) the luminance values of the corresponding regions in image 6.
It can be understood that the above alpha fusion technique adjusts the brightness of a specific area by changing the RGB values. The brightness of an image is the combined effect of its luminance, chrominance and other parameters; therefore, when the luminance of the image is adjusted, the chrominance of the image may also change, but the amount of change in chrominance and the like is small. That is, after the above alpha fusion, when image 7 is converted from the RGB space to the HSL space (or the YUV space), it can be seen that: the L value (or Y value) of the area 1 in image 7 is larger than that in image 6, while the other parameter values of the area 1 are unchanged or change only slightly; the L value (or Y value) of the area 2 becomes smaller, while the other parameter values of the area 2 are unchanged or change only slightly; and the values of the parameters of the other regions do not change or change only slightly.
It will be appreciated that one or more of the above modules or units may be implemented in software, hardware, or a combination of both. When any of the above modules or units is implemented in software, the software exists in the form of computer program instructions stored in a memory, and a processor may be used to execute those program instructions to implement the above method flows. The processor may include, but is not limited to, at least one of the following computing devices that run software: a central processing unit (CPU), a microprocessor, a digital signal processor (DSP), a microcontroller (MCU), an artificial intelligence processor, or the like, each of which may include one or more cores for executing software instructions to perform operations or processing. The processor may be built into an SoC (system on chip) or an application-specific integrated circuit (ASIC), or may be a separate semiconductor chip. In addition to the core for executing software instructions, the processor may further include necessary hardware accelerators, such as a field programmable gate array (FPGA), a PLD (programmable logic device), or a logic circuit implementing dedicated logic operations.
When the above modules or units are implemented in hardware, the hardware may be any one or any combination of a CPU, a microprocessor, a DSP, an MCU, an artificial intelligence processor, an ASIC, an SoC, an FPGA, a PLD, a dedicated digital circuit, a hardware accelerator, or a discrete device that is not integrated, which may run necessary software or is independent of software to perform the above method flows.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or make a contribution to the prior art, or all or part of the technical solutions may be implemented in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: flash memory, removable hard drive, read only memory, random access memory, magnetic or optical disk, and the like.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

1. A method of image processing, comprising:
detecting a first operation input by a user, and in response to the first operation, opening a camera by the electronic equipment and displaying a photographing interface; wherein the photographing interface comprises a viewing frame;
detecting a second operation input by the user, and responding to the second operation, starting a first function by the electronic equipment;
displaying a first image in the view frame according to an image acquired by a camera, wherein the first image comprises a face of a target shooting object;
determining a first area and a second area in the face of the target photographic object;
detecting a third operation input by the user, and in response to the third operation, photographing by the electronic equipment according to an image acquired by a camera to generate a second image, wherein the second image comprises a face of the target photographic object;
the second image is an image generated by the electronic device after processing an original image according to the first function, where the original image is generated by photographing a face of the target photographic object when the electronic device does not turn on the first function, a luminance value of a first region of the face in the second image is larger than a luminance value of the first region of the face in the original image, a luminance value of a second region of the face in the second image is smaller than a luminance value of the second region of the face in the original image, and a luminance value of a third region of the face in the second image is kept unchanged relative to a luminance value of the third region of the face in the original image.
2. The method of image processing according to claim 1, wherein the first region is at least one of a nasal bridge region and a forehead region, and the second region is at least one of a wing of nose region and an outer contour region.
3. The method of claim 1 or 2, wherein the electronic device generating the second image after processing the original image according to the first function comprises:
and modifying the contour line of the face in the original image.
4. The method according to any one of claims 1 to 3, wherein the contour of the face in the first image is modified during the displaying of the first image in the viewfinder from the image captured by the camera.
5. The method of image processing according to any one of claims 1 to 4, further comprising:
in the process that a third operation input by the user is detected and a second image is generated by the electronic device according to the image collected by the camera in response to the third operation, the image processing method further includes:
if the image acquired by the camera comprises the faces of at least two shooting objects, the electronic equipment automatically determines the face of the target shooting object, or determines the face of the target shooting object according to a fourth operation input by the user.
6. The image processing method of claim 5, wherein the electronic device automatically determining the face of the target photographic subject comprises:
and the electronic equipment determines the face of the target shooting object according to the area or the position of each face in the faces of the at least two shooting objects.
7. A method of image processing, comprising:
detecting a first operation input by a user, and in response to the first operation, opening a camera by the electronic equipment and displaying a photographing interface; wherein the photographing interface comprises a viewing frame;
detecting a second operation input by the user, and responding to the second operation, starting a first function by the electronic equipment;
displaying a first image in the view frame according to an image acquired by a camera, wherein the first image comprises a face of a target shooting object;
determining a first area and a second area in the face of the target photographic object;
detecting a third operation input by the user, and in response to the third operation, photographing by the electronic equipment according to an image acquired by a camera to generate a second image, wherein the second image comprises a face of the target photographic object;
the second image is an image generated by the electronic device after processing the image acquired by the camera according to the first function, wherein a brightness value of a first region of the face in the second image is increased, a brightness value of a second region of the face in the second image is decreased, and a brightness value of a third region of the face in the second image is kept unchanged.
8. The method of image processing according to claim 7, wherein the first region is at least one of a nasal bridge region and a forehead region, and the second region is at least one of a wing of nose region and an outer contour region.
9. The image processing method according to claim 7 or 8, wherein in a process of generating the second image by the electronic device after processing the image acquired by the camera according to the first function, an outline of the face in the second image is modified.
10. The method according to any of claims 7-9, wherein the contour of the face in the first image is modified during the displaying of the first image within the viewfinder from the image captured by the camera.
11. The method of image processing according to any one of claims 7-10, further comprising:
in the process that a third operation input by the user is detected and a second image is generated by the electronic device according to the image collected by the camera in response to the third operation, the image processing method further includes:
if the image acquired by the camera comprises the faces of at least two shooting objects, the electronic equipment automatically determines the face of the target shooting object, or determines the face of the target shooting object according to a fourth operation input by the user.
12. The image processing method of claim 11, wherein the electronic device automatically determining the face of the target photographic subject comprises:
and the electronic equipment determines the face of the target shooting object according to the area or the position of each face in the faces of the at least two shooting objects.
13. An electronic device, comprising: at least one camera, a processor, a memory, and a touch screen, the at least one camera, the memory, and the touch screen coupled with the processor, the memory for storing computer program code, the computer program code comprising computer instructions that, when read from the memory by the processor, cause the electronic device to:
detecting a first operation input by a user, responding to the first operation, opening a camera by the electronic equipment, and displaying a photographing interface on the touch screen; wherein the photographing interface comprises a viewing frame;
detecting a second operation input by the user, and responding to the second operation, starting a first function by the electronic equipment;
displaying a first image in the view frame according to the image acquired by the at least one camera, wherein the first image comprises a face of a target shooting object;
determining a first area and a second area in the face of the target photographic object;
detecting a third operation input by the user, and in response to the third operation, photographing by the electronic equipment according to an image acquired by the at least one camera to generate a second image, wherein the second image comprises a face of the target photographic object;
the second image is an image generated by the electronic device after processing an original image according to the first function, where the original image is generated by photographing a face of the target photographic object when the electronic device does not turn on the first function, a luminance value of a first region of the face in the second image is larger than a luminance value of the first region of the face in the original image, a luminance value of a second region of the face in the second image is smaller than a luminance value of the second region of the face in the original image, and a luminance value of a third region of the face in the second image is kept unchanged relative to a luminance value of the third region of the face in the original image.
14. The electronic device of claim 13, wherein the first region is at least one of a nasal bridge region and a forehead region, and the second region is at least one of a nasal wing region and an outer contour region.
15. The electronic device of claim 13 or 14, wherein the electronic device generates the second image after processing the original image according to the first function comprises:
and modifying the contour line of the face in the original image.
16. The electronic device according to any of claims 13-15, wherein during the process of displaying the first image in the view box by the electronic device according to the image captured by the at least one camera, the contour of the face in the first image is modified.
17. An electronic device, comprising: at least one camera, a processor, a memory, and a touch screen, the memory and the touch screen coupled to the processor, the memory for storing computer program code, the computer program code comprising computer instructions that, when read from the memory by the processor, cause the electronic device to:
detecting a first operation input by a user, responding to the first operation, opening a camera by the electronic equipment, and displaying a photographing interface on the touch screen; wherein the photographing interface comprises a viewing frame;
detecting a second operation input by the user, and responding to the second operation, starting a first function by the electronic equipment;
displaying a first image in the view frame according to the image acquired by the at least one camera, wherein the first image comprises a face of a target shooting object;
determining a first area and a second area in the face of the target photographic object;
detecting a third operation input by the user, and in response to the third operation, photographing by the electronic equipment according to an image acquired by the at least one camera to generate a second image, wherein the second image comprises a face of the target photographic object;
the second image is an image generated by the electronic device after processing the image acquired by the at least one camera according to the first function, wherein a brightness value of a first region of the face in the second image is increased, a brightness value of a second region of the face in the second image is decreased, and a brightness value of a third region of the face in the second image is kept unchanged.
18. The electronic device according to claim 17, wherein in a process of generating the second image by the electronic device after processing the image acquired by the at least one camera according to the first function, an outline of the face in the second image is modified.
19. The electronic device according to claim 17 or 18, wherein during the process of displaying the first image in the view box by the electronic device according to the image acquired by the at least one camera, the contour line of the face in the first image is modified.
20. The electronic device of any of claims 17-19,
in the process that the third operation input by the user is detected, and the electronic equipment generates a second image by taking a picture according to the image acquired by the at least one camera in response to the third operation,
if the image acquired by the at least one camera comprises the faces of at least two shooting objects, the electronic equipment automatically determines the face of the target shooting object, or determines the face of the target shooting object according to a fourth operation input by the user.
CN201811544123.6A 2018-12-17 2018-12-17 Image processing method and electronic equipment Pending CN111327814A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811544123.6A CN111327814A (en) 2018-12-17 2018-12-17 Image processing method and electronic equipment
PCT/CN2019/122837 WO2020125410A1 (en) 2018-12-17 2019-12-04 Image processing method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811544123.6A CN111327814A (en) 2018-12-17 2018-12-17 Image processing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN111327814A true CN111327814A (en) 2020-06-23

Family

ID=71100732

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811544123.6A Pending CN111327814A (en) 2018-12-17 2018-12-17 Image processing method and electronic equipment

Country Status (2)

Country Link
CN (1) CN111327814A (en)
WO (1) WO2020125410A1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112465910A (en) * 2020-11-26 2021-03-09 成都新希望金融信息有限公司 Target shooting distance obtaining method and device, storage medium and electronic equipment
CN113099107A (en) * 2021-02-26 2021-07-09 无锡闻泰信息技术有限公司 Video shooting method, device and medium based on terminal and computer equipment
CN113435445A (en) * 2021-07-05 2021-09-24 深圳市鹰硕技术有限公司 Image over-optimization automatic correction method and device
CN113473013A (en) * 2021-06-30 2021-10-01 展讯通信(天津)有限公司 Display method and device for beautifying effect of image and terminal equipment
CN113486714A (en) * 2021-06-03 2021-10-08 荣耀终端有限公司 Image processing method and electronic equipment
CN113891009A (en) * 2021-06-25 2022-01-04 荣耀终端有限公司 Exposure adjusting method and related equipment
CN113923372A (en) * 2021-06-25 2022-01-11 荣耀终端有限公司 Exposure adjusting method and related equipment
WO2022042671A1 (en) * 2020-08-31 2022-03-03 展讯通信(上海)有限公司 Wearable device and image signal processing apparatus thereof
CN114429506A (en) * 2022-01-28 2022-05-03 北京字跳网络技术有限公司 Image processing method, apparatus, device, storage medium, and program product
CN114625292A (en) * 2020-11-27 2022-06-14 华为技术有限公司 Icon setting method and electronic equipment
CN115640414A (en) * 2022-08-10 2023-01-24 荣耀终端有限公司 Image display method and electronic equipment
CN115767290A (en) * 2022-09-28 2023-03-07 荣耀终端有限公司 Image processing method and electronic device
CN116048323A (en) * 2022-05-27 2023-05-02 荣耀终端有限公司 Image processing method and electronic equipment

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112492205B (en) * 2020-11-30 2023-05-09 维沃移动通信(杭州)有限公司 Image preview method and device and electronic equipment
CN114827442B (en) * 2021-01-29 2023-07-11 华为技术有限公司 Method for generating image and electronic equipment
CN115484393B (en) * 2021-06-16 2023-11-17 荣耀终端有限公司 Abnormality prompting method and electronic equipment
CN113421211B (en) * 2021-06-18 2024-03-12 Oppo广东移动通信有限公司 Method for blurring light spots, terminal equipment and storage medium
CN113627328A (en) * 2021-08-10 2021-11-09 安谋科技(中国)有限公司 Electronic device, image recognition method thereof, system on chip, and medium
CN116052236A (en) * 2022-08-04 2023-05-02 荣耀终端有限公司 Face detection processing engine, shooting method and equipment related to face detection
CN117119291A (en) * 2023-02-06 2023-11-24 荣耀终端有限公司 Picture mode switching method and electronic equipment
CN116363538B (en) * 2023-06-01 2023-08-01 贵州交投高新科技有限公司 Bridge detection method and system based on unmanned aerial vehicle

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010177779A (en) * 2009-01-27 2010-08-12 Canon Inc Imaging apparatus, control method, and program
CN103607537A (en) * 2013-10-31 2014-02-26 北京智谷睿拓技术服务有限公司 Control method of camera and the camera
CN107730445A (en) * 2017-10-31 2018-02-23 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN108846807A (en) * 2018-05-23 2018-11-20 Oppo广东移动通信有限公司 Light efficiency processing method, device, terminal and computer readable storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8248482B2 (en) * 2008-05-15 2012-08-21 Samsung Electronics Co., Ltd. Digital camera personalization
CN104992402B (en) * 2015-07-02 2019-04-09 Oppo广东移动通信有限公司 A kind of U.S. face processing method and processing device
CN106998423A (en) * 2016-01-26 2017-08-01 宇龙计算机通信科技(深圳)有限公司 Image processing method and device
CN107038680B (en) * 2017-03-14 2020-10-16 武汉斗鱼网络科技有限公司 Self-adaptive illumination beautifying method and system
CN107592457B (en) * 2017-09-08 2020-05-15 维沃移动通信有限公司 Beautifying method and mobile terminal
CN108320266A (en) * 2018-02-09 2018-07-24 北京小米移动软件有限公司 A kind of method and apparatus generating U.S. face picture

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022042671A1 (en) * 2020-08-31 2022-03-03 展讯通信(上海)有限公司 Wearable device and image signal processing apparatus thereof
CN112465910B (en) * 2020-11-26 2021-12-28 成都新希望金融信息有限公司 Target shooting distance obtaining method and device, storage medium and electronic equipment
CN112465910A (en) * 2020-11-26 2021-03-09 成都新希望金融信息有限公司 Target shooting distance obtaining method and device, storage medium and electronic equipment
CN114625292A (en) * 2020-11-27 2022-06-14 华为技术有限公司 Icon setting method and electronic equipment
CN113099107A (en) * 2021-02-26 2021-07-09 无锡闻泰信息技术有限公司 Video shooting method, device and medium based on terminal and computer equipment
CN113486714B (en) * 2021-06-03 2022-09-02 荣耀终端有限公司 Image processing method and electronic equipment
CN113486714A (en) * 2021-06-03 2021-10-08 荣耀终端有限公司 Image processing method and electronic equipment
CN113891009A (en) * 2021-06-25 2022-01-04 荣耀终端有限公司 Exposure adjusting method and related equipment
CN113923372A (en) * 2021-06-25 2022-01-11 荣耀终端有限公司 Exposure adjusting method and related equipment
CN113891009B (en) * 2021-06-25 2022-09-30 荣耀终端有限公司 Exposure adjusting method and related equipment
CN113473013A (en) * 2021-06-30 2021-10-01 展讯通信(天津)有限公司 Display method and device for beautifying effect of image and terminal equipment
CN113435445A (en) * 2021-07-05 2021-09-24 深圳市鹰硕技术有限公司 Image over-optimization automatic correction method and device
CN114429506A (en) * 2022-01-28 2022-05-03 北京字跳网络技术有限公司 Image processing method, apparatus, device, storage medium, and program product
CN114429506B (en) * 2022-01-28 2024-02-06 北京字跳网络技术有限公司 Image processing method, apparatus, device, storage medium, and program product
CN116048323A (en) * 2022-05-27 2023-05-02 荣耀终端有限公司 Image processing method and electronic equipment
CN116048323B (en) * 2022-05-27 2023-11-24 荣耀终端有限公司 Image processing method and electronic equipment
CN115640414A (en) * 2022-08-10 2023-01-24 荣耀终端有限公司 Image display method and electronic equipment
CN115640414B (en) * 2022-08-10 2023-09-26 荣耀终端有限公司 Image display method and electronic device
CN115767290A (en) * 2022-09-28 2023-03-07 荣耀终端有限公司 Image processing method and electronic device
CN115767290B (en) * 2022-09-28 2023-09-29 荣耀终端有限公司 Image processing method and electronic device

Also Published As

Publication number Publication date
WO2020125410A1 (en) 2020-06-25

Similar Documents

Publication Publication Date Title
CN111327814A (en) Image processing method and electronic equipment
CN112532857B (en) Shooting method and equipment for delayed photography
CN109951633B (en) Method for shooting moon and electronic equipment
CN109496423B (en) Image display method in shooting scene and electronic equipment
CN113132620B (en) Image shooting method and related device
CN110231905B (en) Screen capturing method and electronic equipment
CN112130742B (en) Full screen display method and device of mobile terminal
CN113475057B (en) Video frame rate control method and related device
CN112262563B (en) Image processing method and electronic device
EP4050883A1 (en) Photographing method and electronic device
CN112532892B (en) Image processing method and electronic device
CN113542580B (en) Method and device for removing light spots of glasses and electronic equipment
CN113810603B (en) Point light source image detection method and electronic equipment
WO2023015991A1 (en) Photography method, electronic device, and computer readable storage medium
CN113935898A (en) Image processing method, system, electronic device and computer readable storage medium
CN113170037A (en) Method for shooting long exposure image and electronic equipment
CN112449101A (en) Shooting method and electronic equipment
CN115967851A (en) Quick photographing method, electronic device and computer readable storage medium
CN113923372B (en) Exposure adjusting method and related equipment
CN114079725B (en) Video anti-shake method, terminal device, and computer-readable storage medium
CN114283195A (en) Method for generating dynamic image, electronic device and readable storage medium
CN113542574A (en) Shooting preview method under zooming, terminal, storage medium and electronic equipment
CN113497888A (en) Photo preview method, electronic device and storage medium
CN114245011A (en) Image processing method, user interface and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200623

RJ01 Rejection of invention patent application after publication