WO2020125410A1 - Image processing method and electronic device - Google Patents

Image processing method and electronic device

Info

Publication number
WO2020125410A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
face
electronic device
camera
area
Prior art date
Application number
PCT/CN2019/122837
Other languages
English (en)
Chinese (zh)
Inventor
陈拓
吴磊
孙雨生
李阳
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2020125410A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • the present application relates to the field of communication technology, and in particular, to an image processing method and an electronic device.
  • the shooting functions of cameras on electronic devices are becoming increasingly rich, and users' requirements for smart portrait beautification functions are also increasing. Among them, the thin face function is one that users are extremely concerned about.
  • the electronic device can identify the contour points of the face in the image to determine the lower contour line of the face, and then move the pixel positions of that lower contour line to make the lower contour of the face in the image smaller, thereby achieving a thin face effect.
  • the image 1 shown in (1) in FIG. 1 is an original image that has not undergone face-lift processing.
  • the contour line of the human face in image 1 is modified to obtain the face-lifted image 2 shown in (2) in FIG. 1.
  • An image processing method and an electronic device provided by the present application can visually improve the stereoscopic effect of a human face by changing the lightness and darkness of specific areas of the face in an image, achieve the effect of thinning the face, and improve the user experience.
  • In a first aspect, an image processing method provided by an embodiment of the present application includes:
  • a first operation input by the user is detected, and in response to the first operation, the electronic device turns on the camera and displays a photographing interface, where the photographing interface includes a viewfinder frame; a second operation input by the user is detected, and in response to the second operation, the electronic device enables the first function; a first image is displayed in the viewfinder frame according to the image collected by the camera, where the first image includes the face of a target subject; a first area and a second area in the face of the target subject are determined; a third operation input by the user is detected, and in response to the third operation, the electronic device takes a picture according to the image collected by the camera to generate a second image, where the second image includes the face of the target subject.
  • the second image is an image generated by the electronic device after processing an original image according to the first function, where the original image refers to an image generated by photographing the face of the target subject when the electronic device has not enabled the first function.
  • the brightness value of the first area of the face in the second image is greater than the brightness value of the first area of the face in the original image, and the brightness value of the second area of the face in the second image is smaller than the brightness value of the second area of the face in the original image.
  • the brightness value of a third area of the face in the second image remains unchanged relative to the brightness value of the third area of the face in the original image.
  • the principle of daily make-up on people's faces is used to adjust the lightness and darkness of the skin color in these specific areas, so as to visually enhance the three-dimensional sense of the human face and achieve the effect of thinning the face.
  • Because the thin face processing of the present application does not change the face shape of the human face in the original image, the true outline of the human face is retained, which makes the captured image more realistic and natural.
  • the first region is at least one of a nose bridge region and a forehead region
  • the second region is at least one of a nose wing region and an outer contour region.
  • the first area is an area of the face of the target subject whose brightness needs to be increased, and the second area is an area of the face of the target subject whose brightness needs to be decreased.
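  • As an illustration of this brightness adjustment, the following is a minimal sketch in Python with NumPy. It assumes the first and second areas are already available as boolean pixel masks, and the gain and attenuation factors are illustrative values, not parameters specified by the application:

```python
import numpy as np

def apply_face_slimming(original: np.ndarray,
                        first_mask: np.ndarray,
                        second_mask: np.ndarray,
                        gain: float = 1.15,
                        attenuation: float = 0.85) -> np.ndarray:
    """Brighten the first area (e.g. nose bridge / forehead) and darken the
    second area (e.g. nose wing / outer contour). Pixels outside both masks
    (the third area) keep their original brightness, and the face contour
    itself is never moved."""
    result = original.astype(np.float32)
    result[first_mask] *= gain          # brightness value increased
    result[second_mask] *= attenuation  # brightness value decreased
    return np.clip(result, 0, 255).astype(np.uint8)
```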
  • In some embodiments, the electronic device generating the second image by processing the original image according to the first function further includes modifying the contour line of the face in the original image.
  • That is, the face-lifting method provided by the embodiments of the present application, which adjusts the lightness and darkness of the skin tone in specific areas, can also be combined with the face-lifting method that modifies the contour lines of the face in the image, so that the face in the image is processed jointly to achieve a stronger thin face effect.
  • This helps avoid excessively modifying the contour line of the human face in the image, which would cause the contour line to look unnatural and very different from the real person.
  • In some embodiments, the contour line of the face in the first image is modified.
  • In this way, the thin face effect is displayed in the viewfinder frame, which allows the user to see the effect, or prompts the user that the enabled function is the thin face function, which helps enhance the user experience.
  • In some embodiments, when the third operation input by the user is detected and, in response to the third operation, the electronic device generates the second image by photographing according to the image collected by the camera, if the image collected by the camera contains the faces of at least two subjects, the image processing method further includes:
  • the electronic device automatically determines the face of the target subject, or determines the face of the target subject according to a fourth operation input by the user.
  • the embodiments of the present application provide a method for determining the face of a target subject when the collected image contains faces of multiple subjects.
  • the electronic device automatically determining the face of the target subject includes:
  • the electronic device determines the face of the target subject according to the area or position of each of the faces of the at least two subjects.
  • In this way, the electronic device automatically determines the face of the target subject, which reduces user operations and helps improve the user experience.
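  • A minimal sketch of such a selection rule (Python; the DetectedFace type and the concrete largest-area / closest-to-center criteria are illustrative assumptions, since the application only states that area or position is used):

```python
from dataclasses import dataclass

@dataclass
class DetectedFace:
    # Bounding box of one detected face in the collected image.
    x: int
    y: int
    w: int
    h: int

def pick_target_face(faces, image_w, image_h, by="area"):
    """Pick the target subject's face from at least two detected faces,
    either as the largest face (by area) or as the face closest to the
    image center (by position)."""
    if by == "area":
        return max(faces, key=lambda f: f.w * f.h)
    cx, cy = image_w / 2, image_h / 2
    return min(faces, key=lambda f: (f.x + f.w / 2 - cx) ** 2
                                    + (f.y + f.h / 2 - cy) ** 2)
```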
  • In a second aspect, an image processing method includes:
  • a first operation input by the user is detected, and in response to the first operation, the electronic device turns on the camera and displays a photographing interface, where the photographing interface includes a viewfinder frame; a second operation input by the user is detected, and in response to the second operation, the electronic device enables the first function; a first image is displayed in the viewfinder frame according to the image collected by the camera, where the first image includes the face of a target subject; a first area and a second area in the face of the target subject are determined; a third operation input by the user is detected, and in response to the third operation, the electronic device takes a picture according to the image collected by the camera to generate a second image, where the second image includes the face of the target subject.
  • the second image is an image generated by the electronic device after processing the image collected by the camera according to the first function, where the brightness value of the first area of the face in the second image is increased, the brightness value of the second area of the face in the second image is decreased, and the brightness value of a third area of the face in the second image remains unchanged.
  • the principle of daily make-up on people's faces is used to adjust the lightness and darkness of the skin color in these specific areas, so as to visually enhance the three-dimensional sense of the human face and achieve the effect of thinning the face.
  • Because the thin face processing of the present application does not change the face shape of the human face in the original image, the true outline of the human face is retained, which makes the captured image more realistic and natural.
  • the first region is at least one of a nose bridge region and a forehead region
  • the second region is at least one of a nose wing region and an outer contour region.
  • the contour line of the face in the second image is modified.
  • the contour line of the face in the first image is modified.
  • In some embodiments, if the image collected by the camera contains the faces of at least two subjects, the image processing method further includes:
  • the electronic device automatically determines the face of the target subject, or determines the face of the target subject according to a fourth operation input by the user.
  • the electronic device automatically determining the face of the target subject includes:
  • the electronic device determines the face of the target subject according to the area or position of each of the faces of the at least two subjects.
  • In a third aspect, the present application provides an electronic device, comprising: at least one camera, a processor, a memory, and a touch screen, where the at least one camera, the memory, and the touch screen are coupled to the processor; the memory is used to store computer program code, and the computer program code includes computer instructions; when the processor reads the computer instructions from the memory, the electronic device is caused to perform the following operations:
  • a first operation input by the user is detected, and in response to the first operation, the electronic device turns on the camera and displays a photographing interface on the touch screen, where the photographing interface includes a viewfinder frame; a second operation input by the user is detected, and in response to the second operation, the electronic device enables the first function; a first image is displayed in the viewfinder frame according to the image collected by the at least one camera, where the first image includes the face of a target subject; a first area and a second area in the face of the target subject are determined; a third operation input by the user is detected, and in response to the third operation, the electronic device generates a second image by taking a picture based on the image collected by the at least one camera, where the second image includes the face of the target subject.
  • the second image is an image generated by the electronic device after processing an original image according to the first function, where the original image refers to an image generated by photographing the face of the target subject when the electronic device has not enabled the first function.
  • the brightness value of the first area of the face in the second image is greater than the brightness value of the first area of the face in the original image, the brightness value of the second area of the face in the second image is smaller than the brightness value of the second area of the face in the original image, and the brightness value of a third area of the face in the second image remains unchanged relative to the brightness value of the third area of the face in the original image.
  • the principle of daily make-up on people's faces is used to adjust the lightness and darkness of the skin color in these specific areas, so as to visually enhance the three-dimensional sense of the human face and achieve the effect of thinning the face.
  • Because the thin face processing of the present application does not change the face shape of the human face in the original image, the true outline of the human face is retained, which makes the captured image more realistic and natural.
  • the first region is at least one of a nose bridge region and a forehead region
  • the second region is at least one of a nose wing region and an outer contour region.
  • In some embodiments, the electronic device generating the second image by processing the original image according to the first function includes modifying the contour line of the face in the original image.
  • the contour line of the face in the first image is modified.
  • In some embodiments, after the third operation input by the user is detected and, in response to the third operation, the electronic device generates the second image by taking a picture according to the image collected by the camera, if the image collected by the camera contains the faces of at least two subjects, the electronic device automatically determines the face of the target subject, or determines the face of the target subject according to a fourth operation input by the user.
  • the electronic device automatically determining the face of the target subject includes: the electronic device determines the face of the target subject according to the area or position of each of the faces of the at least two subjects.
  • In a fourth aspect, the present application provides an electronic device, including: at least one camera, a processor, a memory, and a touch screen.
  • the at least one camera, the memory, and the touch screen are coupled to the processor.
  • the memory is used to store computer program code, and the computer program code includes computer instructions.
  • when the processor reads the computer instructions from the memory, the electronic device is caused to perform the following operations:
  • a first operation input by the user is detected, and in response to the first operation, the electronic device turns on the camera and displays a photographing interface on the touch screen, where the photographing interface includes a viewfinder frame; a second operation input by the user is detected, and in response to the second operation, the electronic device enables the first function; a first image is displayed in the viewfinder frame according to the image collected by the at least one camera, where the first image includes the face of a target subject; a first area and a second area in the face of the target subject are determined; a third operation input by the user is detected, and in response to the third operation, the electronic device takes a picture according to the image collected by the at least one camera to generate a second image, where the second image includes the face of the target subject; the second image is an image generated by the electronic device after processing the image collected by the at least one camera according to the first function, where the brightness value of the first area of the face in the second image is increased, the brightness value of the second area of the face in the second image is decreased, and the brightness value of a third area of the face in the second image remains unchanged.
  • the first region is at least one of a nose bridge region and a forehead region
  • the second region is at least one of a nose wing region and an outer contour region.
  • the outline of the face in the second image is modified.
  • the contour line of the face in the first image is modified.
  • In some embodiments, when a third operation input by the user is detected and, in response to the third operation, the electronic device generates a second image by taking a picture according to the image collected by the at least one camera, if the image collected by the at least one camera contains the faces of at least two subjects, the electronic device automatically determines the face of the target subject, or determines the face of the target subject according to a fourth operation input by the user.
  • the electronic device automatically determining the face of the target subject includes: the electronic device determines the face of the target subject according to the area or position of each of the faces of the at least two subjects.
  • In a fifth aspect, a computer storage medium includes computer instructions, and when the computer instructions run on a terminal, the terminal is caused to perform the image processing method described in the first aspect and any possible implementation manner thereof.
  • In a sixth aspect, a computer storage medium includes computer instructions, and when the computer instructions run on a terminal, the terminal is caused to perform the image processing method described in the second aspect and any possible implementation manner thereof.
  • In a seventh aspect, a computer program product is provided; when the computer program product runs on a computer, the computer is caused to perform the image processing method described in the first aspect and any possible implementation manner thereof.
  • In an eighth aspect, a computer program product is provided; when the computer program product runs on a computer, the computer is caused to perform the image processing method described in the second aspect and any possible implementation manner thereof.
  • FIG. 1 is a schematic diagram of a face-lifting method in the prior art
  • FIG. 2 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a software architecture of an electronic device provided by an embodiment of this application.
  • FIG. 4 is a schematic diagram of user interfaces of some electronic devices provided by embodiments of the present application.
  • FIG. 5 is a schematic diagram of a user interface of some electronic devices provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a user interface of some electronic devices provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a user interface of some electronic devices provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a user interface of some electronic devices provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a user interface of some electronic devices provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a user interface of some electronic devices provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a user interface of some electronic devices provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a user interface of some electronic devices provided by an embodiment of the present application.
  • the terms "first" and "second" are used for description purposes only, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features.
  • the features defined as "first" and "second" may explicitly or implicitly include one or more of the features.
  • unless otherwise stated, the meaning of "plurality" is two or more.
  • an image processing method is provided, which can be applied to an electronic device.
  • the RGB image data and depth image data of the subject can be obtained.
  • one or more specific areas in the human face are determined, for example, the nose bridge area, the nose wing area, the outer contour area, and the like.
  • Because the thin face processing of the present application does not change the face shape of the human face in the original image, the true outline of the human face is retained, which makes the captured image more realistic and natural.
  • the brightness of the skin color can be understood as the brightness of the image.
  • the color of an image is expressed by both brightness and chroma.
  • Chroma is the property of a color excluding brightness; it reflects the hue and saturation of the color, while brightness refers to how light or dark the color is. Therefore, the adjustment of the lightness and darkness of the skin color of a specific area includes processing of the brightness and/or the chroma.
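  • For example, brightness can be adjusted independently of chroma by working in a luminance/chrominance color space such as YCbCr. A minimal sketch (Python with NumPy; the BT.601 coefficients are one common choice and the scale factor is illustrative, neither is prescribed by the application):

```python
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    # BT.601 full-range conversion; one common choice, not mandated here.
    m = np.array([[ 0.299,     0.587,     0.114   ],
                  [-0.168736, -0.331264,  0.5     ],
                  [ 0.5,      -0.418688, -0.081312]])
    ycbcr = rgb.astype(np.float32) @ m.T
    ycbcr[..., 1:] += 128.0
    return ycbcr

def adjust_region_brightness(rgb: np.ndarray, mask: np.ndarray,
                             scale: float = 1.1) -> np.ndarray:
    """Scale only the brightness (Y) of the pixels inside `mask`,
    leaving the chroma (Cb, Cr) untouched."""
    ycbcr = rgb_to_ycbcr(rgb)
    ycbcr[..., 0][mask] *= scale
    # Inverse BT.601 transform back to RGB.
    y, cb, cr = ycbcr[..., 0], ycbcr[..., 1] - 128.0, ycbcr[..., 2] - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```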
  • the face-lifting method provided here, which adjusts the lightness and darkness of the skin tone in specific areas, can also be combined with the face-lifting method that modifies the contour lines of the face in the image, so that the face in the image is processed jointly to achieve a stronger thin face effect.
  • This helps avoid excessively modifying the contour line of the human face in the image, which would cause the contour line to look unnatural and very different from the real person.
  • the electronic device in the present application may be a mobile phone, a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a netbook, a wearable electronic device, an augmented reality (AR) device, a virtual reality (VR) device, a vehicle-mounted device, a smart car, a robot, or the like; this application does not specially limit the specific form of the electronic device.
  • FIG. 2 shows a schematic structural diagram of the electronic device 100.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than shown, or combine some components, or split some components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100.
  • the controller can generate the operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetch and execution.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory may store instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may respectively couple the touch sensor 180K, charger, flash, camera 193, etc. through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface, and realize the touch function of the electronic device 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, to realize the function of answering the phone call through the Bluetooth headset.
  • the PCM interface can also be used for audio communication, sampling, quantizing and encoding analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface to implement the function of answering the phone call through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 to peripheral devices such as the display screen 194 and the camera 193.
  • MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the electronic device 100.
  • the processor 110 and the display screen 194 communicate through the DSI interface to realize the display function of the electronic device 100.
  • the GPIO interface can be configured via software.
  • the GPIO interface can be configured as a control signal or a data signal.
  • the GPIO interface may be used to connect the processor 110 to the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like.
  • GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that conforms to the USB standard specifications, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transfer data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiments of the present invention is only a schematic description, and does not constitute a limitation on the structure of the electronic device 100.
  • the electronic device 100 may also use different interface connection methods in the foregoing embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive wireless charging input through the wireless charging coil of the electronic device 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, internal memory 121, external memory, display screen 194, camera 193, wireless communication module 160, and the like.
  • the power management module 141 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 141 may also be disposed in the processor 110.
  • the power management module 141 and the charging management module 140 may also be set in the same device.
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like.
  • the mobile communication module 150 can receive the electromagnetic wave from the antenna 1 and filter, amplify, etc. the received electromagnetic wave, and transmit it to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor and convert it to electromagnetic wave radiation through the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be transmitted into a high-frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to a speaker 170A, a receiver 170B, etc.), or displays an image or video through a display screen 194.
  • the modem processor may be a separate device.
  • the modem processor may be independent of the processor 110, and may be set in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, such as wireless local area networks (WLAN) (for example, wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, etc.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives the electromagnetic wave via the antenna 2, frequency-modulates and filters the electromagnetic wave signal, and sends the processed signal to the processor 110.
  • the wireless communication module 160 can also receive the signal to be transmitted from the processor 110, frequency-modulate it, amplify it, and convert it to electromagnetic waves through the antenna 2 to radiate it out.
  • the antenna 1 of the electronic device 100 and the mobile communication module 150 are coupled, and the antenna 2 and the wireless communication module 160 are coupled so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite-based augmentation system (SBAS).
  • the electronic device 100 realizes a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations, and is used for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • the display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device 100 can realize a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP processes the data fed back by the camera 193. For example, when taking a picture, the shutter is opened, light is transmitted to the photosensitive element of the camera through the lens, and the optical signal is converted into an electrical signal; the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and the ISP converts it into an image visible to the naked eye. The ISP can also optimize the noise, brightness, and skin color of the image through its algorithms, and can optimize the exposure, color temperature, and other parameters of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (charge coupled device, CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other format image signals.
  • the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • In some embodiments, the mobile phone includes a first camera and a second camera, and the first camera and the second camera are located on the same side of the mobile phone, that is, on either the front or the back of the mobile phone.
  • the first camera and the second camera are both front cameras, or the first camera and the second camera are both rear cameras.
  • the first camera (one or more) and the second camera are used to acquire RGB images and depth (Depth) images.
  • the RGB image contains color information captured by the camera, where the pixel value may be R (red) G (green) B (blue) value.
  • a depth image is similar to a grayscale image, except that each of its pixel values is the actual distance from the camera to the object.
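  • As a concrete picture of this data layout, here is a minimal sketch (Python with NumPy; the resolutions and the millimeter unit are illustrative assumptions):

```python
import numpy as np

# Illustrative shapes only: an RGB frame plus an aligned depth map.
rgb = np.zeros((480, 640, 3), dtype=np.uint8)          # R, G, B values per pixel
depth = np.full((480, 640), 1200.0, dtype=np.float32)  # distance per pixel, e.g. in mm

row, col = 240, 320            # some pixel of interest
color = rgb[row, col]          # [R, G, B] values
distance_mm = depth[row, col]  # actual camera-to-object distance, not a gray level
```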
  • In some embodiments, the mobile phone includes two or more front cameras, where the first camera may be a CMOS camera among the front cameras, and the second camera may be either a structured light device or a time-of-flight (TOF) camera among the front cameras; or the first camera and the second camera are both CMOS cameras.
  • In other embodiments, the mobile phone includes two or more rear cameras, where the first camera may be a CMOS camera among the rear cameras, and the second camera may be either a structured light device or a TOF device among the rear cameras; or the first camera and the second camera are both CMOS cameras.
  • the digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the energy at the frequency point.
  • the video codec is used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, for example: moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
  • NPU is a neural-network (NN) computing processor.
  • the NPU can realize applications such as intelligent recognition of the electronic device 100, such as image recognition, face recognition, voice recognition, and text understanding.
  • In the embodiments of the present application, the NPU can perform face feature point recognition on the RGB image, fit the recognized face feature points to a standard 3D face model, and then establish a 3D face model corresponding to the target face according to the depth image. Further, a specific region of the human face can be determined according to the established 3D face model corresponding to the target face, and the skin color of that specific region can be lightened or darkened.
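  • The following sketch outlines that pipeline in Python. All helper functions are hypothetical placeholders for the NPU and vision stages described above, not a real library API; only the overall data flow follows the description:

```python
import numpy as np

def detect_landmarks(rgb):
    """Face feature point recognition on the RGB image (placeholder)."""
    return np.zeros((68, 2), dtype=np.float32)

def fit_standard_3d_model(landmarks):
    """Fit the feature points to a standard 3D face model (placeholder)."""
    return {"landmarks": landmarks}

def refine_with_depth(model, depth):
    """Adjust the standard model with the depth image to obtain the 3D
    model of the target face (placeholder)."""
    model["depth"] = depth
    return model

def region_mask(model, region, shape):
    """Project a named face region (e.g. 'nose_bridge') back to a pixel
    mask (placeholder: empty mask)."""
    return np.zeros(shape, dtype=bool)

def face_slimming_pipeline(rgb, depth):
    landmarks = detect_landmarks(rgb)
    model = refine_with_depth(fit_standard_3d_model(landmarks), depth)
    brighten = region_mask(model, "nose_bridge", rgb.shape[:2])
    darken = region_mask(model, "outer_contour", rgb.shape[:2])
    out = rgb.astype(np.float32)
    out[brighten] *= 1.1   # lighten the first area
    out[darken] *= 0.9     # darken the second area
    return np.clip(out, 0, 255).astype(np.uint8)
```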
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes instructions stored in the internal memory 121 to execute various functional applications and data processing of the electronic device 100.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area may store an operating system and application programs required by at least one function (such as a sound playback function, an image playback function, etc.).
  • the storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100 and the like.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and so on.
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, and an application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and also used to convert analog audio input into digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
  • the speaker 170A, also called the "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also known as the "handset" or "earpiece", is used to convert audio electrical signals into sound signals.
  • the microphone 170C, also known as the "mic", is used to convert sound signals into electrical signals.
  • the user can input a sound signal into the microphone 170C by speaking with the mouth close to the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C. In addition to collecting sound signals, it may also implement a noise reduction function.
  • the electronic device 100 may also be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the headset interface 170D is used to connect wired headsets.
  • the earphone interface 170D may be a USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • the capacitive pressure sensor may include at least two parallel plates made of conductive material; when force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the electronic device 100 determines the strength of the pressure according to the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position based on the detection signal of the pressure sensor 180A.
  • In some embodiments, touch operations that act on the same touch position but have different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
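  • A minimal sketch of this threshold logic (Python; the threshold value and instruction names are illustrative assumptions):

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # illustrative normalized intensity

def short_message_icon_action(intensity: float) -> str:
    """Map the touch intensity on the short message application icon to an
    operation instruction, following the example above."""
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"        # lighter press: view the message
    return "create_new_short_message"      # firmer press: create a new message
```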
  • the gyro sensor 180B may be used to determine the movement posture of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 around three axes (ie, x, y, and z axes) may be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor 180B detects the jitter angle of the electronic device 100, calculates the distance that the lens module needs to compensate based on the angle, and allows the lens to counteract the jitter of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • In some embodiments, the electronic device 100 can detect the opening and closing of a flip holster or clamshell using the magnetic sensor 180D, and can then set characteristics such as automatic unlocking of the flip cover according to the detected opening and closing state.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally along three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. The sensor can also be used to recognize the posture of the electronic device, and can be applied to horizontal/vertical screen switching, pedometers, and other applications.
  • the distance sensor 180F is used to measure the distance.
  • the electronic device 100 can measure the distance by infrared or laser. In some embodiments, when shooting scenes, the electronic device 100 may use the distance sensor 180F to measure distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 100 emits infrared light outward through the light emitting diode.
  • the electronic device 100 uses a photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, the electronic device 100 may determine that there is an object near it; when insufficient reflected light is detected, it may determine that there is no object nearby.
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in a leather case mode and a pocket mode to automatically unlock and lock the screen.
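  • A minimal sketch of the proximity decision (Python; the reflected-light measure and threshold are illustrative assumptions):

```python
REFLECTION_THRESHOLD = 0.6  # illustrative value

def object_nearby(reflected_ir: float) -> bool:
    """Sufficient reflected infrared light -> an object is near the device."""
    return reflected_ir >= REFLECTION_THRESHOLD

def should_turn_off_screen(in_call: bool, reflected_ir: float) -> bool:
    # E.g. the user holds the phone close to the ear during a call.
    return in_call and object_nearby(reflected_ir)
```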
  • the ambient light sensor 180L is used to sense the brightness of ambient light.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, access to application lock, fingerprint photo taking, fingerprint answering call, and the like.
  • the temperature sensor 180J is used to detect the temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection.
  • In other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown of the electronic device 100 caused by low temperature.
  • In still other embodiments, when the temperature is lower than yet another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
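  • One possible reading of this temperature processing strategy as code (Python; all threshold values and action names are illustrative assumptions):

```python
HIGH_TEMP_C = 45.0       # all thresholds are illustrative, not specified here
LOW_TEMP_C = 0.0
VERY_LOW_TEMP_C = -10.0

def temperature_policy(temp_c: float) -> str:
    """Sketch of the temperature processing strategy described above."""
    if temp_c > HIGH_TEMP_C:
        return "reduce_nearby_processor_performance"  # thermal protection
    if temp_c < VERY_LOW_TEMP_C:
        return "boost_battery_output_voltage"         # avoid abnormal shutdown
    if temp_c < LOW_TEMP_C:
        return "heat_battery"                         # avoid abnormal shutdown
    return "normal_operation"
```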
  • the touch sensor 180K is also known as a "touch panel".
  • the touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 constitute a touch screen, also called a "touch screen”.
  • the touch sensor 180K is used to detect a touch operation acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100, which is different from the location where the display screen 194 is located.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone block of the human vocal part.
  • the bone conduction sensor 180M can also contact the pulse of the human body and receive a blood pressure beating signal.
  • the bone conduction sensor 180M may also be provided in the earphone and combined into a bone conduction earphone.
  • the audio module 170 may parse out the voice signal based on the vibration signal of the vibrating bone block of the voice part acquired by the bone conduction sensor 180M to realize the voice function.
  • the application processor may analyze the heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M to implement the heart rate detection function.
  • the key 190 includes a power-on key, a volume key, and the like.
  • the key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100.
  • the motor 191 may generate a vibration prompt.
  • the motor 191 can be used for vibration notification of incoming calls and can also be used for touch vibration feedback.
  • touch operations applied to different applications may correspond to different vibration feedback effects.
  • the motor 191 can also produce different vibration feedback effects for different application scenarios (for example: time reminders, receiving information, alarm clocks, games, etc.).
  • the touch vibration feedback effect can also be customized.
  • the indicator 192 may be an indicator light, which can be used to indicate the charging status and changes in battery level, and can also be used to indicate messages, missed calls, notifications, etc.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be inserted into or removed from the SIM card interface 195 to achieve contact and separation with the electronic device 100.
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, and so on.
  • the same SIM card interface 195 can insert multiple cards at the same time. The types of the multiple cards may be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 can also be compatible with external memory cards.
  • the electronic device 100 interacts with the network through a SIM card to realize functions such as call and data communication.
  • the electronic device 100 uses eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
  • the software system of the electronic device 100 may adopt a layered architecture, event-driven architecture, micro-core architecture, micro-service architecture, or cloud architecture.
  • the embodiment of the present invention takes the Android system with a layered architecture as an example to exemplarily explain the software structure of the electronic device 100.
  • FIG. 3 is a software block diagram of the electronic device 100 according to an embodiment of the present invention.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor.
  • the layers communicate with each other through a software interface.
  • the Android system is divided into four layers, from top to bottom are the application layer, the application framework layer, the Android runtime and the system library, and the kernel layer.
  • the application layer may include a series of application packages.
  • the application layer may include application packages such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message.
  • the application framework layer provides an application programming interface (API) and a programming framework for the applications at the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and so on.
  • the window manager is used to manage window programs.
• the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and so on.
  • Content providers are used to store and retrieve data, and make these data accessible to applications.
  • the data may include videos, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
  • the view system includes visual controls, such as controls for displaying text and controls for displaying pictures.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface including an SMS notification icon may include a view to display text and a view to display pictures.
• the phone manager is used to provide the communication functions of the electronic device 100, for example, management of the call status (including connected, hung up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
• the notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction.
  • the notification manager is used to notify the completion of downloading, message reminders, etc.
• the notification manager can also present notifications in the status bar at the top of the system in the form of charts or scroll bar text (for example, notifications of applications running in the background), or notifications that appear on the screen in the form of a dialog window.
• for example: text information is prompted in the status bar, a prompt tone sounds, the electronic device vibrates, and the indicator light flashes.
  • Android Runtime includes core library and virtual machine. Android runtime is responsible for the scheduling and management of the Android system.
• the core library contains two parts: one part consists of the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in the virtual machine.
• the virtual machine is used to convert the Java files of the application layer and the application framework layer into binary files and execute them.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library may include multiple functional modules. For example: surface manager (surface manager), media library (Media library), 3D graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
• the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, etc.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to realize 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least the display driver, camera driver, audio driver, and sensor driver.
  • the following describes the workflow of the software and hardware of the electronic device 100 in combination with capturing a photographing scene.
• when a touch operation is received, the corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into original input events (including touch coordinates, time stamps and other information of touch operations).
  • the original input event is stored in the kernel layer.
• the application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Take the touch operation being a click operation, and the control corresponding to the click operation being the camera application icon, as an example.
• the camera application calls the interface of the application framework layer to start the camera application, and then starts the camera driver by calling the kernel layer.
• the camera 193 then captures a still image or video.
• the image processing method provided by the embodiments of the present application can be implemented in the electronic device 100 having the above hardware structure and software architecture.
• taking a mobile phone as the electronic device, the specific implementation of the image processing method provided by the embodiments of the present application will be described in detail below with reference to the drawings.
• FIG. 4 shows a schematic diagram of a mobile phone desktop 401.
  • the desktop 401 displays a status bar, time and weather controls, application icons (such as the camera application icon 402 ), and a dock bar.
  • the status bar may include the name of the operator (for example, China Mobile), time, WiFi icon, signal strength, current remaining power, and so on.
• when the mobile phone detects that the user clicks the icon 402 of the camera application in the desktop 401, the mobile phone starts the camera function and enters the main interface of the camera application, which may be referred to as the photographing interface of the mobile phone. It should be noted that the user can also enter the photographing interface of the camera in other ways, for example, by clicking a shortcut icon corresponding to the camera application in the lock screen interface, which is not limited in this embodiment of the present application.
• FIG. 4 also shows a schematic diagram of a photographing interface 403 of the mobile phone.
• the photographing interface 403 may include a viewfinder frame 404, a control 405 for instructing to enter the photographing mode, a control 406 for instructing to enter the recording mode, a control 407 for instructing to enter the portrait mode, a control 408 for instructing to switch between the front and rear cameras, an HDR (High-Dynamic Range) control, and so on.
• the viewfinder frame 404 can be used to preview, in real time, the images collected by the camera of the mobile phone.
  • the mobile phone enters the camera mode by default (or the mode it was in when the camera application was last exited).
• when it is detected that the user clicks the control 406 for instructing to enter the recording mode, the mobile phone enters the recording mode.
• when it is detected that the user clicks the control 407 for instructing to enter the portrait mode, the mobile phone enters the portrait mode and displays the portrait mode photographing interface 414 shown in (3) in FIG. 4.
• portrait mode means that when the front or the rear of the mobile phone has two (or more) cameras, a background-blurring function similar to that of an SLR camera can be provided. In portrait mode, the focused subject shot by the camera is made more prominent while the background image is blurred, bringing a more beautiful effect. This is because one high-pixel camera can be responsible for shooting the person/object subject, while the other camera can be responsible for blurring, thus combining into a photo or video with a blurred background.
  • mobile phones have a front camera and a rear camera.
• the front camera is located on the front of the mobile phone and collects images facing the front of the phone, which can be used for selfies, face unlocking, and the like.
  • the rear camera is located on the back of the phone.
  • the camera collects the image on the back of the phone, which can be used to shoot people, food, scenery, etc.
• when the mobile phone uses the front camera to collect images and it is detected that the user clicks the control 408 for instructing to switch between the front and rear cameras, the mobile phone switches to the rear camera to collect images.
• when the mobile phone uses the rear camera to collect images and it is detected that the user clicks the control 408 for instructing to switch between the front and rear cameras, the mobile phone switches to the front camera to collect images.
• in the photographing mode or portrait mode, when the mobile phone detects that the user clicks the shooting control 409, the mobile phone performs a photographing operation; in the recording mode, when the mobile phone detects that the user clicks the shooting control 409, the mobile phone performs a video shooting operation.
  • the portrait mode photographing interface 414 shown in (3) in FIG. 4 is displayed.
  • a light spot function control 415, a light effect function control 416, and a skin beauty function control 417 are displayed.
  • the light spot function is to blur the background (non-portrait part) of the image with different shapes of light spots, so that the portrait part of the image can be more prominent, and the picture is more beautiful.
  • the mobile phone displays options for different light spot effects, such as: circle, heart shape, straight line, etc., and the user can select different light spot effects.
  • the flare function is turned on, the mobile phone performs flare effect processing on the image collected in real time in the framing frame 404, and displays the processed image (the image with the flare effect) in the framing frame 404 in real time.
  • the light effect function is to add realistic light and shadow effects to the image, such as natural light, studio lighting, contour light, stage light, etc.
  • the mobile phone displays options for different light and shadow effects, and the user can select different light and shadow effects.
  • the light effect function is turned on, the mobile phone performs light and shadow effect processing on the image collected in real time in the framing frame 404, and displays the processed image (image with light and shadow effect) in the framing frame 404 in real time.
  • Skin beauty function is to process the face part of the image, including: smooth function, thin face function and skin color function.
  • the smooth function can smooth the skin of the face in the image.
  • Skin color function can adjust the skin color of the face in the image.
  • the thin face function can perform thin face processing on the human face in the image.
• the light spot function, the light effect function, and the skin beauty function can be used in a superimposed manner; that is to say, the user can turn on multiple of the light spot function, the light effect function, and the skin beauty function at the same time, and when the mobile phone processes the image, the processed image can have the effects of these multiple functions. For example, if the user turns on the light spot function and the skin beauty function, the mobile phone will not only blur the background in the collected image, but also perform skin beautification (including any one or more of smoothing, face thinning, and skin tone adjustment) on the person in the collected image. The processed image has both the background blur effect and the portrait-processed effect.
• for another example, if the user turns on the light spot function, the light effect function, and the skin beauty function at the same time, the mobile phone will perform light spot processing, light effect processing, and skin beautification processing on the collected image.
• the face in the processed image then has the background blur effect, the light and shadow effect, and the portrait-processed effect.
• when it is detected that the user clicks the skin beauty function control 417, the mobile phone displays a sub-menu of the skin beauty function, that is, the interface 418 shown in (4) in FIG. 4 is displayed. The interface 418 may include a smooth function control 419, a thin face function control 420, a skin color function control 421, and an intensity (or level) control 422.
• a control for closing the sub-menu can also be displayed in the interface, so that the user can quickly close the sub-menu of the skin beauty function; or a return control can be displayed, so that the user can quickly return to the portrait mode interface, where the light spot function and the light effect function can be set.
• the mobile phone may enable no function by default, enable one or several functions by default, or enable the function(s) selected by the user when the camera application was last exited.
• for example: by default, none of the smooth function, the thin face function, and the skin color function is turned on, and the user needs to manually turn on the required function.
• for another example: the thin face function is turned on by default, and the user can manually turn off the thin face function or turn on other functions.
• the user can set the intensity of the processing applied to the face in the image by sliding the slider in the intensity (or level) control 422. For example: sliding to the left reduces the intensity or level of the processing (smoothing, face thinning, skin tone adjustment, etc.), and sliding to the right increases it.
• the user may also set or input a numerical value (for example, 1 to 10, or 1 to 100, etc.) to set the processing intensity. For example: the larger the value, the greater the processing intensity; the smaller the value, the lower the processing intensity; when the value is zero, no image processing is performed.
• for example, in the interface 423 shown in (5) in FIG. 4, the thin face level is zero, that is, the face in the collected image is not subjected to face-thinning processing; the image in the viewfinder frame of the interface 423 is an image without face-thinning processing.
• when the thin face level is set to a non-zero value, the mobile phone performs face-thinning processing on the face in the collected image, as shown in the image in the viewfinder frame of the interface 425 shown in (6) in FIG. 4.
  • the embodiment of the present application does not limit the specific form of setting a control for processing image intensity.
  • the mobile phone can also cancel the display of the intensity control 422 in certain scenes to prevent the intensity control 422 from blocking part of the image in the viewfinder. This embodiment of the present application does not specifically limit this.
• in some scenarios, the mobile phone may cancel the display of the intensity control 422 to avoid blocking the image in the viewfinder frame. At this time, the mobile phone still saves the settings made before the intensity control 422 was cancelled.
• if the user needs to reset the intensity, the user can click the corresponding function control again, such as the thin face function control 420; the mobile phone then displays the intensity control 422 again, and the user can slide the intensity control 422 to set the strength or level of the face-thinning processing performed by the mobile phone on the image.
• similarly, after the mobile phone enables the smoothing function, the strength or level of the smoothing processing performed by the mobile phone on the image can be set.
• after the mobile phone turns on the skin color function, the intensity or level of the skin color processing performed by the mobile phone on the image can be set.
• the setting of the smoothing level and the skin tone level is similar to that of the thin face level, and will not be repeated here.
  • the smoothing function, face-lifting function and skin tone function in the skin beauty function can also be used in superposition.
  • the user can turn on multiple skin beautification functions at the same time.
  • the processed image can have the effects of the multiple skin beautification functions.
• for example, if the user enables the smooth function and the thin face function at the same time, the mobile phone will perform both smoothing and face-thinning processing on the collected image.
• the face in the processed image then has both the smooth effect and the thin face effect.
• for another example, if the user enables the smooth function, the thin face function, and the skin color function at the same time, the mobile phone will perform smoothing, face-thinning, and skin-tone adjustment processing on the collected image.
• the face in the processed image then has the smooth effect, the thin face effect, and the changed skin color.
• the following describes a method by which the mobile phone performs face-thinning processing on an image after the user enables the thin face function in the skin beauty function.
  • the mobile phone uses the first camera to collect RGB images and the second camera to collect depth images.
  • the mobile phone first performs face recognition based on the RGB image, determines the target face according to the recognized face, and then performs face thinning processing on the target face.
  • the number of target faces for thin face processing may be preset to one.
  • the mobile phone performs face recognition according to the RGB image collected by the first camera in real time.
• for the specific implementation of face recognition, reference may be made to the prior art, which will not be repeated here.
• the recognized human face may be marked, for example, by a face frame, and displayed in the viewfinder frame 404. Please refer to the interface 423 shown in (5) in FIG. 4; in this interface 423, the face frame 424 is used to mark the recognized face.
• if no face is currently recognized, the mobile phone may prompt the user by text or voice, etc., so that the user knows this or makes corresponding adjustments.
• for example, the mobile phone can display the text prompt "no face is currently recognized" through the prompt control 413.
• in some embodiments, after the mobile phone opens the camera application and enters the photographing mode or the video recording mode, it can perform face recognition according to the RGB image collected by the first camera.
  • the mobile phone may also start face recognition based on the RGB image collected by the first camera after entering the portrait mode, which is not limited in the embodiment of the present application.
• in some embodiments of the present application, when the mobile phone recognizes a human face in the RGB image, it can also determine whether the recognized face meets a preset condition; only when the preset condition is met is the face determined as the target face and subjected to face-thinning processing. For example, if the face in the RGB image is too large or too small, some face detail data may be missing, so that the mobile phone subsequently cannot obtain the face feature points or establish the 3D model of the face; as a result, face-thinning processing cannot be performed, or its effect is poor.
• therefore, the mobile phone can determine whether the size of the recognized face meets a preset condition. The preset condition may be, for example, that the area of the recognized face is greater than a first threshold (for example: 20% of the area of the viewfinder frame 404) and less than a second threshold (for example: 90% of the area of the viewfinder frame 404).
• the mobile phone can start to determine whether the recognized face meets the preset condition as soon as the face is recognized.
  • the mobile phone can also start to determine whether the recognized face meets the preset condition after the face-lifting function is turned on. This embodiment of the present application does not limit this.
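• as an illustration only, the following is a minimal Python sketch of the face-size precondition described above; the face box and viewfinder size are assumed to come from the face recognition step, and the 20%/90% ratios follow the example thresholds mentioned above.

```python
def face_size_ok(face_w, face_h, view_w, view_h,
                 min_ratio=0.20, max_ratio=0.90):
    """Return True if the face area lies between min_ratio and max_ratio
    of the viewfinder area (the example thresholds above)."""
    ratio = (face_w * face_h) / (view_w * view_h)
    return min_ratio < ratio < max_ratio

# a 600x800 face box in a 1080x1920 viewfinder gives a ratio of about 0.23
print(face_size_ok(600, 800, 1080, 1920))  # True
```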
• when the mobile phone recognizes that there is one face in the image and the face meets the preset condition, it may determine that face as the target face.
• when the mobile phone recognizes multiple faces in the image that all meet the preset conditions, the mobile phone can automatically determine one face, or a predetermined number of faces, as the target face(s). For example: according to the areas of the recognized faces, the face with the largest area, or a predetermined number of faces with larger areas, may be determined as the target face(s). For another example: according to the positions of the recognized faces, the face located in the middle of the image, or a predetermined number of faces located in or near the middle, may be determined as the target face(s).
• a face that, for example, has the largest area and is located in the middle may also be determined as the target face. Further, corresponding face-thinning processing is performed according to the determined target face. In some examples, the mobile phone may also mark the automatically determined target face or use a text prompt, etc., to inform the user that the mobile phone will subsequently perform face-thinning processing on that face.
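• as an illustration of the automatic selection just described, the following hedged Python sketch picks a target face by largest area or by closeness to the image center; the face boxes are assumed to be (x, y, w, h) tuples from the face recognition step.

```python
def pick_largest(faces):
    """Select the face box with the largest area as the target face."""
    return max(faces, key=lambda f: f[2] * f[3])

def pick_most_central(faces, view_w, view_h):
    """Select the face whose center is closest to the image center."""
    cx, cy = view_w / 2, view_h / 2
    def dist2(f):
        x, y, w, h = f
        return (x + w / 2 - cx) ** 2 + (y + h / 2 - cy) ** 2
    return min(faces, key=dist2)

faces = [(100, 200, 180, 220), (500, 300, 320, 380)]
print(pick_largest(faces))                  # the second, larger box
print(pick_most_central(faces, 1080, 1920))
```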
• for example, the interface 501 is displayed after the thin face function is turned on on the mobile phone, and a preview image is displayed in the viewfinder frame of the interface 501.
  • the mobile phone recognizes the human face 1 and the human face 2 in the preview image.
  • the mobile phone can automatically confirm that face 2 with the largest face area is the target face.
  • the mobile phone may also identify the determined target face, for example, the identifier 502 identifies the face 2 as the target face, to prompt the user that the face is the target face.
  • the mobile phone may also prompt the user to manually select the target face.
  • the mobile phone may display the text 504 in the form of a text prompt to prompt the user to manually select the target face.
  • the user can select the target face by clicking on the area where the target face is located or frame the area where the target face is located. After detecting the user's selection operation, the mobile phone determines the corresponding face as the target face.
  • the mobile phone may also pop up a list box, for example, a list box 506, prompting the user to select a target face.
  • the mobile phone may also pop up a list box, for example, a list box 508, prompting the user to select a target face. The user can select the target face in the list box or selection box. After detecting the user's selection operation, the mobile phone determines the corresponding face as the target face.
  • the mobile phone can also prompt the user through voice prompts, animations and other methods, which are not repeated here.
• the mobile phone may modify the contour line of the target face according to the RGB image collected by the first camera in real time, and display the resulting image in the viewfinder frame.
  • the user can preview the face-lifted image in real time through the viewfinder.
  • the outline of the target face in the view frame has been modified. In this way, before the shooting control 409 is clicked, the user can see the face-lift effect from the viewfinder.
• face feature points are detected; for example, the dlib open-source algorithm can be used to detect a specific number (for example: 68) of face feature points.
  • the contour line of the target face is determined from the detected face feature points, and the contour line is modified to make the contour of the target face smaller and achieve the effect of thinning the face.
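• for illustration only, the following Python sketch shows how the dlib open-source library mentioned above can detect the 68 feature points and extract the lower-face contour (points 0 to 16 in the 68-point scheme); the image path and the landmark model file path are assumptions, and this is not necessarily the exact pipeline of the embodiment.

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# hypothetical local path; the 68-point model file must be obtained separately
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
for rect in detector(gray):
    shape = predictor(gray, rect)
    # points 0..16 trace the lower-face contour (jawline); a contour-based
    # thin-face method would move these points inward
    jawline = [(shape.part(i).x, shape.part(i).y) for i in range(17)]
    print(jawline)
```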
• in other embodiments of the present application, the mobile phone may also perform face-thinning processing based on the RGB image collected in real time by the first camera and the depth image collected in real time by the second camera. That is, a 3D face model corresponding to the target face is constructed, a specific region of the face in the image is further determined according to the 3D face model, and the skin color of the specific region is subjected to light and dark processing according to the principles people follow when applying facial makeup, to achieve the face-thinning effect.
  • the face-lifted image is displayed in the viewfinder in real time.
  • the specific area of the face in the image displayed in the viewfinder frame is processed by shading.
  • the user can see the face-lift effect from the viewfinder.
  • the user can preview the face-lifted image in real time through the viewfinder.
  • the specific area may include area 1 and area 2.
  • the area 1 is an area that needs to brighten the skin color, such as the nose bridge area, the forehead area, and the like.
  • Area 2 is an area that needs to reduce the brightness of skin color, such as an outer contour area and a nose area.
  • the shading process of the skin color of the specific area may be, for example, fusing the original image of the area 1 with the highlight template to increase the transparency of the area 1 and brighten the skin color.
  • the original image of area 2 and the shadow template are fused to reduce the transparency of area 2 and the brightness of skin color.
  • the image after shading is the image after face reduction.
• the following description takes, as an example, the case where the region 2 includes the outer contour region and the region 1 includes the nose bridge region.
  • the interface 601 shown in (1) in FIG. 6 is a photographing interface when the thin face function is not enabled (or the thin face strength is zero).
  • An image 3 is displayed in the viewfinder of the interface 601, and the image 3 can be understood as the original image collected by the camera.
• when the mobile phone detects that the user slides the slider in the thin face level control and sets the thin face level to a non-zero value (for example: a thin face level of 5), the mobile phone enables the thin face function and displays the interface 602 shown in (2) in FIG. 6.
  • an image 4 is displayed in the framing frame of the interface 602, and the image 4 is an image processed by the face-lifting method provided by the present application.
• in the figure, the parts of the original image in which the skin color of the face is adjusted are marked with shading.
• the face shape (face contour) of the human face in image 4 is the same as that in image 3; that is, the face in image 4 has the same shape as the real person's face.
  • the skin color of some areas of the face in image 4 is different from that of image 3.
• the skin tone of area 2 in image 4 is darker than that in image 3, and the skin tone of area 1 is brighter than that in image 3. Therefore, from a visual point of view, the face in image 4 looks more three-dimensional than the face in image 3, which produces the thin-face effect. And because the true contour of the face is retained, the captured image is more realistic and natural.
  • the specific area may be the area 1 and the area 2 preset in the mobile phone. That is, the mobile phone automatically processes the preset area 1 and area 2.
  • the above-mentioned specific area may also be an area that the mobile phone automatically selects and determines according to the preset area 1 and area 2.
• for example, area 1 includes the forehead area and the nose bridge area. When the mobile phone detects that the brightness of the nose bridge area has reached a threshold, the nose bridge area can be considered sufficiently bright and needs no processing; therefore, the mobile phone may process only the forehead area in area 1, together with area 2.
  • the above specific area may also be an area set by the user.
• the user can select, through the option box 702, the area to be processed when the mobile phone performs face-thinning processing.
• in the interface 703 shown in (2) in FIG. 7, the user can, for example, frame-select or click the area of the face that needs to be processed.
  • the embodiment of the present application does not limit the manner in which the user sets a specific area.
• in some other embodiments, after the target face is determined and before the user clicks the shooting control 409, the mobile phone may adopt both the face-thinning method of adjusting the lightness and darkness of a specific area and the face-thinning method of modifying the contour of the face to process the image, and display the processed image in the viewfinder frame in real time.
• that is, in the image displayed in the viewfinder frame, the specific area of the human face has been light-and-dark processed, and the contour of the human face has also been modified.
  • the shooting control 409 before the shooting control 409 is clicked, the user can see the face-lift effect from the viewfinder. In other words, the user can preview the face-lifted image in real time through the viewfinder.
• in still other embodiments, the mobile phone may not perform face-thinning processing on the RGB image collected by the first camera in real time; that is, the image displayed in the viewfinder frame is an image without face-thinning processing.
• in this case, the user cannot preview the face-thinned image in real time through the viewfinder frame.
• that is, the contour of the target face in the viewfinder frame is not modified.
  • the user can also set the thin face processing level of the image by the mobile phone through the thin face level control, and the mobile phone performs different face thin face processing on the image according to different thin face levels set by the user.
• the thin face level may indicate the degree or intensity of the face thinning, and may also be understood as the degree of difference between the processed image and the original image.
• in the face-thinning method of adjusting the lightness and darkness of a specific area, the thin face level may reflect the adjustment intensity of the lightness and darkness of the skin color of the specific area in the human face.
• the higher the thin face level, the greater the intensity of adjusting the lightness and darkness of the skin tone in the specific area of the human face, that is, the greater the difference between the skin tone of the image after face-thinning processing and that of the original image. For example: the brighter the skin color of the nose bridge area or the forehead, and the darker the skin color of the nose wing area or the outer contour area.
• in the face-thinning method of modifying the contour of the face, the thin face level can reflect the intensity of the inward contraction of the contour lines of the human face.
• the higher the thin face level, the more the contour lines of the human face shrink inward, that is, the smaller the contour of the image after face-thinning processing compared with the original image.
• again taking, as an example, the case where the region 2 includes the outer contour region and the region 1 includes the nose bridge region:
  • the interface 601 shown in (1) in FIG. 6 is an interface displayed when the thin face function is not enabled on the mobile phone, that is, when the thin face level is set to zero.
  • the image 3 displayed in the framing frame of the interface 601 is an image that has not undergone face reduction processing, and can be understood as the original image collected by the camera.
  • the interface 602 shown in (2) in FIG. 6 is an interface when the thin face function is enabled for the mobile phone and the thin face level is a lower level (for example, the thin face level is set to 5).
  • the image 4 displayed in the viewfinder frame of the interface 602 is an image after the mobile phone performs face-lift processing, and the intensity of the mobile phone face-lift processing is low.
  • the interface 603 shown in (3) in FIG. 6 is an interface when the thin face level is set to a higher level (for example, the thin face level is set to 9) for the mobile phone.
• the image 5 displayed in the viewfinder frame of the interface 603 is an image after the mobile phone performs face-thinning processing, and the intensity of the face-thinning processing is relatively high.
• in the figure, the parts of the original image in which the skin color of the face is adjusted are marked with shading, and the difference from the original image is reflected by the density of the oblique lines in the shading.
• comparing image 3, image 4, and image 5, it can be seen that the face shapes (face contours) of the human faces in these three images are the same; in other words, the faces in image 4 and image 5 have the same shape as the real person's face. Only the skin tone and brightness of some areas of the human faces differ among these three images.
• when the user switches between the front and rear cameras, the image presented in the viewfinder frame may change.
• the operation of switching between the front and rear cameras may be, for example, the user clicking the control 408 for switching between the front and rear cameras, or switching by voice, which is not limited in this embodiment of the present application.
• if the switched-to camera also includes a second camera, the mobile phone can still collect depth images; the mobile phone can then use the face-thinning method of adjusting the lightness and darkness of specific areas of the human face to process the image collected in real time, and display the processed image in the viewfinder frame.
• after the mobile phone switches cameras, if the switched-to camera (for example: the rear camera) does not include a second camera, the mobile phone cannot collect depth images, and the face-thinning method of adjusting the lightness and darkness of specific areas of the human face cannot be used. That is, the mobile phone can only use the face-thinning method of modifying the contour lines of the human face to process the image collected in real time, and display the processed image in the viewfinder frame.
• during camera switching, the mobile phone may display corresponding prompt information. For example: the user is prompted that the switched-to camera has no second camera, and the face-thinning method of adjusting the lightness and darkness of a specific area of the human face cannot be used. For another example: the user is prompted that the switched-to camera has a second camera, and the face-thinning method of adjusting the lightness and darkness of a specific area of the human face can be used.
• in some other embodiments, the mobile phone may also automatically determine which face-thinning method to use according to whether the currently used camera includes a second camera.
• if the camera currently used by the mobile phone includes a second camera, the mobile phone can collect depth images; the mobile phone can then decide to use the face-thinning method of adjusting the lightness and darkness of a specific area of the face and/or the face-thinning method of modifying the contour lines of the face to process the images collected in real time, and display the processed images in the viewfinder frame.
• if, after the mobile phone switches cameras, the switched-to camera (for example: the rear camera) has no second camera, the mobile phone cannot collect depth images; the mobile phone can then decide to use the face-thinning method of modifying the contour lines of the human face to process the images collected in real time, and display the processed images in the viewfinder frame.
  • the mobile phone When the mobile phone detects the shooting instruction, the mobile phone performs a photographing operation, collects the current RGB image through the first camera, and collects the current depth image through the second camera, and performs face-lift processing on the RGB image and the depth image collected at this time.
  • the shooting instruction may be a shooting instruction operation detected by the mobile phone.
  • the shooting instruction operation of the user may be, for example, clicking the shooting control 409, or issuing a voice command for shooting, or pressing a volume key, and other preset operations.
  • the mobile phone may also automatically perform a photographing operation after a preset time period. In the embodiments of the present application, the manner in which the mobile phone triggers the photographing operation is not limited.
• in some embodiments, when the mobile phone starts the camera function, that is, when it enters the photographing mode, the video recording mode, or the portrait mode, it may turn on only the first camera and collect RGB images in real time through the first camera.
• the second camera is turned on only after the mobile phone enables the thin face function, and is used to collect depth images, which helps save the power of the mobile phone.
  • the mobile phone may also turn on the first camera and the second camera when entering the portrait mode, that is, the first camera and the second camera have been turned on before the mobile phone turns on the thin face function. This embodiment of the present application does not limit this.
• when performing the photographing operation, the mobile phone may use the face-thinning method of adjusting the lightness and darkness of a specific area of the human face to process the image, so as to visually enhance the stereoscopic effect of the human face in the image and achieve the face-thinning effect.
  • the mobile phone may use a thin face adjustment method that adjusts the shading of specific areas in the face, and a method of modifying the outline of the face to process the image to further enhance the thin face effect of the face in the image.
  • the face-lifting method adopted by the mobile phone when performing the photographing operation may be the same as or different from the face-lifting method adopted by the mobile phone during the preview.
• for example: during preview, the face-thinning method of modifying the contour of the face is adopted for the image collected in real time; when the mobile phone performs the photographing operation, it uses a combination of the two face-thinning methods on the currently collected image, that is, the face-thinning method of adjusting the lightness and darkness of a specific area of the face together with the face-thinning method of modifying the contour of the face.
• for another example: during preview, the face-thinning method of modifying the contour lines of the face is adopted for the image collected in real time; when the mobile phone performs the photographing operation, it adopts the face-thinning method of adjusting the lightness and darkness of a specific area of the human face on the currently collected image.
• after the mobile phone performs the face-thinning processing on the target face in the image, it can save the face-thinned image.
• the face-thinned image may be marked, or the face-thinned image may be stored in an album by default.
  • the interface 801 shown in (1) in FIG. 8 is a photo browsing interface.
  • the photo 802 is an image after the face-lift processing of the mobile phone.
  • the interface 803 shown in (2) in FIG. 8 is a browsing interface for the album.
  • the photo album 804 can be used to store the image after the face-lift processing of the mobile phone.
• the mobile phone can also save the image that has not been face-thinned, that is, the original image. In that case, two images are obtained after shooting: one is the image without face-thinning processing, and the other is the image with face-thinning processing.
  • This embodiment of the present application does not limit this.
  • the mobile phone can return to the photographing interface in portrait mode.
  • the image thumbnail 424 displays the thumbnail of the last image captured by the mobile phone.
  • the mobile phone continues to collect RGB images and depth images in real time.
  • the settings of the mobile phone still save the settings of the last shooting, including the settings of the spot function, the light effect function, and the skin beautification function.
  • the settings of the skin beautification function include the smoothing function, the thin face function, and the skin tone function. The user can directly use the settings from the previous shooting to shoot again, or change the corresponding settings through the corresponding controls again.
  • the embodiments of the present application are not limited.
• in a continuous (burst) shooting scenario, the face-thinning method provided in this application can also be used to perform face-thinning processing on the human faces contained in the images.
• the camera collects multiple images, and the mobile phone can perform face-thinning processing on the faces in each image to obtain multiple face-thinned images.
• in slow-motion and video-recording scenarios, the slow motion and the video are also composed of frames of images; the face contained in each frame can be thinned, and the thinned frames then compose the new slow motion and the new video.
• in other shooting scenarios, the face-thinning method provided in this application may also be used to perform face-thinning processing on the human face in the image.
  • the photographing interface 901 of the mobile phone includes a control instructing to enter the portrait mode.
  • the mobile phone enters the portrait mode, and the portrait mode photographing interface 902 shown in (2) in FIG. 9 is displayed.
  • the portrait mode photographing interface 902 may display a control 903 indicating that the thin face function is turned on.
  • the mobile phone detects that the user clicks on the control 903 indicating to enable the thin face function on the portrait mode photographing interface 902, the mobile phone enables the thin face function.
• FIG. 9 also shows a schematic diagram of another photographing interface 904 of the mobile phone.
  • the photographing interface 904 is displayed with a control 905 for turning on the thin face function (which may also be referred to as indicating to enter the thin face mode).
• when the phone is in the photographing mode or the video recording mode, after detecting that the user clicks the control 905 for enabling the thin face function, or after detecting the user's gesture of swiping left (or right) in the viewfinder frame, the phone enables the thin face function.
• the photographing interface 906 in the thin-face mode shown in (4) in FIG. 9 is then displayed.
• FIG. 10 shows a schematic diagram of another photographing interface 1001 of the mobile phone.
  • a photographing setting control 1002 is displayed on the photographing interface 1001.
• the mobile phone displays the photographing setting interface 1002 shown in (2) in FIG. 10.
  • the photograph setting interface 1002 includes a control 1003 that enables the thin face function.
  • the mobile phone starts the thin face function, that is, the photographing interface 1004 shown in (3) in FIG. 10 is displayed.
• FIG. 10 also shows a schematic diagram of another photographing setting interface 1005 of the mobile phone.
  • the photographing setting interface 1005 includes a control 1006 indicating to enter the portrait mode.
  • the mobile phone enters the portrait mode photographing mode, that is, the portrait mode photographing interface 1007 shown in (5) in FIG. 10 is displayed.
  • the portrait mode photographing interface 1007 a control 1008 for enabling the thin face function is displayed.
  • the mobile phone detects that the user clicks on the control 1008 for turning on the thin face function on the portrait mode photographing interface 1007, the mobile phone starts the thin face function.
• in some other embodiments, when the mobile phone detects a predefined gesture of the user, the mobile phone may enable the thin face function.
• the predefined gesture may be, for example, a gesture of sliding up in the viewfinder frame, etc.; the mobile phone then enters the portrait mode photographing interface 1102 shown in (2) in FIG. 11, or directly enables the thin face function and enters the photographing interface 1103 shown in (3) in FIG. 11.
• among them, the portrait mode photographing interface 1102 displays a control for enabling the thin face function, and the thin face function can be enabled through this control.
• when the mobile phone is in the photographing mode, the portrait mode, or the video recording mode, the first camera of the mobile phone can collect RGB images in real time and display the collected RGB images in the viewfinder frame.
  • the mobile phone can perform face recognition on the RGB image collected by the first camera, and when it is determined that there is a face in the collected RGB image, the user can be prompted to enable the thin face function.
  • the mobile phone may prompt the user to enable the thin face function through text or voice, and then the user may enable the thin face function through any of the above methods.
• the mobile phone may also display, in the current interface (the photographing interface in the photographing mode, the portrait mode, or the video mode), a control for turning on the thin face function; when it is detected that the user clicks this control, the mobile phone enables the thin face function.
• the displayed control for turning on the thin face function may be the control 1202 for turning on the thin face function in the photographing interface 1201 shown in (1) in FIG. 12, or the control 1204 for enabling the thin face function in the photographing interface 1203 shown in (2) in FIG. 12.
• it should be noted that the time stamps of the collected RGB image and depth image need to be synchronized; that is to say, the RGB image and the depth image should be collected at the same time or within a very short period of time. This is because, during shooting, the images collected by the first camera and the second camera change in real time; only an RGB image and a depth image collected at the same time, or within a very short period, can be considered to be taken of the same scene, and only constructing the 3D face model from an RGB image and a depth image with synchronized time stamps is accurate.
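• a minimal sketch of such time-stamp synchronization is given below, assuming each stream delivers (timestamp, frame) pairs; the 33 ms tolerance (about one frame period at 30 fps) is an illustrative assumption, not a value from the embodiment.

```python
def match_depth(rgb_ts_ms, depth_frames, tol_ms=33):
    """depth_frames: list of (timestamp_ms, frame) pairs.
    Return the depth frame closest in time to rgb_ts_ms, or None if
    the best match is farther away than tol_ms."""
    ts, frame = min(depth_frames, key=lambda d: abs(d[0] - rgb_ts_ms))
    return frame if abs(ts - rgb_ts_ms) <= tol_ms else None
```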
• the face feature points determined from the RGB image are input into a standard face 3D model, for example, a three-dimensional morphable model (3DMM), to obtain a rough face 3D model corresponding to the target face.
  • the standard 3D model of human face is a 3D model of a human face obtained by inputting a large number of images of human faces as a training set into a machine learning model for training.
  • the standard human face 3D model represents the average level of human faces.
  • the model contains multiple points (these points have three-dimensional coordinates) and lines formed between the points.
  • some bone points can be defined among these points in the standard face 3D model, and the points and lines corresponding to the specific area can be determined according to the bone points.
  • the points and lines where the nasal bones are located can be defined in the standard face 3D model in advance, and the points and lines corresponding to the nose bridge area and the alar wing area are defined according to the points and lines where the nose bones are located.
  • the points of the eyebrow arch, cheekbones, and chin can be defined in advance on a standard 3D model of the face.
• the inner contour line can be defined as the line segment extending vertically downward from the eyebrow peak of the eyebrow arch to the same horizontal position as the cheekbone.
• the mandibular line is the line segment from the end of the inner contour line to the same horizontal position as the chin depression.
  • the outer contour line is the outer contour line of the standard 3D model of the face.
• then, the area between the left inner contour line plus the left mandibular line and the left outer contour line, as well as the area between the right inner contour line plus the right mandibular line and the right outer contour line, can be defined as the outer contour area, and the corresponding points and lines are further determined.
• similarly, the points of the frontal eminence can be defined in the standard face 3D model in advance; further, the points and lines corresponding to the forehead area are defined according to the points of the frontal eminence.
• the face feature points are included among the points in the standard face 3D model. Therefore, inputting the face feature points determined from the RGB image of the target face into the standard face 3D model can be understood as deforming the standard face 3D model according to the determined face feature points, to obtain a rough face 3D model corresponding to the target face. For example: suppose it is determined from the face feature points of the RGB image that the distance between the corners of the two eyes of the face is 15 cm, while in the standard face 3D model this distance is 12 cm. The standard face 3D model is then deformed to obtain a face 3D model corresponding to the target face, in which the distance between the corners of the two eyes is 15 cm.
• since the face feature points are obtained from the RGB image, they contain only two-dimensional data, whereas each point in the standard face 3D model is three-dimensional. Therefore, the rough face 3D model corresponding to the target face needs to be further deformed according to the depth data in the depth image.
• the depth image is filtered, hole-filled, etc., and then fused with the previously obtained rough face 3D model corresponding to the target face, to obtain an accurate face 3D model corresponding to the target face.
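• purely as an illustration of the filtering and hole-filling step, the following Python/OpenCV sketch denoises a depth map and fills missing pixels; it assumes a single-channel uint16 depth map in which 0 marks missing measurements, and the subsequent fusion with the rough 3D model is device-specific and omitted.

```python
import numpy as np
import cv2

def preprocess_depth(depth):
    """depth: single-channel uint16 array, 0 = missing measurement.
    Returns a denoised, hole-filled 8-bit relative depth map."""
    smooth = cv2.medianBlur(depth, 5)            # suppress speckle noise
    mask = (smooth == 0).astype(np.uint8)        # holes left by the sensor
    # scale to 8 bits so OpenCV inpainting can fill the holes from neighbors
    scaled = cv2.convertScaleAbs(smooth, alpha=255.0 / max(int(smooth.max()), 1))
    return cv2.inpaint(scaled, mask, 3, cv2.INPAINT_TELEA)
```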
• in this way, specific areas of the target face, such as the nose bridge area, the nose wing area, the outer contour area, and the forehead area, can be determined according to the predefined bone points and areas; these specific areas can then be processed.
  • the specific area includes area 1 and area 2, wherein area 1 is an area that needs to brighten skin color, such as: nose bridge area, forehead area, etc.
  • Area 2 is an area that needs to reduce the brightness of skin color, such as an outer contour area and a nose area.
  • the brightness of the skin color can be understood as the brightness of the image.
• the color of an image is expressed by both brightness (luminance) and chroma (chrominance).
• chroma is the property of a color excluding brightness; it reflects the hue and saturation of the color, while brightness refers to how bright or dark the color is. Therefore, adjusting the lightness and darkness of the skin color of a specific area includes processing the brightness and/or the chroma.
  • the color of each pixel is obtained by superimposing the R value, G value, and B value.
• for example, a linear transformation may be performed on the pixels. Let the value of a pixel in the original image be f(i, j), where (i, j) represents the spatial position of the pixel; the value of the transformed pixel is then g(i, j) = a × f(i, j) + b.
• the coefficient a affects the contrast of the image, and the coefficient b affects the brightness of the image.
• in some embodiments of the present application, the lightness and darkness of the image can be adjusted by adjusting the a value.
• since the coefficient b (usually greater than 0) affects the brightness of the image, in other embodiments of the present application the b value can be increased to make the image brighter, or decreased to make the image darker.
  • the overall lightness and darkness of the image is a combined effect of the brightness and chroma of the image, so when adjusting the lightness and darkness of the image, it is not excluded to adjust other parameters of the image.
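• a minimal Python sketch of this linear transformation g(i, j) = a × f(i, j) + b is shown below; the example values of a and b are illustrative only.

```python
import numpy as np

def linear_adjust(img, a=1.0, b=0.0):
    """Apply g(i, j) = a * f(i, j) + b to every pixel;
    a mainly tunes contrast, b mainly tunes brightness."""
    out = a * img.astype(np.float32) + b
    return np.clip(out, 0, 255).astype(np.uint8)

# brighter = linear_adjust(img, b=30)   # raise b to brighten
# darker   = linear_adjust(img, a=0.8)  # lower a to darken
```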
• in the HSL color model, the color of each pixel is obtained by superimposing the hue (Hue, H) value, the saturation (Saturation, S) value, and the lightness (Lightness, L) value.
  • the L value is usually used to reflect the brightness of the image.
  • the brightness of the image can be adjusted by adjusting the size of the L value. The L value becomes larger and the image becomes brighter. The L value becomes smaller and the image becomes darker.
  • the overall lightness and darkness of the image is a combined effect of the brightness and chroma of the image, so when adjusting the lightness and darkness of the image, it is not excluded to adjust other parameters of the image.
• since the RGB color model and the HSL color model can be converted into each other, if an image uses the RGB color model, the image can be converted to the HSL color model, its brightness can be adjusted using the HSL brightness adjustment method, and the image can then be converted back to the RGB color model.
• in this way, the effect of adjusting the brightness of the RGB image can also be achieved, which is not limited in the embodiments of the present application.
• in the YUV color model, Y represents the luminance (Luma), U represents the chrominance, and V represents the chroma (color concentration).
  • the Y value is usually used to reflect the brightness of the image.
  • the brightness of the image can be adjusted by adjusting the Y value. The Y value becomes larger and the image becomes brighter. The Y value becomes smaller and the image becomes darker.
  • the overall lightness and darkness of the image is a combined effect of the brightness and chroma of the image, so when adjusting the lightness and darkness of the image, it is not excluded to adjust other parameters of the image.
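• the following hedged Python/OpenCV sketch illustrates both routes described above: converting to the HSL model (stored as HLS in OpenCV) and shifting the L channel, or converting to the YUV model and shifting the Y channel, then converting back; the delta values are illustrative.

```python
import numpy as np
import cv2

def adjust_lightness_hls(img_bgr, delta):
    """Convert to HLS, shift only the L channel, convert back to BGR."""
    hls = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HLS).astype(np.int16)
    hls[:, :, 1] = np.clip(hls[:, :, 1] + delta, 0, 255)  # L channel
    return cv2.cvtColor(hls.astype(np.uint8), cv2.COLOR_HLS2BGR)

def adjust_luma_yuv(img_bgr, delta):
    """Same idea in YUV space: shift only the Y (luma) channel."""
    yuv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YUV).astype(np.int16)
    yuv[:, :, 0] = np.clip(yuv[:, :, 0] + delta, 0, 255)  # Y channel
    return cv2.cvtColor(yuv.astype(np.uint8), cv2.COLOR_YUV2BGR)
```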
• when adjusting the lightness and darkness of the skin color of a specific area, image processing techniques such as filters or the alpha fusion (alpha blending) technique may be used.
  • the embodiments of the present application do not limit specific processing methods.
• a filter may include adjustment of chroma, brightness, hue, etc., and may also include superimposed textures. By adjusting the chroma and hue, a certain color system can be specifically adjusted to become darker, lighter, or shifted in hue, while the other colors remain unchanged.
• a filter can also be understood as a pixel-to-pixel mapping: the pixel value of the input image is mapped to the pixel value of the target pixel through a preset mapping table, so as to achieve a special effect. It should be understood that the color-related parameters in the filter can be set following the adjustment methods mentioned above.
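• this "preset mapping table" view of a filter can be sketched as a 256-entry look-up table; the gamma curve below is an arbitrary illustrative mapping, not the filter actually used by the embodiment.

```python
import numpy as np
import cv2

gamma = 0.8  # < 1 brightens mid-tones; an arbitrary illustrative curve
lut = np.clip(((np.arange(256) / 255.0) ** gamma) * 255.0,
              0, 255).astype(np.uint8)
# filtered = cv2.LUT(img_bgr, lut)  # maps every input pixel value through the table
```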
• for RGB images, the higher the R value, G value, and B value, the higher the brightness of the image.
  • the original image of the area 1 may be fused with the highlight template to increase the transparency of the area 1 and brighten the skin color, that is, increase the brightness value of each pixel in the area 1.
  • the original image of the area 2 is merged with the shadow template to reduce the transparency of the area 2 and the brightness of the skin color, that is, to reduce the brightness value of each pixel in the area 2.
• specifically, the color value (that is, the RGB value, including the R value, G value, and B value) of each pixel in the fused image can be obtained according to Formula 1, as follows:
• OutPutColor = RGBsrc × Ksrc + RGBdst × Kdst    (Formula 1)
• where OutPutColor is the color value of the fused image, RGBsrc is the color value of the original image (RGB image), Ksrc is the fusion coefficient of the original image, RGBdst is the color value of the highlight template or shadow template used, and Kdst is the fusion coefficient of the highlight template or shadow template.
  • Ksrc is inversely proportional to Kdst, and is related to the set face-lifting level.
• when the area 1 is processed, RGBdst is the color value of the highlight template used.
• the highlight template has larger color values, that is, larger brightness values, than the original image; therefore, after Formula 1 is applied, the color value of each pixel in the fused image becomes larger, the brightness value becomes larger, and the fused image becomes brighter.
• here Kdst is proportional to the set thin face level, and Ksrc is inversely related to the set thin face level; that is to say, the higher the set thin face level, the brighter the fused image.
• when the area 2 is processed, RGBdst is the color value of the shadow template used.
• the shadow template has smaller color values, that is, smaller brightness values, than the original image; therefore, after Formula 1 is applied, the color value of each pixel in the fused image becomes smaller, the brightness value becomes smaller, and the fused image becomes darker.
• here, too, Kdst is proportional to the set thin face level, and Ksrc is inversely related to it; that is to say, the higher the set thin face level, the darker the fused image.
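• the following Python sketch illustrates Formula 1 and its coupling to the thin face level; the choice Ksrc = 1 − Kdst is an assumption consistent with Ksrc and Kdst being inversely related, and the templates and region masks are assumed to be provided by the 3D-model step.

```python
import numpy as np

def fuse(original, template, level, max_level=10):
    """Formula 1: OutPutColor = RGBsrc * Ksrc + RGBdst * Kdst,
    with Kdst proportional to the thin face level and Ksrc = 1 - Kdst."""
    k_dst = level / max_level
    k_src = 1.0 - k_dst
    out = original.astype(np.float32) * k_src + template.astype(np.float32) * k_dst
    return np.clip(out, 0, 255).astype(np.uint8)

def shade_regions(img, highlight, shadow, mask1, mask2, level):
    """Brighten region 1 with the highlight template and darken region 2
    with the shadow template; mask1/mask2 are boolean region masks."""
    out = img.copy()
    out[mask1] = fuse(img, highlight, level)[mask1]  # e.g. nose bridge, forehead
    out[mask2] = fuse(img, shadow, level)[mask2]     # e.g. outer contour, nose wings
    return out
```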
• for example, after entering the portrait photographing mode, the mobile phone takes a picture with the thin face function turned off to obtain an image 6, which can be regarded as an original image without face-thinning processing. After that, the thin face function is turned on, the remaining settings of the mobile phone are unchanged, and the location and environment of the subject are not changed.
• the phone then takes another picture to obtain an image 7, which is an image processed by the face-thinning method provided by the embodiments of the present application.
  • Specifically, the above-mentioned alpha fusion technique may be used to process the original image collected during photographing.
  • The RGB values of area 1 in the image (for example, the nose bridge area and the forehead area) can be increased, raising the brightness value of area 1 and thereby brightening it.
  • The RGB values of area 2 in image 7 (for example, the nose area and the outer contour area) can be decreased, lowering the brightness value of area 2 and thereby darkening it.
  • Comparing image 7 with image 6 shows that: the brightness value of area 1 in image 7 is greater than the brightness value of area 1 in image 6; the brightness value of area 2 in image 7 is less than the brightness value of area 2 in image 6; and the brightness value of the remaining areas in image 7 is equal to (or differs little from) the brightness value of the corresponding areas in image 6. A sketch of the per-region RGB adjustment described above follows.
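This is a minimal Python sketch; the boolean masks for area 1 and area 2 are assumed to come from the face-detection step, and the gain values are hypothetical:

```python
import numpy as np

def adjust_region_brightness(image: np.ndarray, mask: np.ndarray,
                             gain: float) -> np.ndarray:
    """Scale the RGB values of the masked pixels by `gain`.

    gain > 1 raises the region's brightness (e.g., area 1: nose bridge
    and forehead); gain < 1 lowers it (e.g., area 2: nose and outer
    contour). Pixels outside the mask keep their original values.
    """
    out = image.astype(np.float32)
    out[mask] *= gain
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical usage, with masks produced by the face-detection step:
# image7 = adjust_region_brightness(image6, area1_mask, 1.15)  # brighten
# image7 = adjust_region_brightness(image7, area2_mask, 0.85)  # darken
```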
  • Note that the above-mentioned alpha fusion technique adjusts the brightness of a specific area by changing its RGB values. Because the perceived lightness or darkness of an image is a combined effect of its brightness and chroma, adjusting the brightness of the image may also change its chromaticity and similar attributes, but the amount of such change is small.
  • This can be verified by converting image 7 from RGB space to HSL space (or YUV space).
  • In HSL space (or YUV space), the brightness value of area 1 in image 7, i.e., the L value (or Y value), becomes larger, while the other parameter values of area 1 do not change or change only slightly.
  • The brightness value (L value or Y value) of area 2 becomes smaller, while the other parameter values of area 2 do not change or change only slightly.
  • The parameter values of the other areas do not change or change only slightly. This check is sketched below.
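This verification can be sketched by converting both images to YUV and comparing per-region channel means. The description allows either HSL or YUV; the sketch below uses the BT.601 RGB-to-YUV matrix as one concrete choice, and the masks and threshold are hypothetical:

```python
import numpy as np

def rgb_to_yuv(image: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 uint8 RGB image to float YUV (BT.601)."""
    m = np.array([[ 0.299,  0.587,  0.114],   # Y (brightness)
                  [-0.147, -0.289,  0.436],   # U (chrominance)
                  [ 0.615, -0.515, -0.100]])  # V (chrominance)
    return image.astype(np.float32) @ m.T

# For area 1, the Y mean of image 7 should exceed that of image 6,
# while the U and V means should be nearly unchanged:
# yuv6, yuv7 = rgb_to_yuv(image6), rgb_to_yuv(image7)
# assert yuv7[area1_mask, 0].mean() > yuv6[area1_mask, 0].mean()
# assert abs(yuv7[area1_mask, 1].mean() - yuv6[area1_mask, 1].mean()) < 2.0
```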
  • the above modules or units may be implemented by software, hardware, or a combination of both.
  • The software exists in the form of computer program instructions stored in the memory, and the processor may execute these program instructions to implement the above method flows.
  • The processor may include, but is not limited to, at least one of the following: a central processing unit (CPU), a microprocessor, a digital signal processor (DSP), a microcontroller unit (MCU), or an artificial intelligence processor.
  • The processor may be built into a system on chip (SoC) or an application-specific integrated circuit (ASIC), or it may be an independent semiconductor chip.
  • In addition to the core used to execute software instructions for calculation or processing, the processor may further include necessary hardware accelerators, such as a field-programmable gate array (FPGA), a programmable logic device (PLD), or a logic circuit that implements dedicated logic operations.
  • When implemented in hardware, the hardware may be any one or any combination of a CPU, a microprocessor, a DSP, an MCU, an artificial intelligence processor, an ASIC, an SoC, an FPGA, a PLD, a dedicated digital circuit, a hardware accelerator, or a non-integrated discrete device, which may run the necessary software, or perform the above method flows without depending on software.
  • In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • If implemented in the form of a software functional unit, the integrated unit may be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solutions of the embodiments of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that enable a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application.
  • The foregoing storage medium includes: flash memory, removable hard disk, read-only memory, random access memory, magnetic disk, optical disk, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to an image processing method and an electronic device, relating to the field of communication technology, and capable of visually improving the stereoscopic effect of a face by changing the lightness and darkness of a specific region of the face in an image, thereby achieving a face-slimming effect and improving the user experience. The method comprises the following steps: upon detecting a first operation, an electronic device starts a camera and displays a photographing interface; upon detecting a second operation, it enables a first function; it displays a first image in the viewfinder frame and determines a first area and a second area in the face of a target photographic subject; and upon detecting a third operation, it takes a photograph to generate a second image, in which the brightness value of the first area of the face in the second image is greater than the brightness value of the first area of the face in an original image, the brightness value of the second area of the face in the second image is less than the brightness value of the second area of the face in the original image, and the brightness value of a third area of the face in the second image and the brightness value of the third area of the face in the original image remain unchanged.
PCT/CN2019/122837 2018-12-17 2019-12-04 Procédé de traitement d'image et dispositif électronique WO2020125410A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811544123.6A CN111327814A (zh) 2018-12-17 2018-12-17 一种图像处理的方法及电子设备
CN201811544123.6 2018-12-17

Publications (1)

Publication Number Publication Date
WO2020125410A1 true WO2020125410A1 (fr) 2020-06-25

Family

ID=71100732

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/122837 WO2020125410A1 (fr) 2018-12-17 2019-12-04 Procédé de traitement d'image et dispositif électronique

Country Status (2)

Country Link
CN (1) CN111327814A (fr)
WO (1) WO2020125410A1 (fr)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112053389A (zh) * 2020-07-28 2020-12-08 北京迈格威科技有限公司 人像处理方法、装置、电子设备及可读存储介质
CN112492205A (zh) * 2020-11-30 2021-03-12 维沃移动通信(杭州)有限公司 图像预览方法、装置和电子设备
CN113421211A (zh) * 2021-06-18 2021-09-21 Oppo广东移动通信有限公司 光斑虚化的方法、终端设备及存储介质
CN113627328A (zh) * 2021-08-10 2021-11-09 安谋科技(中国)有限公司 电子设备及其图像识别方法、片上系统和介质
CN114625292A (zh) * 2020-11-27 2022-06-14 华为技术有限公司 图标设置方法和电子设备
CN114827442A (zh) * 2021-01-29 2022-07-29 华为技术有限公司 生成图像的方法和电子设备
CN115484393A (zh) * 2021-06-16 2022-12-16 荣耀终端有限公司 一种异常提示方法及电子设备
CN116052236A (zh) * 2022-08-04 2023-05-02 荣耀终端有限公司 人脸检测处理引擎、涉及人脸检测的拍摄方法及设备
CN116363538A (zh) * 2023-06-01 2023-06-30 贵州交投高新科技有限公司 一种基于无人机的桥梁检测方法及系统
CN117119291A (zh) * 2023-02-06 2023-11-24 荣耀终端有限公司 一种出图模式切换方法和电子设备

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111988511B (zh) * 2020-08-31 2021-08-27 展讯通信(上海)有限公司 可穿戴设备及其图像信号处理装置
CN112465910B (zh) * 2020-11-26 2021-12-28 成都新希望金融信息有限公司 一种目标拍摄距离获取方法、装置、存储介质及电子设备
CN113099107B (zh) * 2021-02-26 2022-06-03 无锡闻泰信息技术有限公司 基于终端的视频拍摄方法、装置、介质及计算机设备
CN113486714B (zh) * 2021-06-03 2022-09-02 荣耀终端有限公司 一种图像的处理方法及电子设备
CN113923372B (zh) * 2021-06-25 2022-09-13 荣耀终端有限公司 曝光调整方法及相关设备
CN113891009B (zh) * 2021-06-25 2022-09-30 荣耀终端有限公司 曝光调整方法及相关设备
CN113473013A (zh) * 2021-06-30 2021-10-01 展讯通信(天津)有限公司 图像美化效果的显示方法、装置和终端设备
CN113435445A (zh) * 2021-07-05 2021-09-24 深圳市鹰硕技术有限公司 图像过优化自动纠正方法以及装置
CN114429506B (zh) * 2022-01-28 2024-02-06 北京字跳网络技术有限公司 图像处理方法、装置、设备、存储介质和程序产品
CN116048323B (zh) * 2022-05-27 2023-11-24 荣耀终端有限公司 图像处理方法及电子设备
CN115640414B (zh) * 2022-08-10 2023-09-26 荣耀终端有限公司 图像的显示方法及电子设备
CN115767290B (zh) * 2022-09-28 2023-09-29 荣耀终端有限公司 图像处理方法和电子设备
CN115767258A (zh) * 2022-10-21 2023-03-07 北京达佳互联信息技术有限公司 一种拍摄过程中的对象调整方法、装置、电子设备及存储介质


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5335452B2 (ja) * 2009-01-27 2013-11-06 キヤノン株式会社 撮像装置、撮像装置の制御方法及びプログラム
CN103607537B (zh) * 2013-10-31 2017-10-27 北京智谷睿拓技术服务有限公司 相机的控制方法及相机
CN107730445B (zh) * 2017-10-31 2022-02-18 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质和电子设备
CN108846807B (zh) * 2018-05-23 2021-03-02 Oppo广东移动通信有限公司 光效处理方法、装置、终端及计算机可读存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090284621A1 (en) * 2008-05-15 2009-11-19 Samsung Electronics Co., Ltd. Digital camera personalization
CN104992402A (zh) * 2015-07-02 2015-10-21 广东欧珀移动通信有限公司 一种美颜处理方法及装置
CN106998423A (zh) * 2016-01-26 2017-08-01 宇龙计算机通信科技(深圳)有限公司 图像处理方法及装置
CN107038680A (zh) * 2017-03-14 2017-08-11 武汉斗鱼网络科技有限公司 自适应光照的美颜方法及系统
CN107592457A (zh) * 2017-09-08 2018-01-16 维沃移动通信有限公司 一种美颜方法和移动终端
CN108320266A (zh) * 2018-02-09 2018-07-24 北京小米移动软件有限公司 一种生成美颜图片的方法和装置

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112053389A (zh) * 2020-07-28 2020-12-08 北京迈格威科技有限公司 人像处理方法、装置、电子设备及可读存储介质
CN114625292A (zh) * 2020-11-27 2022-06-14 华为技术有限公司 图标设置方法和电子设备
CN112492205A (zh) * 2020-11-30 2021-03-12 维沃移动通信(杭州)有限公司 图像预览方法、装置和电子设备
CN112492205B (zh) * 2020-11-30 2023-05-09 维沃移动通信(杭州)有限公司 图像预览方法、装置和电子设备
CN114827442B (zh) * 2021-01-29 2023-07-11 华为技术有限公司 生成图像的方法和电子设备
CN114827442A (zh) * 2021-01-29 2022-07-29 华为技术有限公司 生成图像的方法和电子设备
CN115484393A (zh) * 2021-06-16 2022-12-16 荣耀终端有限公司 一种异常提示方法及电子设备
CN115484393B (zh) * 2021-06-16 2023-11-17 荣耀终端有限公司 一种异常提示方法及电子设备
CN113421211A (zh) * 2021-06-18 2021-09-21 Oppo广东移动通信有限公司 光斑虚化的方法、终端设备及存储介质
CN113421211B (zh) * 2021-06-18 2024-03-12 Oppo广东移动通信有限公司 光斑虚化的方法、终端设备及存储介质
CN113627328A (zh) * 2021-08-10 2021-11-09 安谋科技(中国)有限公司 电子设备及其图像识别方法、片上系统和介质
CN116052236A (zh) * 2022-08-04 2023-05-02 荣耀终端有限公司 人脸检测处理引擎、涉及人脸检测的拍摄方法及设备
CN117119291A (zh) * 2023-02-06 2023-11-24 荣耀终端有限公司 一种出图模式切换方法和电子设备
CN116363538A (zh) * 2023-06-01 2023-06-30 贵州交投高新科技有限公司 一种基于无人机的桥梁检测方法及系统

Also Published As

Publication number Publication date
CN111327814A (zh) 2020-06-23

Similar Documents

Publication Publication Date Title
WO2020125410A1 (fr) Procédé de traitement d'image et dispositif électronique
WO2021052232A1 (fr) Procédé et dispositif de photographie à intervalle de temps
US11223772B2 (en) Method for displaying image in photographing scenario and electronic device
CN112262563B (zh) 图像处理方法及电子设备
WO2021057277A1 (fr) Procédé de photographie dans une lumière sombre et dispositif électronique
EP4050883A1 (fr) Procédé de photographie et dispositif électronique
WO2020029306A1 (fr) Procédé de capture d'image et dispositif électronique
WO2022017261A1 (fr) Procédé de synthèse d'image et dispositif électronique
WO2022022731A1 (fr) Procédé de remplissage de lumière dans la photographie et appareil connexe
WO2023015991A1 (fr) Procédé de photographie, dispositif électronique, et support de stockage lisible par ordinateur
WO2022100685A1 (fr) Procédé de traitement de commande de dessin et dispositif associé
CN113973189B (zh) 显示内容的切换方法、装置、终端及存储介质
EP3873084A1 (fr) Procédé de photographie d'image à longue exposition et dispositif électronique
EP4361954A1 (fr) Procédé de reconstruction d'objet et dispositif associé
WO2023131070A1 (fr) Procédé de gestion de dispositif électronique, dispositif électronique et support de stockage lisible
CN113935898A (zh) 图像处理方法、系统、电子设备及计算机可读存储介质
WO2021057626A1 (fr) Procédé de traitement d'image, appareil, dispositif et support de stockage informatique
US20230162529A1 (en) Eye bag detection method and apparatus
CN112449101A (zh) 一种拍摄方法及电子设备
WO2022033344A1 (fr) Procédé de stabilisation vidéo, dispositif de terminal et support de stockage lisible par ordinateur
WO2021204103A1 (fr) Procédé de prévisualisation d'images, dispositif électronique et support de stockage
CN114115617A (zh) 一种应用于电子设备的显示方法及电子设备
WO2024114257A1 (fr) Procédé de génération d'effet dynamique de transition et dispositif électronique
WO2024078275A1 (fr) Appareil et procédé de traitement d'image, dispositif électronique et support de stockage
WO2023036084A1 (fr) Procédé de traitement d'image et appareil associé

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19897950

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19897950

Country of ref document: EP

Kind code of ref document: A1