WO2020102978A1 - Image processing method and electronic device

Image processing method and electronic device

Info

Publication number
WO2020102978A1
Authority
WO
WIPO (PCT)
Prior art keywords
light effect
electronic device
light
picture
effect template
Prior art date
Application number
PCT/CN2018/116443
Other languages
English (en)
Chinese (zh)
Inventor
王习之
刘昆
李阳
吴磊
杜成
王强
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to PCT/CN2018/116443 priority Critical patent/WO2020102978A1/fr
Priority to CN201880094372.1A priority patent/CN112262563B/zh
Publication of WO2020102978A1 publication Critical patent/WO2020102978A1/fr

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules

Definitions

  • the present application relates to the field of image processing, in particular to an image processing method and electronic equipment.
  • the phone can provide multiple shooting modes: portrait shooting mode, large aperture shooting mode, night scene shooting mode, etc.
  • In portrait shooting mode, the mobile phone can provide a variety of light effect templates.
  • Different light effect templates represent (or correspond to) different light effect parameters, such as light source position, layer fusion parameter, texture pattern projection position, projection direction, etc.
  • the user can select different light effect templates to make the photos taken show different effects.
  • users often need multiple attempts to find a suitable light effect template, which makes operation cumbersome and reduces the efficiency with which the mobile phone is used.
  • the embodiments of the present application provide an image processing method and an electronic device, which can enable a user to quickly select a suitable light effect template and reduce user operations.
  • In a first aspect, an embodiment of the present application provides a photographing method, including: an electronic device turns on a camera to collect an image of a photographed object; the electronic device displays a first user interface, where the first user interface includes a first display area, a shooting mode list, and a light effect template option bar; the shooting mode list includes options of one or more shooting modes, the one or more shooting modes include a first shooting mode, the first shooting mode has been selected, and the first shooting mode is a shooting mode that highlights the people included in the captured picture; the light effect template option bar includes options of two or more light effect templates, and each light effect template includes one or more light effect parameters used to process pictures taken in the first shooting mode; the electronic device displays the image collected by the camera in the first display area; the electronic device highlights, in the light effect template option bar, the option of the light effect template matching the shooting scene, where the shooting scene is the shooting scene corresponding to the image displayed in the first display area.
  • the above-mentioned first display area may be referred to as a framing frame.
  • the above-mentioned first shooting mode may be referred to as a portrait shooting mode.
  • the above light effect template includes one or more of the following light effect parameters: the fusion parameters of the diffuse reflection layer, the highlight layer, and the shadow layer; the fusion parameters of the background part of the RGB image and the projected texture layer of the background in the overall light effect rendering; the color (pixel value) of the projected texture; the stretch value of the projected texture; the projection position of the texture pattern; the projection direction; and the fusion parameters of the projected texture layer of the portrait, the face light effect rendering result, and the face part of the RGB image.
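  • For illustration only, the following is a minimal sketch, with hypothetical field names and values that are not taken from this application, of how such a set of light effect parameters could be grouped into a single template object:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class LightEffectTemplate:
    # fusion weights for the diffuse reflection, highlight and shadow layers
    diffuse_weight: float = 1.0
    highlight_weight: float = 0.6
    shadow_weight: float = 0.4
    # fusion weight between the RGB background and the projected texture layer
    background_texture_blend: float = 0.5
    # appearance of the projected texture
    texture_color: Tuple[int, int, int] = (255, 240, 200)       # pixel value
    texture_stretch: float = 1.0
    texture_position: Tuple[float, float] = (0.5, 0.3)          # normalized image coords
    projection_direction: Tuple[float, float, float] = (0.0, -0.5, -1.0)
    # fusion weight between the face light effect rendering result and the RGB face region
    face_render_blend: float = 0.7
    # optional projected texture pattern (None means no texture projection)
    texture_pattern: Optional[str] = None
```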
  • the electronic device can intelligently identify the current shooting scene when taking a photograph in the first shooting mode, and recommend a light effect template matching the current shooting scene to the user according to the shooting scene, which enables the user to quickly choose a suitable light effect template, reduces user operations, and improves the use efficiency of the electronic device.
  • the first user interface further includes a shooting control and a first control; after the electronic device highlights the option of the light effect template matching the shooting scene in the light effect template option bar, the method further includes: after detecting a user operation acting on the shooting control, the electronic device processes the captured picture using the light effect parameters corresponding to the selected light effect template to generate a first picture, and displays a thumbnail of the first picture in the first control; wherein the thumbnail of the first picture contains fewer pixels than the first picture.
  • the selected light effect template is the light effect template matching the shooting scene.
  • the user may select a light effect template recommended by the electronic device that matches the shooting scene, and adopting the light effect template to process the picture may make the shooting effect of the obtained picture better.
  • the above processing of the captured picture using the light effect parameters corresponding to the selected light effect template to generate the first picture includes: the electronic device processes the captured picture using the light effect parameters corresponding to the selected light effect template, the illumination direction, and the depth data to generate the first picture; wherein the illumination direction is the illumination direction identified from the picture displayed in the first display area, and the depth data is the depth data of the photographed object.
  • the technical solution provided by the embodiment of the present application can process the captured picture according to the actual illumination direction in the shooting scene, so that the light effect applied later does not conflict with the original lighting of the picture; rendering the shadows caused by occlusion, especially the shadows cast around the eye sockets and the nose, greatly enhances the three-dimensional sense of the face.
  • the method further includes: processing the portrait part and the background part separately according to the light effect parameters corresponding to the selected light effect template and the depth data; wherein the portrait part and the background part are obtained by segmenting the captured picture.
  • the technical solution provided by the embodiment of the present application can render the portrait part and the background part separately, so that the light effect varies with the relief of the portrait, increasing the realism and three-dimensional sense of the picture.
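  • As a rough, self-contained sketch of this idea (the helper is hypothetical and uses a toy Lambertian shading term derived from the depth map; it is not the renderer described in this application), the portrait part and the background part can be shaded separately and then recombined with the segmentation mask:

```python
import numpy as np

def apply_light_effect(rgb, depth, portrait_mask, light_dir,
                       portrait_gain=1.2, background_dim=0.6):
    """Toy separate rendering of portrait and background.

    rgb: HxWx3 uint8 image, depth: HxW depth map,
    portrait_mask: HxW bool (True = portrait), light_dir: length-3 vector."""
    # approximate surface normals from the depth gradient
    dzdy, dzdx = np.gradient(depth.astype(np.float32))
    normals = np.dstack([-dzdx, -dzdy, np.ones_like(depth, dtype=np.float32)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    light = np.asarray(light_dir, dtype=np.float32)
    light /= np.linalg.norm(light)
    diffuse = np.clip(normals @ light, 0.0, 1.0)[..., None]     # Lambert term per pixel
    # portrait: modulate brightness with the depth-derived diffuse term
    portrait = np.clip(rgb * (0.3 + portrait_gain * diffuse), 0, 255)
    # background: simply dimmed here; the real method uses its own light effect parameters
    background = rgb * background_dim
    return np.where(portrait_mask[..., None], portrait, background).astype(np.uint8)
```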
  • the highlighting, in the light effect template option bar, of the option of the light effect template matching the shooting scene includes one or more of the following: displaying the option of the light effect template matching the shooting scene at the first display position in the light effect template option bar; highlighting the option of the light effect template matching the shooting scene in the light effect template option bar; dynamically displaying the option of the light effect template matching the shooting scene in the light effect template option bar.
  • the embodiments of the present application provide a variety of ways to highlight the option of the light effect template matching the shooting scene, through which the user can find the light effect template suitable for the current shooting scene more quickly and intuitively, reducing user operations and improving use efficiency.
  • the method further includes: the electronic device detects a first user operation acting on the first control, and in response to the first user operation, the electronic device displays a second user interface for viewing the first picture.
  • the above-mentioned first user operation may be a click operation.
  • the technical solution provided by the embodiment of the present application may cause the electronic device to display a second user interface for viewing the first picture by clicking the first control.
  • the second user interface includes: a second display area and a second control; wherein the second display area is used to display the first picture; the method further includes: the electronic device detects a second user operation acting on the second control, and in response to the second user operation, the electronic device displays a second user interface for editing the first picture.
  • the above-mentioned second user operation may be a click operation.
  • the technical solution provided by the embodiment of the present application can cause the electronic device to display a second user interface for editing the first picture by clicking the second control, and the user can edit the light effect of the first picture.
  • the technical solution can improve the interaction between users and electronic devices.
  • the second user interface further includes: a light source indicator, where the light source indicator is used to indicate the illumination direction of the light source in the shooting scene; the method further includes: the electronic device detects a third user operation acting on the light source indicator, and in response to the third user operation, updates the illumination direction and re-executes the step in which the electronic device processes the captured picture using the light effect parameters corresponding to the selected light effect template, the illumination direction, and the depth data.
  • the third user operation may be a sliding operation.
  • the technical solution provided by the embodiment of the present application can change the illumination direction of the light source by sliding the light source indicator, so that the electronic device processes the captured picture according to the new illumination direction.
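  • A minimal sketch, assuming a hypothetical mapping (the function name and the convention that the indicator centre means frontal light are not from this application), of how a drag position on the light source indicator could be converted into a new illumination direction before re-rendering:

```python
import math

def indicator_to_light_direction(x_norm, y_norm, max_tilt_deg=60.0):
    """Map a drag position on the light source indicator (x_norm, y_norm in [0, 1],
    centre = light straight in front of the subject) to a unit illumination direction."""
    dx, dy = x_norm - 0.5, y_norm - 0.5            # offset from the indicator centre
    tilt = min(math.hypot(dx, dy) * 2.0, 1.0) * math.radians(max_tilt_deg)
    azimuth = math.atan2(dy, dx)
    return (math.sin(tilt) * math.cos(azimuth),
            math.sin(tilt) * math.sin(azimuth),
            -math.cos(tilt))                        # z axis points toward the subject
```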
  • the technical solution can improve the interaction between users and electronic devices.
  • the second user interface further includes: a light intensity indicator, where the light intensity indicator is used to indicate the light intensity of the light source; the method further includes: the electronic device detects a fourth user operation acting on the light intensity indicator, and in response to the fourth user operation, updates the light source intensity and processes the captured picture using the light effect parameters corresponding to the selected light effect template, the illumination direction, the light source intensity, and the depth data.
  • the above-mentioned fourth user operation may be a sliding operation for increasing or decreasing the light intensity.
  • the fourth user operation may be a user operation of sliding left or sliding right.
  • the fourth user operation may be a user operation of sliding up or sliding down.
  • the fourth user operation may be a click operation.
  • the technical solution provided by the embodiment of the present application can change the light intensity of the light source through the fourth user operation on the light intensity indicator, so that the electronic device processes the captured picture according to the new light intensity.
  • the technical solution can improve the interaction between users and electronic devices.
  • the second user interface further includes the light effect template option bar; the method further includes: the electronic device detects a fifth user operation acting on the light effect template option bar, and in response to the fifth user operation, updates the selected light effect template and re-executes the step in which the electronic device processes the captured picture using the light effect parameters corresponding to the selected light effect template, the illumination direction, and the depth data.
  • the fifth user operation may be a click operation on a light effect template option included in the light effect template option bar, so that the electronic device processes the captured picture according to the light effect parameters corresponding to the new light effect template.
  • the technical solution can improve the interaction between users and electronic devices.
  • In a second aspect, an embodiment of the present application provides an electronic device, including: one or more processors, a memory, one or more cameras, and a touch screen; the memory, the one or more cameras, and the touch screen are coupled to the one or more processors; the memory is used to store computer program code, the computer program code includes computer instructions, and the one or more processors invoke the computer instructions to execute: turning on the camera to collect an image of the photographed object, and displaying a first user interface; wherein the first user interface includes: a first display area, a shooting mode list, and a light effect template option bar; the shooting mode list includes options of one or more shooting modes, the one or more shooting modes include a first shooting mode, and the first shooting mode has been selected.
  • the first shooting mode is a shooting mode that highlights the people included in the captured picture.
  • the light effect template option bar includes options of two or more light effect templates; the light effect template includes one or more light effect parameters used to process the pictures taken in the first shooting mode; the image collected by the camera is displayed in the first display area; the option of the light effect template matching the shooting scene is highlighted in the light effect template option bar; wherein the shooting scene is the shooting scene corresponding to the image displayed in the first display area.
  • the first user interface further includes a shooting control and a first control; after the processor highlights the option of the light effect template matching the shooting scene in the light effect template option bar, the processor further executes: after detecting a user operation acting on the shooting control, processing the captured picture using the light effect parameters corresponding to the selected light effect template to generate a first picture, and displaying a thumbnail of the first picture in the first control; wherein the thumbnail of the first picture contains fewer pixels than the first picture.
  • the selected light effect template is the light effect template matching the shooting scene.
  • when the processor processes the captured picture using the light effect parameters corresponding to the selected light effect template to generate the first picture, the processor specifically executes: processing the captured picture using the light effect parameters corresponding to the selected light effect template, the illumination direction, and the depth data to generate the first picture; wherein the illumination direction is the illumination direction identified from the picture displayed in the first display area, and the depth data is the depth data of the photographed object.
  • the processor further executes: processing the portrait part and the background part separately according to the light effect parameters corresponding to the selected light effect template and the depth data; wherein the portrait part and the background part are obtained by segmenting the captured picture.
  • the highlighting, in the light effect template option bar, of the option of the light effect template matching the shooting scene includes one or more of the following: displaying the option of the light effect template matching the shooting scene at the first display position in the light effect template option bar; highlighting the option of the light effect template matching the shooting scene in the light effect template option bar; dynamically displaying the option of the light effect template matching the shooting scene in the light effect template option bar.
  • the processor further executes: detecting a first user operation acting on the first control, and in response to the first user operation, displaying a second user interface for viewing the first picture.
  • the second user interface includes: a second display area and a second control; wherein the second display area is used to display the first picture; the processor further executes: detecting a second user operation acting on the second control, and in response to the second user operation, displaying a second user interface for editing the first picture.
  • the second user interface further includes: a light source indicator, where the light source indicator is used to indicate the illumination direction of the light source in the shooting scene; the processor further executes: detecting a third user operation acting on the light source indicator, and in response to the third user operation, updating the illumination direction and re-executing the step of processing the captured picture using the light effect parameters corresponding to the selected light effect template, the illumination direction, and the depth data.
  • the second user interface further includes: a light intensity indicator, where the light intensity indicator is used to indicate the light intensity of the light source; the processor further executes: detecting a fourth user operation acting on the light intensity indicator, and in response to the fourth user operation, updating the light source intensity and processing the captured picture using the light effect parameters corresponding to the selected light effect template, the illumination direction, the light source intensity, and the depth data.
  • the second user interface further includes the light effect template option bar; the processor further executes: detecting a fifth user operation acting on the light effect template option bar, and in response to the fifth user operation, updating the selected light effect template and re-executing the step of processing the captured picture using the light effect parameters corresponding to the selected light effect template, the illumination direction, and the depth data.
  • In a third aspect, an embodiment of the present application provides a graphical user interface on an electronic device.
  • the electronic device has a touch screen, a camera, a memory, and a processor to execute a program stored in the memory.
  • the graphical user interface includes a first user interface, and the first user interface includes: a first display area, a shooting mode list, and a light effect template option bar; the shooting mode list includes options of one or more shooting modes, and the one or more shooting modes include a first shooting mode.
  • the first shooting mode has been selected.
  • the first shooting mode is a shooting mode that highlights the people included in the captured picture.
  • the light effect template option bar includes options of two or more light effect templates.
  • the light effect template includes one or more light effect parameters used to process pictures taken in the first shooting mode; wherein: the image collected by the camera is displayed in the first display area; the option of the light effect template matching the shooting scene is highlighted in the light effect template option bar; and the shooting scene is the shooting scene corresponding to the image displayed in the first display area.
  • the highlighting, in the light effect template option bar, of the option of the light effect template matching the shooting scene includes one or more of the following: displaying the option of the light effect template matching the shooting scene at the first display position in the light effect template option bar; highlighting the option of the light effect template matching the shooting scene in the light effect template option bar; dynamically displaying the option of the light effect template matching the shooting scene in the light effect template option bar.
  • the first user interface further includes a shooting control and a first control; wherein: in response to a detected user operation acting on the shooting control, a thumbnail of the first picture is displayed within the first control, where the first picture is the captured picture and the thumbnail of the first picture contains fewer pixels than the first picture; and in response to a detected user operation acting on the first control, a second user interface for viewing the first picture is displayed.
  • the second user interface includes: a second display area and a second control; wherein the second display area is used to display the first picture; and in response to a detected user operation acting on the second control, a second user interface for editing the first picture is displayed.
  • the second user interface further includes: a light source indicator, a light intensity indicator, and the light effect template option bar; wherein the light source indicator is used to indicate the illumination direction of the light source in the shooting scene, and the light intensity indicator is used to indicate the light intensity of the light source; in response to a detected user operation acting on the light source indicator, the display position of the light source indicator and the picture displayed in the second display area are updated; in response to a detected user operation acting on the light intensity indicator, the display of the light intensity indicator and the picture displayed in the second display area are updated; and in response to a detected user operation acting on the light effect template option bar, the display of the light effect template option bar and the picture displayed in the second display area are updated.
  • In a fourth aspect, an embodiment of the present application provides a computer storage medium, including computer instructions, which, when run on an electronic device, cause the electronic device to perform the photographing method provided by the first aspect or any implementation of the first aspect of the embodiments of the present application.
  • In a fifth aspect, an embodiment of the present application provides a computer program product which, when run on an electronic device, causes the electronic device to perform the photographing method provided by the first aspect or any implementation of the first aspect of the embodiments of the present application.
  • the electronic device provided in the second aspect, the computer storage medium provided in the fourth aspect, and the computer program product provided in the fifth aspect are all used to execute the photographing method provided in the first aspect.
  • for the beneficial effects that can be achieved, refer to the beneficial effects of the photographing method provided in the first aspect; they are not repeated here.
  • FIG. 1A is a schematic structural diagram of an electronic device provided by an embodiment of this application.
  • FIG. 1B is a schematic structural diagram of a 3D sensing module provided by an embodiment of this application.
  • FIG. 1C is a block diagram of a software structure of an electronic device provided by an embodiment of this application.
  • FIG. 2 is a schematic diagram of a user interface provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of another user interface involved in an embodiment of the present application.
  • FIGS. 4 to 5 are schematic diagrams of an embodiment of a user interface provided by embodiments of the present application.
  • FIG. 6 is a schematic diagram of another embodiment of a user interface provided by an embodiment of the present application.
  • FIGS. 7-8 are schematic diagrams of another embodiment of a user interface provided by embodiments of the present application.
  • FIG. 9 is a schematic diagram of another user interface provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of another user interface provided by an embodiment of this application.
  • FIG. 11 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 12 is a schematic flowchart of a method for rendering a face light effect provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a result of portrait segmentation provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of the facial feature segmentation result provided by an embodiment of the present application.
  • FIG. 15 is a schematic flowchart of an overall light effect rendering method provided by an embodiment of this application.
  • FIGS. 16 to 22 are schematic diagrams of the flow of hardware driver interaction within the electronic device.
  • FIG. 24 is a schematic diagram of a flow of hardware driver interaction within an electronic device.
  • An embodiment of the present application provides an image processing method, which can be applied to an electronic device to process a picture taken by a camera application.
  • when the first shooting mode is turned on, the electronic device can recommend a suitable light effect template to the user according to the shooting scene, to reduce user operations and improve the use efficiency of the mobile phone. Further, the electronic device can also combine the depth data to perform light effect rendering on the picture taken by the camera application, so as to enhance the stereoscopic effect of the picture.
  • the electronic devices involved in the embodiments of the present application may be mobile phones, tablet computers, desktop computers, laptop computers, notebook computers, ultra-mobile personal computers (UMPC), handheld computers, netbooks, personal digital assistants (PDA), wearable electronic devices, virtual reality devices, etc.
  • First shooting mode: the shooting mode set when the subject is a person, to highlight the person and enhance the beauty of the person in the captured picture.
  • the electronic device can use a larger aperture to keep the depth of field shallower to highlight the person, and improve the color effect through a specific algorithm to optimize the person's skin color.
  • the electronic device can also turn on the flash to perform illumination compensation.
  • Electronic devices can provide a variety of shooting modes.
  • the shooting parameters such as aperture size, shutter speed, and sensitivity (ISO) differ between shooting modes, and the processing algorithms applied to the pictures taken also differ.
  • the first shooting mode may be referred to as a portrait shooting mode. This application does not limit the naming of the first shooting mode.
  • Light effect template: a collection of multiple light effect parameters that can be used to process the pictures taken by the user in the first shooting mode.
  • the light effect parameter set may include one or more of the following parameters: the fusion parameters of the diffuse reflection layer, the highlight layer, and the shadow layer; the fusion parameters of the background part of the RGB image and the projected texture layer of the background in the overall light effect rendering; the color (pixel value) of the projected texture; the stretch value of the projected texture; the projection position of the texture pattern; the projection direction; and the fusion parameters of the projected texture layer of the portrait, the face light effect rendering result, and the face part of the RGB image.
  • the parameters listed above are only exemplary descriptions.
  • the set of light effect parameters may further include other parameters, which is not limited in the embodiments of the present application.
  • the electronic device may provide two or more light effect templates in the first shooting mode, and different light effect templates correspond to different sets of light effect parameters. Using different light effect templates to process pictures, electronic devices can obtain pictures with different effects.
  • the light effect template may be a template such as soft light, theater light, church light, tree shadow light, window shadow light, and dual color light.
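  • For illustration only, the named templates above could be organized as a lookup table from template name to parameter set; the parameter values below are invented for the sketch and are not disclosed by this application:

```python
# hypothetical parameter sets keyed by the template names listed above
LIGHT_EFFECT_TEMPLATES = {
    "soft light":          {"highlight_weight": 0.3, "shadow_weight": 0.2, "texture_pattern": None},
    "theater light":       {"highlight_weight": 0.8, "shadow_weight": 0.6, "texture_pattern": "spotlight"},
    "church light":        {"highlight_weight": 0.6, "shadow_weight": 0.5, "texture_pattern": "stained_glass"},
    "tree shadow light":   {"highlight_weight": 0.5, "shadow_weight": 0.7, "texture_pattern": "leaves"},
    "window shadow light": {"highlight_weight": 0.5, "shadow_weight": 0.7, "texture_pattern": "window_blinds"},
    "dual color light":    {"highlight_weight": 0.7, "shadow_weight": 0.4, "texture_pattern": "two_tone"},
}

def get_light_effect_parameters(template_name):
    """Return the light effect parameter set for the selected template option."""
    return LIGHT_EFFECT_TEMPLATES[template_name]
```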
  • Light effect rendering: a method of processing pictures that can make the pictures show a three-dimensional effect.
  • the light effect rendering in the embodiments of the present application may include light effect rendering on the human face, or may include both light effect rendering on the human face and overall light effect rendering. The detailed process of light effect rendering is described in the subsequent embodiments.
  • FIG. 1A shows a schematic structural diagram of the electronic device 10.
  • the electronic device 10 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, a 3D sensing module 196, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 10.
  • the electronic device 10 may include more or fewer components than shown, or combine some components, or split some components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 10.
  • the controller can generate the operation control signal according to the instruction operation code and the timing signal to complete the control of fetching instructions and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory may store instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory, avoiding repeated access and reducing the waiting time of the processor 110, thereby improving system efficiency.
  • the processor 110 may include one or more interfaces.
  • the interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationship between the modules illustrated in the embodiments of the present invention is only a schematic description, and does not constitute a limitation on the structure of the electronic device 10.
  • the electronic device 10 may also use different interface connection methods in the foregoing embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive wireless charging input through the wireless charging coil of the electronic device 10. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and / or the charging management module 140, and supplies power to the processor 110, internal memory 121, external memory, display screen 194, camera 193, wireless communication module 160, and the like.
  • the power management module 141 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 141 may also be disposed in the processor 110.
  • the power management module 141 and the charging management module 140 may also be set in the same device.
  • the wireless communication function of the electronic device 10 can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 10 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G / 3G / 4G / 5G and the like applied to the electronic device 10.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and so on.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1 and filter, amplify, etc. the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor and convert it to electromagnetic wave radiation through the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110, and may be set in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 10, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and other wireless communication solutions.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters electromagnetic wave signals, and transmits the processed signals to the processor 110.
  • the wireless communication module 160 may also receive the signal to be transmitted from the processor 110, frequency-modulate it, amplify it, and convert it to electromagnetic wave radiation through the antenna 2.
  • the antenna 1 of the electronic device 10 and the mobile communication module 150 are coupled, and the antenna 2 and the wireless communication module 160 are coupled so that the electronic device 10 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the electronic device 10 realizes a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations, and is used for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the GPU can be used to perform the following computations: the highlight and diffuse reflection models in the face light effect rendering process; the occlusion relationship between the light source and each mesh patch; the fusion result of the highlight layer, the diffuse reflection layer, and the shadow layer; the Gaussian blur of the background part of the RGB image during overall light effect rendering; the projected texture coordinates of each mesh vertex; the fusion result of the portrait texture projection layer, the face light effect rendering result, and the original RGB image in the portrait area; and the fusion result of the background texture projection layer and the Gaussian-blurred background in the background area.
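  • Two of the computations listed above, the weighted fusion of the highlight, diffuse reflection, and shadow layers and the Gaussian blur of the background part, are sketched below in a simplified CPU form (function names and weights are hypothetical; on the device these computations run on the GPU):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_layers(diffuse, highlight, shadow, w_diffuse=1.0, w_highlight=0.6, w_shadow=0.4):
    """Weighted fusion of the diffuse reflection, highlight and shadow layers
    (all HxWx3 float arrays in [0, 1])."""
    return np.clip(w_diffuse * diffuse + w_highlight * highlight - w_shadow * shadow, 0.0, 1.0)

def blur_background(rgb, portrait_mask, sigma=5.0):
    """Gaussian blur applied only to the background part of the RGB image."""
    blurred = gaussian_filter(rgb.astype(np.float32), sigma=(sigma, sigma, 0))
    return np.where(portrait_mask[..., None], rgb.astype(np.float32), blurred)
```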
  • the display screen 194 is used to display pictures, videos and the like.
  • the display screen 194 includes a display panel.
  • the display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light emitting diode (QLED), or the like.
  • the electronic device 10 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the display screen 194 may be used to display pictures to be taken, pictures rendered with light effects, and the like.
  • the electronic device 10 can realize a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP processes the data fed back by the camera 193. For example, when taking a picture, the shutter is opened, the light is transmitted to the camera photosensitive element through the lens, and the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, which is converted into a picture visible to the naked eye.
  • ISP can also optimize algorithms for noise, brightness and skin color of pictures. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In some embodiments, the ISP may be set in the camera 193.
  • the camera 193 is used to capture still pictures or videos.
  • the object generates an optical picture through the lens and projects it onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (charge coupled device, CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital picture signal.
  • the ISP outputs the digital picture signal to the DSP for processing.
  • DSP converts digital picture signals into standard RGB, YUV and other format picture signals.
  • the electronic device 10 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • the camera 193 can be divided into two types: a front camera and a rear camera. The front camera is a camera located on the front of the electronic device 10, and the rear camera is a camera located on the back of the electronic device 10.
  • the digital signal processor is used to process digital signals. In addition to processing digital picture signals, it can also process other digital signals. For example, when the electronic device 10 is selected at a frequency point, the digital signal processor is used to perform Fourier transform on the energy at the frequency point.
  • Video codec is used to compress or decompress digital video.
  • the electronic device 10 may support one or more video codecs. In this way, the electronic device 10 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
  • NPU is a neural-network (NN) computing processor.
  • the NPU can realize applications such as intelligent cognition of the electronic device 10, for example: picture recognition, face recognition, voice recognition, text understanding, and the like.
  • the function of intelligently recognizing the shooting scene of the electronic device 10 can be realized by the NPU, and the function of the intelligent recognition of the illumination direction of the electronic device 10 can also be realized by the NPU.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 10.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes instructions stored in the internal memory 121 to execute various functional applications and data processing of the electronic device 10.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area may store an operating system and application programs required by at least one function (such as a sound playback function, an image playback function, etc.).
  • the storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 10 and the like.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and so on.
  • the internal memory 121 can store pictures taken by the camera application, and can also be used to store a mapping relationship table between shooting scenes and matching light effect templates, as well as the recognition result of the shooting scene, the recognition result of the face illumination direction, the portrait segmentation result, the generated mesh data, the facial feature segmentation result, and the like.
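  • The mapping relationship table between shooting scenes and matching light effect templates could, purely as an illustration, take the form of a simple lookup table; the scene labels and pairings below are invented and not taken from this application:

```python
# hypothetical mapping from the recognized shooting scene to the recommended template
SCENE_TO_TEMPLATE = {
    "outdoor daylight": "tree shadow light",
    "indoor window":    "window shadow light",
    "stage spotlight":  "theater light",
    "low light":        "soft light",
}

def recommend_light_effect_template(recognized_scene, default="soft light"):
    """Return the light effect template option to highlight for the recognized scene."""
    return SCENE_TO_TEMPLATE.get(recognized_scene, default)
```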
  • the electronic device 10 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, and an application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and also used to convert analog audio input into digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
  • the speaker 170A also called “speaker” is used to convert audio electrical signals into sound signals.
  • the electronic device 10 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B also known as "handset" is used to convert audio electrical signals into sound signals.
  • the electronic device 10 answers a phone call or voice message, the voice can be received by bringing the receiver 170B close to the ear.
  • the microphone 170C, also called a "mic", is used to convert sound signals into electrical signals.
  • the user can make a sound with the mouth close to the microphone 170C, so as to input a sound signal into the microphone 170C.
  • the electronic device 10 may be provided with at least one microphone 170C. In other embodiments, the electronic device 10 may be provided with two microphones 170C. In addition to collecting sound signals, it may also implement a noise reduction function. In other embodiments, the electronic device 10 may further include three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the headset interface 170D is used to connect wired headsets.
  • the earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the gyro sensor 180B may be used to determine the movement posture of the electronic device 10.
  • the air pressure sensor 180C is used to measure air pressure.
  • the magnetic sensor 180D includes a Hall sensor.
  • the acceleration sensor 180E can detect the magnitude of acceleration of the electronic device 10 in various directions (generally three axes).
  • the distance sensor 180F is used to measure the distance.
  • the electronic device 10 can measure the distance by infrared or laser. In some embodiments, when shooting scenes, the electronic device 10 may use the distance sensor 180F to measure distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the ambient light sensor 180L is used to sense the brightness of ambient light.
  • the electronic device 10 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 10 is in a pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the temperature sensor 180J is used to detect the temperature.
  • the touch sensor 180K is also known as a "touch panel".
  • the touch sensor 180K may be provided on the display screen 194, and the touch sensor 180K and the display screen 194 constitute a touch screen, also called a "touch screen".
  • the touch sensor 180K is used to detect a touch operation acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation may be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 10, which is different from the location where the display screen 194 is located.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the key 190 includes a power-on key, a volume key, and the like.
  • the key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 10 can receive key input and generate key signal input related to user settings and function control of the electronic device 10.
  • the motor 191 may generate a vibration prompt.
  • the motor 191 can be used for vibration notification of incoming calls and can also be used for touch vibration feedback.
  • touch operations applied to different applications may correspond to different vibration feedback effects.
  • the motor 191 can also produce different vibration feedback effects corresponding to different application scenarios (for example: time reminders, receiving messages, alarm clocks, games, etc.).
  • touch vibration feedback effects can also be customized.
  • the indicator 192 may be an indicator light, which may be used to indicate a charging state, a power change, and may also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be inserted into or removed from the SIM card interface 195 to achieve contact and separation with the electronic device 10.
  • the electronic device 10 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • the same SIM card interface 195 can insert multiple cards at the same time. The types of the multiple cards may be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 can also be compatible with external memory cards.
  • the electronic device 10 interacts with the network through the SIM card to realize functions such as call and data communication.
  • the electronic device 10 uses eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 10 and cannot be separated from the electronic device 10.
  • the 3D sensing module 196 can acquire depth data, and the depth data acquired during the photographing process can be passed to the GPU to perform 3D rendering of the image acquired by the camera 193.
  • the 3D sensing module 196 may be a time-of-flight (TOF) 3D sensing module or a structured light 3D sensing module, which may be disposed on the top of the electronic device 10, for example at the "bangs" position of the electronic device 10 (i.e., area AA shown in FIG. 1B). In addition to the 3D sensing module 196, the area AA may also include a camera 193, a proximity light sensor 180G, a receiver 170B, a microphone 170C, and the like.
  • In the embodiments of the present application, the case where the structured light 3D sensing module 196 is integrated in the electronic device 10 is used as an example for description.
  • the structured light 3D sensing module 196 arranged in the electronic device 10 includes modules such as an infrared camera 196-1 and a dot matrix projector 196-2.
  • the dot matrix projector 196-2 includes a high-power laser (such as VCSEL) and diffractive optical components, etc., that is, a structured light emitter, used to emit a "structured" infrared laser light using a high-power laser and project it on an object surface.
  • the process of acquiring depth data by the structured light 3D sensing module 196 described above is: when the processor 110 detects that the current shooting mode is the portrait mode, the dot matrix projector 196-2 is controlled to start.
  • the high-power laser in the dot matrix projector 196-2 emits infrared laser light. This infrared laser light passes through structures such as the diffractive optical components in the dot matrix projector 196-2 to generate spots of "structured" light (for example, about 30,000 spots), which are projected onto the surface of the shooting target.
  • the array formed by these structured light spots is reflected at different positions on the surface of the shooting target, and the infrared camera 196-1 captures the structured light spots reflected from the surface of the shooting target, thereby obtaining depth data at different positions on the surface of the shooting target. The acquired depth data is then uploaded to the processor 110.
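  • As general background (not a formula stated in this application), structured light modules of this kind recover depth by triangulation between the projector and the infrared camera: the observed shift of each spot (its disparity) maps to depth roughly as Z = f * b / d, where f is the camera focal length in pixels and b is the projector-camera baseline. A minimal sketch with hypothetical parameter names:

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Standard triangulation relation Z = f * b / d: the pixel shift of a projected
    spot between its expected and observed position maps to a depth in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px
```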
  • the depth data acquired by the structured light 3D sensing module 196 can also be used for face recognition, for example for recognizing the owner's face when the electronic device 10 is unlocked.
  • the structured light 3D sensing module 196 may also include other modules such as a floodlight illuminator, an infrared image sensor, and the aforementioned proximity light sensor 180G.
  • floodlight illuminators include low-power lasers (such as VCSEL) and homogenizers, etc., which are used to emit "unstructured" infrared laser light using a low-power laser and project it on the surface of an object.
  • the proximity light sensor 180G senses that the object is approaching the electronic device 10, thereby sending a signal to the processor 110 of the electronic device 10 that the object is approaching.
  • the processor 110 receives the signal that the object is approaching, and controls the flood illuminator to start.
  • the low-power laser in the flood illuminator projects infrared laser light onto the surface of the object.
  • the object surface reflects the infrared laser light projected by the floodlight illuminator.
  • the infrared camera captures the infrared laser light reflected by the object surface, thereby acquiring image information on the object surface, and then uploading the acquired image information to the processor 110 .
  • the processor 110 determines whether the object approaching the electronic device 10 is a human face according to the uploaded image information. When the processor 110 determines that the object close to the electronic device 10 is a human face, the dot matrix projector 196-2 is controlled to start.
  • the subsequent specific implementation is similar to the foregoing specific implementation when the current shooting mode is detected as the portrait mode, and depth data is acquired and uploaded to the processor 110.
  • the processor 110 compares and calculates the uploaded depth data with the user's facial feature data pre-stored in the electronic device 10, and recognizes whether the face close to the electronic device 10 is the user's face of the electronic device 10, if If yes, control the electronic device 10 to unlock; if no, control the electronic device 10 to continue to maintain the locked state.
  • the software system of the electronic device 10 may adopt a layered architecture, event-driven architecture, micro-core architecture, micro-service architecture, or cloud architecture.
  • the embodiment of the present invention takes an Android system with a layered architecture as an example to exemplarily explain the software structure of the electronic device 10.
  • FIG. 1C is a block diagram of the software structure of the electronic device 10 according to an embodiment of the present invention.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor.
  • the layers communicate with each other through a software interface.
  • the Android system is divided into four layers, from top to bottom are the application layer, the application framework layer, the Android runtime and the system library, and the kernel layer.
  • the application layer may include a series of application packages.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message.
  • the application framework layer provides an application programming interface (API) and programming framework for applications at the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and so on.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, intercept the screen, etc.
  • Content providers are used to store and retrieve data, and make these data accessible to applications.
  • the data may include videos, pictures, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text and controls for displaying pictures.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes an SMS notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the electronic device 10. For example, the management of call status (including connection, hang up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear after a short stay without user interaction.
  • Android Runtime includes core library and virtual machine. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library contains two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in the virtual machine.
  • the virtual machine executes the Java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library may include multiple functional modules. For example: surface manager (surface manager), media library (Media library), 3D graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • the media library supports a variety of commonly used audio, video format playback and recording, and still picture files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to realize 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least the display driver, camera driver, audio driver, and sensor driver.
  • the following describes the workflow of the software and hardware of the electronic device 10 in combination with the usage scene of capturing a photograph.
  • when the touch sensor 180K receives a touch operation, the corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into original input events (including touch coordinates, time stamps and other information of touch operations).
  • the original input event is stored in the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Take the touch operation being a click operation and the control corresponding to the click operation being the camera application icon as an example.
  • the camera application calls the interface of the application framework layer to start the camera application, and then starts the camera driver by calling the kernel layer.
  • the camera 193 captures a still image or video.
  • FIG. 2 exemplarily shows a user interface for an application menu on the electronic device 10.
  • the user interface 20 in FIG. 2 may include a status bar 202, a time component icon 204 and a weather component icon 203, icons of multiple applications such as a camera icon 201, a WeChat icon 208, a settings icon 209, an album icon 207, a Weibo icon 206, Alipay icon 205 and the like, and the interface 20 may further include a page indicator 210, a phone icon 211, a short message icon 212, and a contact icon 213. among them:
  • the status bar 202 may include: an operator indicator (for example, the operator's name "China Mobile"), one or more signal strength indicators of a wireless fidelity (Wi-Fi) signal, one or more signal strength indicators of a mobile communication signal (also called a cellular signal), and a battery status indicator.
  • the time component icon 204 may be used to indicate the current time, such as date, day of the week, hour and minute information, and so on.
  • the weather component icon 203 can be used to indicate weather types, such as cloudy to sunny, light rain, etc., and can also be used to indicate information such as air temperature.
  • the page indicator 210 can be used to indicate which page of the application the user is currently browsing. Users can slide the area of multiple application icons left and right to browse the application icons in other pages.
  • FIG. 2 only exemplarily shows the user interface on the electronic device 10, and should not constitute a limitation on the embodiments of the present application.
  • the electronic device 10 may detect a user operation acting on the camera icon 201, and in response to the operation, the electronic device 10 may display a user interface for taking pictures.
  • the user interface may be the user interface 30 involved in the embodiment of FIG. 3. That is to say, the user can click the camera icon 201 to open the user interface for taking pictures.
  • FIG. 3 exemplarily shows a user interface for image capturing.
  • the user interface may be a user interface that the user clicks on the camera icon 201 in the embodiment of FIG. 2 to open, but not limited to this, the user may also open a user interface for taking photos in other applications, for example, the user clicks a shooting control in WeChat To open the user interface for taking pictures.
  • the user interface 30 for taking pictures may include: a framing frame 301, a shooting control 302, a shooting mode list 303, a control 304, and a control 305. among them:
  • the framing frame 301 can be used to display the picture acquired by the camera 193.
  • the electronic device can refresh the display content in real time.
  • the camera 193 for acquiring pictures may be a rear camera or a front camera.
  • the shooting control 302 can be used to monitor user operations that trigger shooting.
  • the electronic device may detect a user operation on the shooting control 302 (such as a click operation on the shooting control 302), and in response to the operation, the electronic device 10 may determine the captured picture and display the captured picture in 305. That is to say, the user can click the shooting control 302 to trigger shooting.
  • the shooting control 302 may be a button or other forms of controls.
  • One or more shooting mode options may be displayed in the shooting mode list 303.
  • the electronic device 10 can detect the user operation acting on the shooting mode option, and in response to the operation, the electronic device 10 can turn on the shooting mode selected by the user.
  • the electronic device can also detect a sliding operation in the shooting mode list 303 (such as a sliding operation to the left or right), and in response to the operation, the electronic device 10 can switch the shooting mode options displayed in the shooting mode list 303 in order to Users browse more shooting mode options.
  • the shooting mode options may be icons or other forms of options.
  • the shooting mode list 303 may include: portrait shooting mode icon 303A, photo shooting mode icon 303B, video shooting mode icon 303C, large aperture shooting mode icon 303D, night scene shooting mode icon 303E, slow motion shooting mode icon 303F.
  • the control 304 may be used to monitor user operations that trigger camera switching.
  • the electronic device 10 can detect a user operation acting on the control 304 (such as a click operation on the control 304), and in response to the operation, the electronic device 10 can switch the camera (such as switching the rear camera to the front camera, or the front Set the camera to the rear camera).
  • the control 305 can be used to monitor user operations that trigger the opening of the album.
  • the electronic device 10 may detect a user operation (such as a click operation on the control 305) acting on the control 305, and in response to the operation, the electronic device 10 may open an album and display the newly saved picture.
  • FIG. 4 exemplarily shows a UI embodiment of the user interface 30 for the user to select the portrait shooting mode.
  • the electronic device 10 can detect a user operation acting on the portrait shooting mode option in the shooting mode list 303 (such as a click operation on the portrait shooting mode icon 303A), and in response to the operation, the electronic device 10 can Turn on the first shooting mode.
  • the electronic device 10 may also update the display state of the portrait shooting mode option, and the updated display state may indicate that the portrait shooting mode has been selected.
  • the updated display state may be that the text information "Portrait" corresponding to the shooting mode icon 303A is highlighted.
  • the updated display state can also present other interface expressions, such as the font of the text information "Portrait" becoming larger, the text information "Portrait" being framed, the text information "Portrait" being underlined, the color of the icon 303A being deepened, and so on.
  • the electronic device 10 may also display the control 306 in the user interface 30.
  • the control 306 can be used to monitor user operations that open the option bar of the light effect template.
  • the electronic device 10 may detect a user operation acting on the control 306, and in response to the operation, the electronic device 10 may display a light effect template option bar 307 in the user interface 30, refer to FIG. 5.
  • the light effect template option bar 307 includes two or more light effect template options.
  • the light effect template option in the light effect template option bar 307 can be used to monitor the user's selection operation. Specifically, the electronic device may detect a user operation (such as a click operation on "light effect 1") acting on a light effect template option in the light effect template option bar 307, and in response to the operation, the electronic device may determine the selected light effect template as the light effect template for processing the captured picture.
  • the selected light effect template may be a light effect template corresponding to the light effect template option used for the operation. For example, if the operation is an operation of clicking “light effect 1”, the selected light effect template is light effect template 1 corresponding to “light effect 1”.
  • the electronic device 10 may update the display state of the selected light effect template option, and the updated display state may indicate that the light effect template has been selected.
  • the updated display state may be to highlight the text information "light effect 1" corresponding to the selected light effect template icon.
  • the updated display state can also present other interface expressions, such as the font of the text information "light effect 1" becoming larger, the text information "light effect 1" being framed, the text information "light effect 1" being underlined, the color of the selected light effect template icon being deepened, and so on.
  • the selected light effect template is called a first light effect template.
  • the electronic device 10 can also detect a sliding operation (such as a left or right sliding operation) in the light effect template option bar 307, and in response to the operation, the electronic device 10 can switch the display in the light effect template option bar 307 Light effect template options, so that users can browse more light effect template options.
  • the electronic device 10 may display the light effect template option in the light effect template option bar according to the current shooting scene.
  • the light effect template option bar is used to monitor user operations for selecting the light effect template. Refer to FIG. 6.
  • in response to the operation, the electronic device 10 may also directly display the light effect template option bar in the user interface 30, without displaying the control 306 or monitoring, through the control 306, the user operation of opening the light effect template option bar.
  • FIG. 6 exemplarily shows a UI embodiment in which the electronic device 10 recommends options of light effect templates according to the current shooting scene.
  • as shown in FIG. 5, the arrangement order of the light effect template options included in the light effect template option bar 307 is as follows from left to right: light effect 1, light effect 2, light effect 3, light effect 4, light effect 5.
  • as shown in FIG. 6, the arrangement order of the light effect template options included in the light effect template option bar 307 is as follows from left to right: light effect 4, light effect 1, light effect 2, light effect 3, light effect 5.
  • the electronic device 10 can use the light effect template 4 matching the shooting scene "white clouds" as the recommended light effect template, and display the corresponding option "light effect 4" (307A) in the first position on the left side of the light effect template option bar 307.
  • the example is only an embodiment provided by the present application, and is not limited thereto, and other embodiments may also be possible.
  • the electronic device 10 can recommend the light effect template option according to the current shooting scene.
  • the display state of the light effect template option matching the current shooting scene is the first display state.
  • the first display state can be used to highlight the option, prompting the user that the template corresponding to the option is a template suitable for the current shooting scene, which is convenient for the user to quickly identify and select the option, and can effectively recommend the option to the user.
  • the first display state can be achieved in one or more of the following ways: the display position of the option in the option bar is the first position (such as the first display position on the left, the display position in the middle, etc.), the option is Highlighted, the font corresponding to the text information of the option (such as "Light Effect 1") is a large font, and the icon corresponding to the option presents a dynamic change (for example, a heartbeat effect).
  • FIG. 7 exemplarily shows a UI embodiment in which the user interface 30 is used for a user to take a picture.
  • the option "light effect 4" is highlighted, indicating that the electronic device 10 has determined that the light effect template 4 corresponding to the option "light effect 4" is the first light effect template.
  • the electronic device 10 can detect a user operation acting on the shooting control 302 (such as a click operation on the shooting control 302), and in response to this operation, the electronic device 10 takes a picture and processes the picture using the light effect parameters corresponding to the first light effect template.
  • the picture taken by the electronic device 10 may be a picture taken by the electronic device 10 at the moment when the above user operation is detected.
  • the picture taken by the electronic device 10 may also be a series of pictures taken by the electronic device 10 within a period of time before the time when the user operation is detected. This period of time may be, for example, 5 ms, 10 ms, or the like.
  • the electronic device 10 may also display a thumbnail of the picture in the control 305, refer to FIG. 8.
  • the thumbnail of the picture contains fewer pixels than the picture.
  • the electronic device 10 can detect a user operation (such as a click operation on the control 305) acting on the control 305.
  • the electronic device 10 may display the user interface 40 of the picture processed by the light effect parameter of the first light effect template.
  • the user interface 40 may refer to FIG. 9. That is to say, the user can click on the control 305 to open the user interface 40 for displaying pictures.
  • the user can also open the user interface for displaying pictures in other applications, for example, the user clicks the icon 207 of the album application in the interface 20 to open the user interface for displaying pictures, and for example, the user clicks in WeChat Photo controls to open the user interface for displaying pictures.
  • 9-10 illustrate the user interface 40 by way of example.
  • the user interface 40 may include: a picture content display area 401 and controls 402.
  • the picture content display area 401 is used to display the picture generated after the light effect parameter processing of the first light effect template in the first shooting mode, and the picture may be referred to as a first picture.
  • the electronic device 10 may detect a user operation (such as a click operation on the control 402) acting on the control 402, and in response to the operation, the electronic device 10 may also display in the user interface 40: a light source indicator 403, a light intensity indicator 404, the light effect template option bar 405, the cancel control 406 and the save control 407, refer to FIG. 10. among them:
  • the light source indicator 403 is a virtual light source indicator set according to the light direction, and can be used to indicate the light direction of the actual light source in the shooting scene.
  • the electronic device 10 can recognize the lighting direction according to the image of the human face displayed in the view frame 301.
  • the specific way of identifying the light direction according to the image of the human face will be described in detail in the subsequent embodiment of FIG. 17, and will not be described in detail here.
  • the electronic device 10 may detect a user operation (such as a sliding operation on the light source indicator 403) acting on the light source indicator 403, and in response to the operation, the electronic device 10 may update the display light source indicator 403.
  • the electronic device 10 may also update the picture displayed in the picture content display area 401 according to the light direction indicated by the updated light source indicator 403.
  • the light intensity adjuster 404 can be used to indicate the light intensity of the light source.
  • the electronic device 10 may detect a first user operation (such as a left-slide operation on the light intensity adjuster 404) acting on the light intensity adjuster 404, and in response to the operation, the electronic device 10 may update the display light intensity adjuster 404, The light intensity indicated by the updated light intensity adjuster 404 becomes weaker. In response to this operation, the electronic device 10 may also update the picture in the display picture content display area 401 according to the weakened light intensity.
  • the electronic device 10 may also detect a second user operation (such as a right slide operation on the light intensity adjuster 404) acting on the light intensity adjuster 404, and in response to the operation, the electronic device 10 updates the display light intensity adjuster 404, The light intensity indicated by the updated light intensity adjuster 404 becomes stronger.
  • the electronic device 10 may also update the picture in the display picture content display area 401 according to the increased light intensity.
  • the light intensity adjuster is not limited to the horizontal light intensity adjuster shown in 404; there may be other forms of light intensity adjusters, such as a vertical light intensity adjuster, or a light intensity adjuster presented in the form of plus and minus signs, which is not limited in the embodiments of the present application.
  • the first user operation is not limited to the left-slide operation, and may also be another sliding or clicking user operation; the second user operation is not limited to the right-slide operation, and may also be a sliding-up or clicking user operation. The embodiments of the present application do not limit this.
  • the light effect template option bar 405 may include two or more light effect template options, and the options of the first light effect template in the light effect template option bar 405 are specially marked (as shown in the option "light effect 4" in FIG. 10) Indicates that the picture displayed in the picture content display area 401 has been processed by the light effect parameter corresponding to the first light effect template.
  • the electronic device 10 can detect a user operation (such as a click operation on the option "light effect 3") acting on the second light effect template option in the light effect template option bar 405, and in response to the operation, the electronic device 10 updates the display status of the second light effect template option and the first light effect template option.
  • the second light effect template is other light effect templates except the first light effect template.
  • the display state of the updated second light effect template option may indicate that the second light effect template has been selected, and the updated display state of the first light effect template option may indicate that the first light effect template has been deselected.
  • the display state of the updated second light effect template option may be the same as the display state of the first light effect template option before update. For details, reference may be made to the description in the embodiment of FIG. 6, which is not repeated here.
  • the display state of the updated first light effect template option may be consistent with the display state of the second light effect template option before update.
  • the electronic device 10 may also update the picture in the displayed picture content display area 401 according to the light effect parameter corresponding to the second light effect template.
  • the adjustment of the light direction by the light source indicator 403, the adjustment of the light intensity by the light intensity adjuster 404, and the switching of the light effect template by the light effect template option bar 405 can be called light effect editing.
  • the cancel control 406 may be used to monitor user operations that trigger the cancellation of light effect editing.
  • the electronic device 10 can detect a user operation (such as a click operation on the cancel control 406) acting on the cancel control 406, and in response to the operation, the electronic device 10 can cancel the light effect editing of the first picture and update the picture displayed in the picture content display area 401 to the first picture. That is to say, the user can click the cancel control 406 to trigger the cancellation of the light effect editing of the first picture.
  • the save control 407 can be used to monitor user operations that trigger to save the second picture.
  • the second picture is a picture displayed in the picture content display area 401.
  • the electronic device 10 can detect a user operation (such as a click operation on the save control 407) acting on the save control 407, and in response to the operation, the electronic device 10 can save the picture displayed in the picture content display area 401.
  • the second picture may be a picture generated after editing the light effect of the first picture. That is to say, the user can click the save control 407 to trigger saving of the second picture generated after editing the light effect of the first picture.
  • FIG. 11 is a schematic flowchart of an image processing method provided by the present application.
  • the image processing methods provided in this application are mainly divided into three major processes: taking pictures, rendering of light effects, and editing of light effects.
  • the following describes the method with the electronic device as the execution subject:
  • the photographing process mainly includes the following S101-S105.
  • S101 The electronic device starts the first shooting mode.
  • the manner in which the electronic device 10 turns on the first shooting mode may include but is not limited to the following:
  • the electronic device 10 can detect the user operation acting on the portrait shooting mode option in the user interface 30 through the touch sensor 180K to turn on the first shooting mode.
  • the electronic device 10 can detect the user operation acting on the control 304 in the user interface 30 through the touch sensor 180K to turn on the first shooting mode. That is to say, when the electronic device 10 is switched from the rear camera to the front camera, the electronic device 10 can start the first shooting mode.
  • the electronic device 10 can also determine whether there is a human face in the framing frame 301, and if so, further determine whether the human face meets the requirements. If there is no face, a prompt message such as "no face detected” is displayed in the framing frame 301. If it is determined that the face does not meet the requirements, a prompt message such as "a face that meets the requirements is not detected” is displayed in the framing frame 301.
  • the face that meets the requirements can be one or any combination of the following: a single face, the angle of the face does not exceed a first threshold, and the ratio of the area of the face in the framing frame 301 to the total area of the picture to be taken is greater than or equal to a second threshold. The detection method of the face angle is described in subsequent embodiments, and will not be described in detail here.
  • S102 The electronic device recognizes the shooting scene of the picture to be taken.
  • the electronic device may acquire the RGB data of the picture to be shot, input the RGB data of the picture to be shot into the first model, and output the identified shooting scene.
  • the first model is trained from RGB data of a large number of pictures of known shooting scenes.
  • the output result of the first model may be a binary character string.
  • the value of the character string represents a shooting scene, and the correspondence between the character string and the shooting scene may be stored in the internal memory 121 of the electronic device 10 in the form of a table. For example, 001 represents scene 1, 010 represents scene 2, 011 represents scene 3, 100 represents scene 4, and so on.
  • the electronic device 10 may search for the shooting scene corresponding to the character string in the table according to the character string output by the first model.
  • the number of digits of the character string can be determined according to all kinds of shooting scenes.
  • the above output form of the first model is only an exemplary description; there may be other output forms in a specific implementation, which is not limited in the embodiments of the present application.
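  • the table lookup described above can be sketched in a few lines of Python; the helper name decode_scene and the fallback value are assumptions, while the example codes follow the correspondence given above (001 for scene 1, 010 for scene 2, and so on):

```python
# Minimal sketch of mapping the first model's binary-string output to a
# shooting scene via a lookup table. The 3-bit encoding follows the example
# correspondence in the text; scene names beyond that are illustrative.

SCENE_TABLE = {
    "001": "scene 1",
    "010": "scene 2",
    "011": "scene 3",
    "100": "scene 4",
}

def decode_scene(model_output_bits: str) -> str:
    """Look up the shooting scene for a binary string output by the first model."""
    # An unknown code falls back to a default label rather than failing.
    return SCENE_TABLE.get(model_output_bits, "unknown scene")

print(decode_scene("011"))  # -> "scene 3"
```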
  • the electronic device 10 can also recognize the lighting direction of the human face, and the recognition result of the lighting direction can be used in the subsequent light effect rendering process and light effect editing process.
  • the recognition process of the light direction is described in the subsequent embodiments, and will not be described in detail here.
  • S103 The electronic device displays an option bar of the light effect template according to the shooting scene of the picture to be taken.
  • the electronic device 10 may store a mapping relationship table between the shooting scene and the matching light effect template. After the electronic device 10 finds the light effect template matching the current shooting scene according to the mapping relationship table, it can set the display state of the option of the matched light effect template to the first display state; for details, refer to the description in the embodiment of FIG. 6, which is not repeated here.
  • the following exemplarily shows several mapping relationship tables.
  • the shooting scene and the matching light effect template are in a one-to-one correspondence. As shown in Table 1.
  • Table 1 The mapping relationship between the shooting scene and the matching light effect template
  • the option corresponding to the light effect template 1 is displayed as “light effect 1” in the interface 30, and the light effect template 2 to the light effect template 5 are similar and will not be described in detail.
  • a shooting scene in the mapping relationship table may correspond to multiple light effect templates with different matching degrees. Take a shooting scene corresponding to three light effect templates with different matching degrees (high, medium and low) as an example, as shown in Table 2.
  • the light effect template (high) in Table 2 represents a light effect template with a high degree of matching, the light effect template (medium) represents a light effect template with a medium degree of matching, and the light effect template (low) represents a light effect template with a low degree of matching.
  • the electronic device 10 may search for the three light effect templates matching the shooting scene according to Table 2, and display the options corresponding to the matched light effect templates at the front of the light effect template option bar in descending order of matching degree.
  • the representation of the relationship between the above shooting scene and the matched light effect template in the form of a mapping relationship table is only an exemplary description, and there may be other forms in a specific implementation, which is not limited in the embodiments of the present application.
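  • the ordering of the recommended options based on a Table 2 style mapping can be sketched as follows; the table contents, option names and matching degrees below are illustrative and are not the contents of Table 1 or Table 2:

```python
# Minimal sketch of ordering light effect template options by matching degree.
# Matched templates come first (high to low), the remaining options keep
# their default order. Table contents are assumed for illustration only.

MATCH_TABLE = {
    # scene: [(template option, matching degree)], degree: 2=high, 1=medium, 0=low
    "scene 1": [("light effect 4", 2), ("light effect 1", 1), ("light effect 2", 0)],
    "scene 2": [("light effect 3", 2), ("light effect 5", 1), ("light effect 1", 0)],
}

ALL_OPTIONS = ["light effect 1", "light effect 2", "light effect 3",
               "light effect 4", "light effect 5"]

def option_order(scene: str) -> list:
    """Return the option bar order for the recognized shooting scene."""
    matched = sorted(MATCH_TABLE.get(scene, []), key=lambda t: -t[1])
    matched_names = [name for name, _ in matched]
    rest = [name for name in ALL_OPTIONS if name not in matched_names]
    return matched_names + rest

print(option_order("scene 1"))
# -> ['light effect 4', 'light effect 1', 'light effect 2', 'light effect 3', 'light effect 5']
```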
  • S104 The electronic device receives a user operation for selecting the first light effect template.
  • the user operation for selecting the first light effect template may be a user operation acting on the first light effect template option in the option bar of the light effect template, as shown by the click operation on the icon 307A in the embodiment of FIG. 6, and details are not described here.
  • in response to this operation, the first light effect template is selected, so that after the electronic device 10 receives a photographing instruction, the electronic device 10 can determine the first light effect template as the light effect template used for processing the captured picture; for details, reference may be made to the description of the embodiment in FIG. 5, which is not repeated here.
  • S105 The electronic device receives a photographing instruction when the first light effect template has been selected.
  • the photographing instruction may be an instruction generated by a user operation acting on the photographing control 302, which can be specifically seen in the description of the embodiment of FIG. 7, which is not repeated here.
  • the electronic device 10 may acquire RGB data and depth data at time t1.
  • the RGB data and the depth data need to be aligned to obtain RGBD data (RGB data and depth data) that are aligned in time and coordinates for the subsequent light effect rendering process and light effect editing process.
  • the RGB data collection device may be a rear camera
  • the depth data collection device may be a rear camera
  • the electronic device 10 may calculate depth data based on the RGB data collected by the rear camera. After the electronic device 10 starts the first shooting mode, the depth data can be calculated in real time.
  • the RGB data collection device may be a front camera, and the depth data collection device may be a 3D sensing module 196. After starting the first shooting mode, the electronic device 10 can collect depth data in real time.
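  • the time alignment mentioned above can be sketched as picking, for the RGB frame captured at time t1, the depth frame whose timestamp is closest; the frame structure and field names below are assumptions:

```python
# Minimal sketch of time-aligning RGB and depth data by timestamp.

def nearest_depth_frame(rgb_timestamp_ms: int, depth_frames: list) -> dict:
    """depth_frames: list of {'t': timestamp_ms, 'depth': depth_map}."""
    return min(depth_frames, key=lambda f: abs(f["t"] - rgb_timestamp_ms))

depth_frames = [{"t": 100, "depth": "D100"}, {"t": 133, "depth": "D133"}]
print(nearest_depth_frame(130, depth_frames)["t"])  # -> 133
```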
  • the process of light effect rendering mainly includes S106. It can be known from the foregoing description in S105 that the depth data can be calculated from the RGB data acquired by the rear camera, or can also be acquired by the 3D sensing module 196. The depth data used in the light effect rendering process involved in the embodiments of the present application will be described by taking the data collected by the 3D sensing module 196 as an example.
  • S106 The electronic device uses the light effect parameters corresponding to the first light effect template to process the captured picture to generate the first picture.
  • the light effect parameters corresponding to the first light effect template are used to perform light effect rendering on the captured picture.
  • the process of light effect rendering may include face light effect rendering, or include face light effect rendering and overall light effect rendering.
  • face light effect rendering is the light effect rendering of the face part in the picture
  • overall light effect rendering is the light effect rendering of the entire picture.
  • the process of light effect editing mainly includes the following S107-S108.
  • S107 The electronic device receives an instruction of the user to edit the light effect of the first picture.
  • the electronic device 10 displays the first picture generated in S105 in the user interface 40.
  • the instruction of the user to edit the light effect of the first picture may be generated by the electronic device 10 detecting a user operation acting on the control 402.
  • the electronic device 10 may further display the light source indicator 403, the light intensity indicator 404, the light effect template option bar 405, the cancel control 406, and the save control 407 in the user interface 40.
  • the electronic device 10 can also display an indicator of the projection position of the texture pattern in the user interface 40, and the user can manually adjust the indicator to change the projection pattern on the background and portrait to enhance The interaction between the user and the electronic device 10.
  • S108 The electronic device generates a second picture and saves the second picture.
  • the electronic device 10 detects a user operation acting on the light source indicator 403, or a user operation acting on the light intensity adjuster 404, or a user operation acting on the second light effect template option in the light effect template option bar 405; in response to the above user operation, the electronic device 10 may generate a second picture and display it in the picture content display area 401.
  • after the electronic device 10 detects a user operation acting on the save control 407, in response to the operation, the electronic device 10 saves the second picture.
  • intermediate results generated in the above light effect rendering process may be stored in the internal memory 121 of the electronic device 10, which allows the user to directly call these intermediate results when editing the light effect of the first picture, thereby reducing the amount of calculation.
  • the user can manually adjust the light direction, the light source intensity in the first picture, and change the light effect template, which can enhance the interaction between the user and the electronic device 10 and improve the user experience.
  • FIG. 12 shows the rendering of face light effects involved in the embodiments of the present application, which may specifically include the following steps:
  • Stage one (S201): Establish a three-dimensional model.
  • S201 The electronic device establishes a three-dimensional model based on RGBD data.
  • the process of building a three-dimensional model includes the following steps:
  • S2012 Perform hole filling operation on the RGBD data to remove outliers, that is, interpolation, so that the data is continuous, smooth and free of holes.
  • S2013 Perform filtering operation on the RGBD data after hole filling to remove noise.
  • the filtered RGBD data is then converted into a mesh (grid); the mesh is usually composed of triangles, quadrilaterals, or other simple convex polygons to simplify the rendering process.
  • in the embodiment of the present application, a grid composed of triangles is used as an example for description.
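  • the hole filling and filtering in S2012-S2013 can be sketched as follows; the neighbour-averaging fill and the 3 x 3 median filter are simple stand-ins for whatever interpolation and filtering the device actually uses:

```python
import numpy as np

# Minimal sketch: fill invalid depth values by averaging valid neighbours,
# then smooth the depth map with a 3 x 3 median filter.

def fill_holes(depth: np.ndarray, invalid=0.0, iterations=8) -> np.ndarray:
    d = depth.astype(np.float32).copy()
    for _ in range(iterations):
        holes = d == invalid
        if not holes.any():
            break
        padded = np.pad(d, 1, mode="edge")
        # Up, down, left and right neighbours of every pixel.
        neigh = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                          padded[1:-1, :-2], padded[1:-1, 2:]])
        valid = neigh != invalid
        s = (neigh * valid).sum(axis=0)
        n = valid.sum(axis=0)
        fill = np.divide(s, np.maximum(n, 1))
        d[holes & (n > 0)] = fill[holes & (n > 0)]
    return d

def median_filter3(depth: np.ndarray) -> np.ndarray:
    padded = np.pad(depth, 1, mode="edge")
    windows = np.stack([padded[i:i + depth.shape[0], j:j + depth.shape[1]]
                        for i in range(3) for j in range(3)])
    return np.median(windows, axis=0)

depth = np.array([[1.0, 0.0, 1.2], [1.1, 1.0, 0.0], [1.0, 1.1, 1.2]])
print(median_filter3(fill_holes(depth)))
```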
  • Stage two (S202-S203): Segment the picture.
  • S202 The electronic device divides the captured picture into two parts, a portrait part and a background part, and obtains a portrait segmentation result.
  • the captured picture is a picture formed by the RGB data collected by the electronic device 10 through the front camera 193 (hereinafter referred to as RGB picture).
  • the RGB picture includes multiple pixels, and the pixel value of each pixel is the RGB value.
  • the electronic device 10 may calculate the RGB data acquired by the front camera 193 to obtain a portrait segmentation map.
  • for example, an edge-based segmentation method can be used for portrait segmentation, that is, the gray value of each pixel is calculated to find the set of continuous pixels located on the boundary line between two different areas in the picture; the pixels on both sides of these continuous pixels have a significant difference in gray value, or the continuous pixels are located at a turning point where the gray value rises or falls.
  • other methods can also be used, such as threshold-based segmentation method, region-based segmentation method, graph-based segmentation method, energy functional-based segmentation method, etc.
  • the above method for segmenting portraits is only an exemplary description, which is not limited in the embodiments of the present application.
  • the portrait segmentation diagram is shown in FIG. 13, the white part is the portrait part, and the black part is the background part.
  • the portrait and background are segmented to obtain the portrait and background parts, which can also be used to render the portrait and background parts separately when the entire picture is subsequently rendered.
  • the specific rendering process can be seen in the description of subsequent embodiments, which will not be described here.
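  • the edge-based idea described above can be sketched as follows; only the detection of candidate boundary pixels from gray-value changes is shown, the threshold is an assumption, and linking edges into a closed portrait contour is not shown:

```python
import numpy as np

# Minimal sketch: convert the RGB picture to gray values and mark pixels whose
# local gray-value change is large as candidate portrait/background boundary pixels.

def gray(rgb: np.ndarray) -> np.ndarray:
    # rgb: H x W x 3, values in [0, 255]
    return rgb @ np.array([0.299, 0.587, 0.114])

def boundary_candidates(rgb: np.ndarray, threshold: float = 30.0) -> np.ndarray:
    g = gray(rgb.astype(np.float32))
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]   # horizontal gray-value difference
    gy[1:-1, :] = g[2:, :] - g[:-2, :]   # vertical gray-value difference
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold          # True where the gray value turns sharply

rgb = np.zeros((4, 6, 3)); rgb[:, 3:] = 255.0   # bright right half, dark left half
print(boundary_candidates(rgb))
```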
  • S203 The electronic device performs facial feature segmentation on the portrait part, and obtains the facial feature segmentation result.
  • the electronic device inputs the RGB data of the face part in the portrait part to the third model, and can output the segmentation result, and the segmentation result includes facial features (eyes, nose, eyebrows, mouth, ears), skin, hair, and other parts.
  • according to the segmentation result output by the third model, a facial features segmentation map can be obtained. As shown in FIG. 14, regions with different gray levels represent different parts.
  • the third model is trained from a large number of RGB data of face parts with known segmentation results.
  • the output form of the third model can represent the part to which a certain pixel belongs as a specific binary number.
  • the processor 110 may represent pixels belonging to the same part in the same grayscale, and represent pixels in different parts in different grayscales.
  • the above method for segmenting facial features is only an exemplary description, and there may be other methods for segmenting facial features in a specific implementation, which is not limited in the embodiments of the present application.
  • S201 may be executed first and then S202-S203, or S202-S203 may be executed first and then S201.
  • Stage three (S204-S206): Calculate the gray value of each pixel in the three layers separately.
  • S204 The electronic device inputs the grid data into the diffuse reflection model, and outputs the diffuse reflection layer.
  • the diffuse reflection model uses the Oren-Nayar reflection model.
  • the input data of the Oren-Nayar reflection model includes the grid data, the facial features segmentation map, and the light source intensity and light source direction when the light source illuminates each pixel; the output data of the Oren-Nayar reflection model is the gray value of each pixel, which is called the diffuse reflection layer.
  • the parameters of the Oren-Nayar reflection model belong to the light effect parameter set corresponding to the first light effect template, and are determined by the light effect template selected in S104.
  • the intensity of the light source when the light source illuminates each pixel can be calculated by the Linearly Transformed Cosines (LTC) algorithm.
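  • the Oren-Nayar reflection model mentioned in S204 is a published reflectance model; the following per-pixel Python sketch uses a common textbook form of it, and the roughness, albedo and intensity arguments are assumed inputs rather than values taken from any light effect template:

```python
import numpy as np

# Minimal per-pixel sketch of Oren-Nayar diffuse shading.

def oren_nayar(n, l, v, sigma, albedo=1.0, intensity=1.0):
    """n, l, v: unit normal, light and view vectors; sigma: roughness in radians."""
    n, l, v = (np.asarray(x, dtype=float) for x in (n, l, v))
    nl = np.clip(np.dot(n, l), 0.0, 1.0)
    nv = np.clip(np.dot(n, v), 0.0, 1.0)
    theta_i, theta_r = np.arccos(nl), np.arccos(nv)
    alpha, beta = max(theta_i, theta_r), min(theta_i, theta_r)
    s2 = sigma * sigma
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    # Azimuthal term: cosine of the angle between l and v projected on the surface.
    lp, vp = l - n * np.dot(n, l), v - n * np.dot(n, v)
    denom = np.linalg.norm(lp) * np.linalg.norm(vp)
    cos_phi = max(0.0, np.dot(lp, vp) / denom) if denom > 1e-8 else 0.0
    return albedo / np.pi * nl * (A + B * cos_phi * np.sin(alpha) * np.tan(beta)) * intensity

print(oren_nayar(n=[0, 0, 1], l=[0, 0.6, 0.8], v=[0, 0, 1], sigma=0.5))
```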
  • S205 The electronic device inputs the grid data into the specular reflection model and outputs the specular layer.
  • the specular reflection model uses the GGX reflection model; its input data is consistent with the input data of the diffuse reflection model, and its output data is likewise the gray value of each pixel. The output of the GGX reflection model is called the highlight layer.
  • the parameters of the GGX reflection model belong to the light effect parameter set corresponding to the first light effect template, and are determined by the light effect template selected in S104.
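  • similarly, a common published form of the GGX specular term can be sketched as follows; the Smith geometry factor, Schlick Fresnel and the parameter values are assumptions, since the application does not state which GGX variant is used:

```python
import numpy as np

# Minimal per-pixel sketch of a GGX (Trowbridge-Reitz) specular term.

def ggx_specular(n, l, v, roughness, f0=0.04, intensity=1.0):
    n, l, v = (np.asarray(x, dtype=float) for x in (n, l, v))
    h = (l + v) / np.linalg.norm(l + v)                    # half vector
    nl = np.clip(np.dot(n, l), 1e-4, 1.0)
    nv = np.clip(np.dot(n, v), 1e-4, 1.0)
    nh = np.clip(np.dot(n, h), 0.0, 1.0)
    vh = np.clip(np.dot(v, h), 0.0, 1.0)
    a2 = (roughness * roughness) ** 2
    d = a2 / (np.pi * (nh * nh * (a2 - 1.0) + 1.0) ** 2)   # normal distribution term
    k = (roughness + 1.0) ** 2 / 8.0
    g = (nl / (nl * (1 - k) + k)) * (nv / (nv * (1 - k) + k))  # geometry/shadowing term
    f = f0 + (1.0 - f0) * (1.0 - vh) ** 5                   # Fresnel (Schlick) term
    return intensity * d * g * f / (4.0 * nl * nv)

print(ggx_specular(n=[0, 0, 1], l=[0, 0.6, 0.8], v=[0, 0, 1], roughness=0.3))
```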
  • S206 The electronic device calculates whether each grid is occluded, and if it is occluded, performs shadow rendering on the grid and outputs a shadow layer.
  • whether each grid is occluded can be calculated according to the light source direction and the grid data. If a grid is occluded, the gray value of the pixel corresponding to the grid is set to the lowest value; if it is not occluded, the gray value of the pixel corresponding to the grid is set to the highest value. The gray value of each pixel output after shadow rendering is called the shadow layer.
  • the highest gray value can be determined by the gray level of the picture. In the embodiment of the present application, the gray level of the picture is 2, the highest gray value is 1, and the lowest gray value is 0.
  • the occlusion relationship of each grid is calculated according to the actual light direction in the identified photographed scene, and the gray value of the pixel corresponding to the grid is set according to the occlusion relationship, which can increase the shadow of strong sense of reality effect.
  • the gray level of each pixel output in the above S204 and S205 may be 256, and the gray value of each pixel output is in the range [0,1], that is, gray values in the range [0,255] are normalized to gray values in [0,1], so that the gray value range of each pixel output in S204 and S205 is consistent with the gray value range of each pixel output in S206, which is convenient for the superimposed fusion of the three layers (diffuse reflection layer, highlight layer, and shadow layer) in S207.
  • the sequence of the above S204, S205, and S206 is not limited.
  • S207 The electronic device superimposes and merges the diffuse reflection layer, the highlight layer, and the shadow layer, and outputs a face light rendering result according to the fusion result and RGB data.
  • the diffuse reflection layer output in S204, the highlight layer output in S205, and the shadow layer output in S206 are superimposed and fused, that is, the gray values of the pixels at the same position in each layer are weighted and summed to obtain the gray value of each pixel after superimposed fusion.
  • the weight of the gray value of the pixels of each layer is the layer fusion parameter.
  • the layer fusion parameter belongs to the light effect parameter set corresponding to the first light effect template and is determined by the light effect template selected in S104. The gray value of each pixel after superimposed fusion is multiplied by the pixel value of that pixel to obtain the pixel value of each pixel after face light effect rendering, which is the rendering result of the face light effect.
  • the pixel value range of each pixel in the embodiment of the present application may be [0,255].
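  • the weighted superposition and the final multiplication with the RGB pixel values described in S207 can be sketched as follows; the layer fusion weights used here are illustrative and are not taken from any light effect template:

```python
import numpy as np

# Minimal sketch of S207: weighted sum of the diffuse, highlight and shadow
# layers (gray values in [0, 1]), then multiplication with the RGB pixel values.

def fuse_face_light(diffuse, highlight, shadow, rgb, weights=(0.6, 0.2, 0.2)):
    """diffuse, highlight, shadow: H x W gray layers in [0, 1];
    rgb: H x W x 3 picture with pixel values in [0, 255]."""
    w_d, w_h, w_s = weights
    fused_gray = w_d * diffuse + w_h * highlight + w_s * shadow      # H x W
    rendered = rgb.astype(np.float32) * fused_gray[..., None]        # per-pixel scale
    return np.clip(rendered, 0, 255).astype(np.uint8)

h, w = 4, 4
rgb = np.full((h, w, 3), 200, dtype=np.uint8)
out = fuse_face_light(np.full((h, w), 0.9), np.full((h, w), 0.5),
                      np.ones((h, w)), rgb)
print(out[0, 0])  # each channel scaled by 0.6*0.9 + 0.2*0.5 + 0.2*1.0 = 0.84
```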
  • in the embodiment of the present application, the virtual light source is placed at a light source position determined according to the recognized light direction, so that the light effect applied later does not conflict with the original light of the picture; in addition, the occlusion relationship of each grid is calculated according to the intelligently recognized light direction, and the gray value of the pixel corresponding to the grid is set according to the occlusion relationship, so as to render the shadow caused by occlusion, especially the shadow cast by the light at the eye socket and nose, which greatly enhances the stereoscopic effect of the face.
  • the overall light effect rendering may specifically include the following steps:
  • Stage one (S301): Gaussian blur.
  • S301 The electronic device performs Gaussian blur on the background part of the RGB image.
  • the RGB picture is a picture obtained based on RGB data acquired by the front camera 193.
  • the pixel value of each pixel in the background part is weighted and averaged with the pixel values of neighboring pixels to calculate the pixel value of the pixel after Gaussian blur.
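  • the background blur in S301 can be sketched as a weighted average of each background pixel with its neighbours under a small Gaussian kernel; the kernel size, sigma and the mask convention are assumptions:

```python
import numpy as np

# Minimal sketch of S301: blur only the background pixels of the RGB picture,
# using the portrait segmentation mask.

def gaussian_kernel(size=5, sigma=1.5):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur_background(rgb, portrait_mask, size=5, sigma=1.5):
    """rgb: H x W x 3; portrait_mask: H x W bool, True for portrait pixels."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(rgb.astype(np.float32), ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    blurred = np.zeros_like(rgb, dtype=np.float32)
    h, w = rgb.shape[:2]
    for dy in range(size):
        for dx in range(size):
            blurred += k[dy, dx] * padded[dy:dy + h, dx:dx + w]
    out = rgb.copy()
    out[~portrait_mask] = np.clip(blurred[~portrait_mask], 0, 255).astype(rgb.dtype)
    return out

rgb = np.random.randint(0, 256, (6, 6, 3), dtype=np.uint8)
mask = np.zeros((6, 6), dtype=bool); mask[2:4, 2:4] = True   # portrait pixels kept sharp
print(blur_background(rgb, mask).shape)
```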
  • Stage two (S302-S306): Calculate the projection texture layer of the portrait and the projection texture layer of the background separately.
  • S302 The electronic device calculates the texture coordinates of each grid vertex according to the texture pattern projection direction and the portrait grid.
  • the position coordinates of the texture pattern projection are known, the direction of the projection is known, and the projection matrix can be calculated.
  • the projection matrix is the connection matrix between the space coordinate system where the portrait grid is located and the space coordinate system where the texture pattern is projected.
  • the space coordinate system where the portrait grid is located can take the center of the portrait grid as the origin of the coordinate system, the horizontal direction to the right is the positive x-axis direction, the horizontal forward is the positive y-axis direction, and the vertical upward is the positive z-axis direction.
  • the spatial coordinate system where the texture pattern is projected takes this position as the origin of the coordinate system, and the x-axis, y-axis, and z-axis are parallel to the x-axis, y-axis, and z-axis of the portrait grid, respectively.
  • the projection pattern projected on the portrait grid can be determined according to the stretch ratio of the projection texture on the x-axis and y-axis, and the pixel value of the projection texture.
  • the coordinate position of the mesh vertex in the projection pattern is the texture coordinate.
  • the projection direction of the texture pattern, the position coordinates of the projection of the texture pattern, the stretching ratio of the projected texture on the x-axis and y-axis, and the pixel value of the projected texture belong to the light effect parameter set, which is determined by the light effect template selected in S104.
  • S303 The electronic device extracts the pixel value of the corresponding texture pattern according to the texture coordinates of each mesh vertex, and outputs the projected texture layer of the portrait.
  • the coordinate position of each grid vertex in the projection pattern is known, and the projection pattern projected onto the portrait grid is known, so the pixel value of the texture pattern corresponding to each grid vertex can be extracted to obtain the pixel values of the texture pattern corresponding to all the grids in the portrait grid, which is called the projected texture layer of the portrait.
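  • the computation of texture coordinates and the extraction of texture pixel values in S302-S303 can be sketched as a planar projection; the projector pose, stretch ratios and sample texture below are illustrative assumptions:

```python
import numpy as np

# Minimal sketch: project each portrait grid vertex into the coordinate system
# of the texture projector, scale by the x/y stretch ratios to obtain texture
# coordinates, and sample the texture pattern at those coordinates.

def texture_coords(vertices, proj_origin, proj_x, proj_y, stretch=(1.0, 1.0)):
    """vertices: N x 3 world positions; proj_x/proj_y: unit axes of the
    projection plane; returns N x 2 texture coordinates (nominally in [0, 1])."""
    rel = vertices - proj_origin                      # into projector space
    u = rel @ proj_x * stretch[0] + 0.5               # centre the pattern
    v = rel @ proj_y * stretch[1] + 0.5
    return np.stack([u, v], axis=1)

def sample_texture(texture, uv):
    """texture: Ht x Wt (or Ht x Wt x 3); uv: N x 2; nearest-neighbour sampling."""
    ht, wt = texture.shape[:2]
    x = np.clip((uv[:, 0] * (wt - 1)).round().astype(int), 0, wt - 1)
    y = np.clip((uv[:, 1] * (ht - 1)).round().astype(int), 0, ht - 1)
    return texture[y, x]

vertices = np.array([[0.0, 0.0, 0.5], [0.1, 0.2, 0.4]])
uv = texture_coords(vertices, proj_origin=np.array([0.0, 0.0, 2.0]),
                    proj_x=np.array([1.0, 0.0, 0.0]), proj_y=np.array([0.0, 1.0, 0.0]))
texture = np.linspace(0, 255, 64).reshape(8, 8)
print(sample_texture(texture, uv))   # projected texture layer values at the vertices
```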
  • S304 The electronic device sets a projection plane perpendicular to the location of the portrait in the background of the portrait.
  • a virtual projection plane is set in the background part.
  • the virtual projection plane is perpendicular to the ground on which the portrait is located.
  • S305 The electronic device calculates the texture coordinates of the pixels in the projection plane according to the projection direction of the texture pattern and the projection plane.
  • the determination of the projection pattern of the projection plane is similar to the determination of the projection pattern on the portrait grid, which will not be repeated here.
  • the texture coordinate of a pixel in the projection plane is the coordinate position of the pixel in the projection pattern.
  • S306 The electronic device extracts the pixel value of the corresponding texture pattern according to the texture coordinates of the projection plane, and outputs the projected texture layer of the background.
  • the coordinate position of each pixel point in the projection pattern is known, and the projection pattern projected onto the projection plane is known, so the pixel value of the texture pattern corresponding to each pixel point can be extracted to obtain the pixel values of the texture pattern corresponding to all the pixel points on the projection plane, which is called the projected texture layer of the background.
  • Stage three (S307-S308): Superimposed fusion.
  • S307 The electronic device superimposes and merges the projection texture layer of the portrait, the rendering effect of the face light effect and the RGB image.
  • the pixel values of the pixels at the same position in the projected texture layer of the portrait in S303, the rendering result of the face light effect in S207, and the portrait part of the RGB picture obtained by the front camera 193 are weighted and summed, so that the pixel value of each pixel of the portrait part after superimposed fusion can be obtained.
  • the weights of the pixel values of the pixels in the projected texture layer of the portrait, the rendering result of the face light effect, and the RGB picture obtained by the front camera 193 belong to the light effect parameter set corresponding to the first light effect template, and are determined by the light effect template selected in S104.
  • S308 The electronic device overlays and merges the background projection texture layer and the Gaussian blurred background.
  • the pixel values of the pixels at the same position in the projected texture layer of the background in S306 and the background part after Gaussian blur in S301 are weighted and summed to obtain the pixel value of each pixel of the background part after superimposed fusion.
  • the weights of the pixel values of the pixels in the background projection texture layer and the Gaussian blurred background belong to the light effect parameter set corresponding to the first light effect template, which is determined by the light effect template selected in S104.
  • in the background part, the projected texture layer of the background and the Gaussian-blurred background are superimposed and fused, so that the picture after light effect rendering has a light effect background while retaining traces of the original background, which increases the photorealism.
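  • the weighted fusion of the portrait part (S307) and the background part (S308) can be sketched as follows; the weights are illustrative stand-ins for the layer fusion parameters of the light effect template:

```python
import numpy as np

# Minimal sketch of S307-S308: per-pixel weighted fusion of the portrait part
# (projected texture layer, face light effect result, original RGB) and the
# background part (projected texture layer, Gaussian-blurred background).

def fuse_picture(tex_portrait, face_lit, rgb, tex_bg, blurred_bg, portrait_mask,
                 w_portrait=(0.2, 0.5, 0.3), w_background=(0.3, 0.7)):
    """All image arguments are H x W x 3 float arrays in [0, 255];
    portrait_mask is H x W bool (True = portrait pixel)."""
    a, b, c = w_portrait
    portrait = a * tex_portrait + b * face_lit + c * rgb      # S307
    d, e = w_background
    background = d * tex_bg + e * blurred_bg                  # S308
    out = np.where(portrait_mask[..., None], portrait, background)
    return np.clip(out, 0, 255).astype(np.uint8)

shape = (4, 4, 3)
imgs = [np.full(shape, v, dtype=np.float32) for v in (50, 180, 120, 60, 90)]
mask = np.zeros((4, 4), dtype=bool); mask[1:3, 1:3] = True
fused = fuse_picture(*imgs, portrait_mask=mask)
print(fused[0, 0], fused[1, 1])   # background pixel vs. portrait pixel
```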
  • Stage four (S309): Post-processing of the picture.
  • S309 The electronic device performs post-processing on the superimposed and fused picture.
  • the superimposed and fused picture includes the fusion result of the portrait part in S307 and the fusion result of the background part in S308 to form an entire picture.
  • the post-processing may include processing the hue, contrast, and filters of the entire picture.
  • Tone processing mainly adjusts the overall color tendency of the whole picture by adjusting the H (hue) value.
  • Contrast processing mainly adjusts the ratio of the brightness of the brightest part to that of the darkest part of the whole picture.
  • Filter processing calculates the pixel value of each pixel after filtering by applying a matrix to the pixel value of each pixel in the entire picture, so as to adjust the overall effect of the entire picture.
  • The H value in the above tone processing, the brightness ratio of the brightest part to the darkest part in the contrast processing, and the matrix in the filter processing all belong to the light effect parameter set corresponding to the first light effect template, and are determined by the light effect template selected in S104. A sketch of this post-processing is given below.
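  • The following is an illustrative sketch of such post-processing, assuming an 8-bit RGB picture and using OpenCV only for the colour-space conversion; the concrete hue shift, contrast gain and filter matrix are hypothetical, since their real values come from the light effect parameter set.

```python
import numpy as np
import cv2  # assumed available for the RGB/HSV conversion

def post_process(img, hue_shift, contrast, filter_matrix):
    """Hue, contrast and filter post-processing of the fused picture."""
    # Tone: shift the overall colour tendency by adjusting the H value
    # (OpenCV stores hue in the range 0..179).
    hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV).astype(np.int16)
    hsv[..., 0] = (hsv[..., 0] + hue_shift) % 180
    img = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)

    # Contrast: stretch brightness around the mid-grey level.
    img = np.clip((img.astype(np.float32) - 128.0) * contrast + 128.0, 0, 255)

    # Filter: mix the RGB channels of every pixel with a 3x3 matrix.
    img = np.clip(img @ np.asarray(filter_matrix, dtype=np.float32).T, 0, 255)
    return img.astype(np.uint8)
```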
  • The portrait part and the background part can be rendered separately, and the real depth data collected by the 3D sensing module 196 can be used to make the light effect follow the relief of the portrait, increasing the realism and stereoscopic effect of the picture.
  • the result of the face light effect rendering output in S207 is the pixel value of each pixel of the first picture. That is to say, the first picture can be obtained after the light effect of the face is rendered.
  • Alternatively, the process of light effect rendering may include face light effect rendering and overall light effect rendering; in that case, after the face light effect rendering, the overall light effect rendering continues to calculate the pixel value of each pixel of the first picture. That is to say, the first picture is obtained after the overall light effect rendering.
  • the display screen 194 displays the user interface 20.
  • the user interface 20 displays application icons of multiple applications, including the camera application icon 201.
  • the touch sensor 180K detects that the user clicks the camera application icon 201.
  • the touch sensor 180K reports the event that the user clicks the camera application icon 201 to the processor 110.
  • the processor 110 determines an event that the user clicks on the camera application icon 201, and issues an instruction to the display screen 194 to display the user interface 30.
  • the display screen 194 displays the user interface 30 in response to the instruction issued by the processor 110.
  • the processor 110 determines an event that the user clicks on the camera application icon 201, and issues an instruction to the camera 193 to turn on the camera 193.
  • the camera 193 turns on the rear camera in response to an instruction issued by the processor 110, and collects RGB data of the picture to be taken in real time.
  • the touch sensor 180K detects that the user clicks on the control 306.
  • the touch sensor 180K reports the event that the user clicks on the control 306 to the processor 110.
  • the processor 110 determines an event that the user clicks on the control 306 and issues an instruction to the camera 193 to turn on the front camera.
  • the camera 193 turns on the front camera in response to an instruction issued by the processor 110.
  • the front camera can collect RGB data of the picture to be taken in real time, and save the RGB data of the picture to be taken to the internal memory 121.
  • the RGB data of the picture to be captured collected in real time may carry a time stamp, so that the processor 110 performs time alignment processing on the RGB data and the depth data in subsequent processing.
  • the touch sensor 180K detects that the user clicks the icon 303A.
  • the touch sensor 180K reports the event that the user clicks the icon 303A to the processor 110.
  • the processor 110 determines the event that the user clicks the icon 303A, and starts the first shooting mode.
  • the processor 110 adjusts shooting parameters such as aperture size, shutter speed, and sensitivity.
  • The processor 110 can also determine whether there is a human face in the framing frame 301 and, if so, further determine whether the human face meets the requirements, as described in S101, which will not be repeated here. Next, the detection method of the face angle is introduced in detail.
  • The above-mentioned face angle may be the face angle in three-dimensional space.
  • The first threshold includes three data, namely the pitch angle (pitch) of rotation around the x-axis, the yaw angle (yaw) of rotation around the y-axis, and the roll angle (roll) of rotation around the z-axis in the standard three-dimensional coordinate system, where the horizontal rightward direction is the positive direction of the x-axis, the horizontal forward direction is the positive direction of the y-axis, and the vertical upward direction is the positive direction of the z-axis.
  • The face angle that meets the requirements may be, for example, that the pitch angle is less than or equal to 30°, the yaw angle is less than or equal to 30°, and the roll angle is less than or equal to 35°.
  • Face angle detection can be achieved by building a three-dimensional model of the face. Specifically, the three-dimensional model of the face to be detected can be established from the depth data, and the standard three-dimensional model stored in the internal memory 121 is then rotated until it matches the three-dimensional model of the face to be detected; the angle through which the standard three-dimensional model is rotated is then the angle of the face to be detected.
  • the above method for detecting the face angle is only an exemplary description, and there may be other detection methods in a specific implementation, which is not limited in the embodiments of the present application.
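  • As a minimal sketch of checking a detected face angle against the exemplary first threshold above (illustrative only; the function and its default limits are not part of the patent text):

```python
def face_angle_ok(pitch, yaw, roll,
                  max_pitch=30.0, max_yaw=30.0, max_roll=35.0):
    """Return True if the detected face angle satisfies the first threshold.

    pitch, yaw, roll: rotation of the face, in degrees, about the x, y and z
    axes of the standard three-dimensional coordinate system.
    """
    return (abs(pitch) <= max_pitch and
            abs(yaw) <= max_yaw and
            abs(roll) <= max_roll)
```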
  • the second threshold may be, for example, 4%, 10%, or the like.
  • the first threshold and the second threshold are not limited to the above listed values, and may be other values in a specific implementation, which is not limited in this embodiment of the present application.
  • The processor 110 issues an instruction to the display screen 194 to update the display state of the icon 303A.
  • The display screen 194 updates the display state of the icon 303A in response to the instruction issued by the processor 110.
  • the processor 110 sends an instruction to collect depth data to the 3D sensing module 196.
  • the 3D sensing module 196 collects depth data in real time in response to instructions sent by the processor 110.
  • the 3D sensing module 196 saves the depth data to the internal memory 121.
  • the depth data collected in real time may carry a time stamp, so that the processor 110 performs time alignment processing on the RGB data and the depth data in subsequent processing.
  • the touch sensor 180K detects that the user clicks on the control 306.
  • the touch sensor 180K reports the event that the user clicks on the control 306 to the processor 110.
  • the processor 110 determines the event that the user clicks on the control 306 and sends an instruction to the display screen 194 to display the option bar 307 of the light effect template.
  • the display screen 194 displays the light effect template option bar 307 in response to the instruction sent by the processor 110.
  • the processor 110 reads the RGB data and depth data of the picture to be taken from the internal memory 121.
  • The recognition method of the shooting scene is as described in S102.
  • Next, the identification method of the light direction is introduced in detail.
  • the processor 110 may input the RGB data of the face part to the second model, and output the result of the lighting direction of the face.
  • the second model is trained from a large number of RGB data of face parts with known light directions.
  • The direction of the face light includes three data in three-dimensional space: the angle α with the xoy plane, the angle β with the xoz plane, and the angle γ with the yoz plane, where the origin o is the position of the nose of the face, horizontally to the right is the positive direction of the x-axis, horizontally forward is the positive direction of the y-axis, and vertically upward is the positive direction of the z-axis; the output result of the second model is then (α, β, γ). One way of computing such plane angles from a direction vector is sketched below.
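  • The following is an illustrative sketch (not the second model itself) of converting a light direction vector into the three plane angles described above; the coordinate conventions follow the text, and the function name is hypothetical.

```python
import numpy as np

def direction_to_plane_angles(direction):
    """Convert a direction vector (dx, dy, dz) to (alpha, beta, gamma) in degrees.

    alpha: angle with the xoy plane, beta: angle with the xoz plane,
    gamma: angle with the yoz plane (origin at the nose, x right, y forward, z up).
    """
    v = np.asarray(direction, dtype=np.float64)
    n = np.linalg.norm(v)
    if n == 0:
        raise ValueError("direction must be a non-zero vector")
    # The angle between a vector and a plane equals
    # arcsin(|component along the plane normal| / |vector|).
    alpha = np.degrees(np.arcsin(abs(v[2]) / n))  # normal of xoy is the z-axis
    beta = np.degrees(np.arcsin(abs(v[1]) / n))   # normal of xoz is the y-axis
    gamma = np.degrees(np.arcsin(abs(v[0]) / n))  # normal of yoz is the x-axis
    return alpha, beta, gamma
```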
  • the virtual light source can be placed in the light source position determined according to the lighting direction during the subsequent rendering of the light effect, so that the light effect applied later does not conflict with the original lighting of the picture;
  • the position of the virtual light source can be displayed on the interface 40.
  • the user can change the picture effect by adjusting the position of the virtual light source to improve the interaction between the user and the electronic device 10;
  • the position of the virtual light source is displayed to enhance the fun during the photographing process.
  • the processor 110 saves the light direction recognition result to the internal memory 121, so that the result can be directly called in subsequent processing.
  • the processor 110 reads the mapping relationship table between the shooting scene and the matching light effect template from the internal memory 121.
  • the processor 110 determines a light effect template matching the shooting scene (assuming light effect template 4).
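  • Conceptually, the mapping relationship table behaves like a simple lookup; the scene names and template assignments below are hypothetical placeholders, since the actual table stored in the internal memory 121 is not given in the text.

```python
# Hypothetical mapping relationship table between shooting scenes and templates.
SCENE_TO_TEMPLATE = {
    "indoor": "light effect template 4",
    "outdoor daylight": "light effect template 2",
    "night": "light effect template 1",
}

def match_light_effect_template(shooting_scene, default="light effect template 1"):
    """Return the light effect template matched to the recognized shooting scene."""
    return SCENE_TO_TEMPLATE.get(shooting_scene, default)
```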
  • The processor 110 sends an instruction to the display screen 194 to update the display of the light effect template option bar 307, in which the display state of the light effect template option matching the shooting scene is the first display state.
  • the display screen 194 updates the display light effect template option bar 307 in response to the instruction issued by the processor 110.
  • the touch sensor 180K detects that the user clicks the first light effect template option.
  • the touch sensor 180K reports the event that the user clicks the first light effect template option to the processor 110.
  • The processor 110 determines the event that the user clicks the first light effect template option, and sends an instruction to the display screen 194 to update the display state of the first light effect template option.
  • the display screen 194 updates the display state of the first light effect template option in response to the instruction of the processor 110.
  • the touch sensor 180K detects that the user clicks on the shooting control 302.
  • the touch sensor 180K reports the event that the user clicks on the shooting control 302 to the processor 110.
  • the processor 110 determines the event that the user clicks on the shooting control 302, and reads the RGB data of the picture to be shot stored in the internal memory 121.
  • the processor 110 determines the event that the user clicks on the shooting control 302, and reads the depth data stored in the internal memory 121.
  • The time stamp of the depth data is consistent with the time stamp of the RGB data of the picture to be taken read in step 38, so as to ensure the time alignment of the RGB data and the depth data.
  • the processor 110 aligns the RGB data with the depth data coordinates to obtain RGBD data with time and coordinates aligned.
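  • The following is a rough sketch of pairing RGB and depth frames by their time stamps to form RGBD data; the data layout and the tolerance value are assumptions made for illustration only.

```python
import numpy as np

def align_rgbd(rgb_frames, depth_frames, max_dt=0.02):
    """Pair each RGB frame with the depth frame whose time stamp is closest.

    rgb_frames, depth_frames: lists of (timestamp, data) tuples, where data is
    an (H, W, 3) RGB array or an (H, W) depth map of the same resolution.
    Returns a list of (H, W, 4) RGBD arrays for frames paired within max_dt.
    """
    rgbd = []
    for t_rgb, rgb in rgb_frames:
        # Time alignment: pick the depth frame with the nearest time stamp.
        t_depth, depth = min(depth_frames, key=lambda f: abs(f[0] - t_rgb))
        if abs(t_depth - t_rgb) > max_dt:
            continue
        # Coordinate alignment is assumed already done; stack the two maps.
        rgbd.append(np.dstack([rgb, depth]))
    return rgbd
```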
  • FIG. 21 to FIG. 23 describe in detail the cooperation relationship of the various components in the electronic device 10 during light effect editing.
  • the display screen 194 displays the first picture in the control 305.
  • the touch sensor 180K detects that the user clicks on the control 305.
  • the touch sensor 180K reports the event that the user clicks the control 305 to the processor 110.
  • the processor 110 determines an event that the user clicks on the control 305, and sends an instruction to display the user interface 40 to the display screen 194.
  • the display screen 194 displays the user interface 40 in response to the instruction sent by the processor 110.
  • the touch sensor 180K detects that the user clicks on the control 402.
  • the touch sensor 180K reports the event that the user clicks on the control 402 to the processor 110.
  • the processor 110 determines an event that the user clicks on the control 402, and sends an instruction to update the display user interface 40 to the display screen 194.
  • the display screen 194 updates and displays the user interface 40 in response to the instruction sent by the processor 110.
  • the updated user interface 40 may include: a light source indicator 403, a light intensity adjuster 404, a light effect template option bar 405, a cancel control 406, a save control 407, and the like.
  • The touch sensor 180K detects that the user slides the light source indicator 403.
  • The touch sensor 180K reports the event that the user slides the light source indicator 403 to the processor 110.
  • the processor 110 determines an event that the user slides the light source indicator 403, and sends an instruction to update the display light source indicator 403 to the display screen 194.
  • the display screen 194 updates the display light source indicator 403 in response to the instruction sent by the processor 110.
  • the processor 110 determines a new lighting direction, and determines a picture in the picture content display area 401 according to the new lighting direction.
  • the processor 110 sends an instruction to the display screen 194 to update the picture displayed in the picture content display area 401.
  • the display screen 194 updates the pictures in the picture content display area 401 in response to the instruction sent by the processor 110.
  • the user inputs a sliding operation on the light source indicator 403 to move the light source indicator 403 from (x1, y1) to (x2, y2), as shown in FIG. 23.
  • the touch sensor 180K detects the user's sliding operation on the light source indicator 403, and reports an event (the user's sliding operation on the light source indicator 403) to the processor 110.
  • The processor 110 calculates, according to the new light direction, the RGB data of the picture in the picture content display area 401, that is, the pixel value of each pixel included in the picture.
  • The display screen 194 is caused to display the light source indicator 403 at (x2, y2), and the picture displayed in the picture content display area 401 is updated. It can be understood that the user's sliding operation on the light source indicator 403 is a continuous action.
  • the electronic device 10 can update and display the light source indicator 403 and the picture in the display content display area 401 in real time.
  • The above calculation of the RGB data of the picture in the picture content display area 401 according to the new illumination direction may be to recalculate the occlusion relationship of each grid of the face in the face light effect rendering stage, reset the gray value of each grid according to the occlusion relationship, and output the shadow layer. Then, the diffuse reflection layer, the highlight layer and the shadow layer are superimposed and fused, and the face light effect rendering result is output according to the fusion result and the RGB data, as sketched below.
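  • The following is a heavily simplified, illustrative sketch of fusing the diffuse reflection, highlight and shadow layers with the face RGB data; the fusion weights and the shading formula are assumptions for illustration, not values taken from the patent.

```python
import numpy as np

def relight_face(diffuse, highlight, shadow, rgb_face, weights=(0.5, 0.2, 0.3)):
    """Fuse the diffuse, highlight and shadow layers and apply them to the face.

    diffuse, highlight: (H, W) layers in [0, 1] recomputed for the new light direction.
    shadow            : (H, W) grey-value layer in [0, 1] derived from the occlusion
                        relationship of each grid.
    rgb_face          : (H, W, 3) uint8 RGB data of the face region.
    """
    wd, wh, ws = weights
    # Superimpose the three layers into a single shading map.
    shading = wd * diffuse + wh * highlight - ws * (1.0 - shadow)
    shading = np.clip(1.0 + shading, 0.0, 2.0)[..., None]
    # Modulate the original RGB data with the shading map.
    return np.clip(rgb_face.astype(np.float32) * shading, 0, 255).astype(np.uint8)
```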
  • the touch sensor 180K detects that the user slides the light intensity adjuster 404.
  • the touch sensor 180K reports the event that the user slides the light intensity adjuster 404 to the processor 110.
  • the processor 110 determines the event that the user slides the light intensity adjuster 404, and sends an instruction to update the display light intensity adjuster 404 to the display screen 194.
  • The display screen 194 updates the display of the light intensity adjuster 404 in response to the instruction sent by the processor 110.
  • the processor 110 determines a new light intensity, and determines a picture in the picture content display area 401 according to the new light intensity.
  • the processor 110 sends an instruction to update the display in the picture content display area 401 to the display screen 194.
  • the display screen 194 updates the pictures in the picture content display area 401 in response to the instruction sent by the processor 110.
  • Likewise, the user's sliding operation on the light intensity adjuster 404 is a continuous action.
  • The electronic device 10 may update and display the light intensity adjuster 404 and the picture in the picture content display area 401 in real time.
  • the touch sensor 180K detects that the user clicks on the second light effect template option.
  • the touch sensor 180K reports the event that the user clicks on the second light effect template option to the processor 110.
  • the processor 110 determines an event that the user clicks on the second light effect template option, and sends an instruction to the display screen 194 to update and display the first light effect template option and the second light effect template option.
  • the display screen 194 updates and displays the first light effect template option and the second light effect template option in response to the instruction sent by the processor 110.
  • the processor 110 determines the pictures in the picture content display area 401 according to the light effect parameters corresponding to the second light effect template.
  • determining the picture in the picture content display area 401 is to calculate the RGB data of the picture in the picture content display area 401 according to the light effect parameter corresponding to the second light effect template.
  • the processor 110 sends an instruction to update the picture displayed in the picture content display area 401 to the display screen 194.
  • the display screen 194 updates the pictures in the picture content display area 401 in response to the instruction sent by the processor 110.
  • The sequence of the above steps 26-27 and steps 28-30 is not limited.
  • The sequence of the above steps 10-16, steps 17-23 and steps 24-30 is not limited.
  • S106 may include some or all of steps 10-16, 17-23 and 24-30, which is not limited in the embodiments of the present application.
  • the touch sensor 180K detects that the user clicks the save control 407.
  • the touch sensor 180K reports the event that the user clicks the save control 407 to the processor 110.
  • the processor 110 determines the event that the user clicks the save control 407, saves the second picture to the internal memory 121, and deletes the first picture from the internal memory.
  • The pictures whose display is updated in steps 16, 23 and 30 are the second pictures.
  • Saving the second picture to the internal memory 121 is to save the RGB data of the second picture to the internal memory 121.
  • Deleting the first picture from the internal memory 121 means deleting the RGB data of the first picture from the internal memory 121.
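  • As a minimal illustration of this save-and-replace step (file paths, encoding and the helper name are hypothetical; the text only states that the RGB data of the second picture is written to the internal memory 121 and the RGB data of the first picture is deleted from it):

```python
import os
import cv2  # assumed available for encoding the RGB data to a file

def replace_saved_picture(first_path, second_rgb, second_path):
    """Save the second picture's RGB data and delete the first picture."""
    # cv2.imwrite expects BGR channel order, so convert from RGB first.
    cv2.imwrite(second_path, cv2.cvtColor(second_rgb, cv2.COLOR_RGB2BGR))
    if os.path.exists(first_path):
        os.remove(first_path)
```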
  • the image processing method provided in this embodiment of the present application can recommend a suitable light effect template to the user according to the identified shooting scene during the photographing process, which can enable the user to quickly select a suitable light effect template, reduce user operations, and improve the use efficiency of the mobile phone.
  • recommending a suitable light effect template for the user according to the shooting scene may be implemented during the light effect editing process.
  • the electronic device 10 may enable the first shooting mode during the photographing process, receive the user's operation to select the first light effect template, and then receive the user's photographing instruction to complete the photographing process and determine the picture to be photographed.
  • After receiving the photographing instruction from the user, the electronic device 10 renders the picture to be captured according to the light effect parameters corresponding to the first light effect template to generate the first picture. Before the light effect rendering, the electronic device 10 can recognize the light direction of the human face, and perform face light effect rendering in combination with this light direction during the light effect rendering process.
  • The specific face light effect rendering process can be implemented with reference to the description of the embodiment in FIG. 12.
  • The electronic device 10 may set, in the light effect template option bar, the display state of the light effect template option matching the shooting scene to the first display state, so as to remind the user that the template corresponding to that option is suitable for the current shooting scene; this helps the user quickly identify and select the option, and effectively recommends the option to the user.
  • the electronic device 10 can identify the shooting scene before the light effect editing process and determine the light effect template matching the shooting scene.
  • the recognition process of the shooting scene is similar to the method described in S102 in the embodiment of FIG. 11, and the method of determining the light effect template matching the shooting scene is similar to the method described in S103 in the embodiment of FIG. 11, which is not repeated here.
  • the first display state is the same as the first display state in the embodiment of FIG. 6 and will not be repeated here.
  • the embodiments of the present application also provide a computer-readable storage medium. All or part of the processes in the above method embodiments may be completed by a computer program instructing relevant hardware.
  • The program may be stored in the above-mentioned computer-readable storage medium, and when the program is executed, the processes of the above method embodiments may be performed.
  • the computer readable storage medium includes: read-only memory (read-only memory, ROM) or random access memory (random access memory, RAM), magnetic disk or optical disk, and other media that can store program codes.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a dedicated computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted through the computer-readable storage medium.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device including a server, a data center, and the like integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)), or the like.
  • the modules in the device of the embodiment of the present application may be combined, divided, and deleted according to actual needs.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to an image capturing method. The method can be applied to an electronic device and enables the electronic device, when a first shooting mode is used to capture an image, to recommend a suitable light effect template to a user according to the scene in which the image is to be captured in the first shooting mode, thereby reducing user operations and improving the efficiency of the electronic device. The method may comprise: turning on a camera to capture an image of an object; displaying a first user interface, the first user interface comprising a first display area, a shooting mode list and a light effect template option bar; displaying, in the first display area, the image captured by the camera; and highlighting, in the light effect template option bar, an option of a light effect template matching the scene in which the image is to be captured, the scene being the scene corresponding to the image displayed in the first display area.
PCT/CN2018/116443 2018-11-20 2018-11-20 Procédé de traitement d'image et dispositif électronique WO2020102978A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/116443 WO2020102978A1 (fr) 2018-11-20 2018-11-20 Procédé de traitement d'image et dispositif électronique
CN201880094372.1A CN112262563B (zh) 2018-11-20 2018-11-20 图像处理方法及电子设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/116443 WO2020102978A1 (fr) 2018-11-20 2018-11-20 Procédé de traitement d'image et dispositif électronique

Publications (1)

Publication Number Publication Date
WO2020102978A1 true WO2020102978A1 (fr) 2020-05-28

Family

ID=70773103

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/116443 WO2020102978A1 (fr) 2018-11-20 2018-11-20 Procédé de traitement d'image et dispositif électronique

Country Status (2)

Country Link
CN (1) CN112262563B (fr)
WO (1) WO2020102978A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113645408B (zh) * 2021-08-12 2023-04-14 荣耀终端有限公司 拍摄方法、设备及存储介质
CN114422736B (zh) * 2022-03-28 2022-08-16 荣耀终端有限公司 一种视频处理方法、电子设备及计算机存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104580920B (zh) * 2013-10-21 2018-03-13 华为技术有限公司 一种成像处理的方法及用户终端
CN105578056A (zh) * 2016-01-27 2016-05-11 努比亚技术有限公司 拍摄的终端及方法
JP6702752B2 (ja) * 2016-02-16 2020-06-03 キヤノン株式会社 画像処理装置、撮像装置、制御方法及びプログラム
CN108540716A (zh) * 2018-03-29 2018-09-14 广东欧珀移动通信有限公司 图像处理方法、装置、电子设备及计算机可读存储介质

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1936685A (zh) * 2005-09-21 2007-03-28 索尼株式会社 摄影设备、处理信息的方法和程序
CN101945217A (zh) * 2009-07-07 2011-01-12 三星电子株式会社 拍摄设备和方法
US20120057051A1 (en) * 2010-09-03 2012-03-08 Olympus Imaging Corp. Imaging apparatus, imaging method and computer-readable recording medium
CN103533244A (zh) * 2013-10-21 2014-01-22 深圳市中兴移动通信有限公司 拍摄装置及其自动视效处理拍摄方法
CN104243822A (zh) * 2014-09-12 2014-12-24 广州三星通信技术研究有限公司 拍摄图像的方法及装置
CN104660908A (zh) * 2015-03-09 2015-05-27 深圳市中兴移动通信有限公司 拍摄装置及其拍摄模式的自动匹配方法
CN106027902A (zh) * 2016-06-24 2016-10-12 依偎科技(南昌)有限公司 一种拍照方法及移动终端
CN108734754A (zh) * 2018-05-28 2018-11-02 北京小米移动软件有限公司 图像处理方法及装置

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112866773A (zh) * 2020-08-21 2021-05-28 海信视像科技股份有限公司 一种显示设备及多人场景下摄像头追踪方法
CN112866773B (zh) * 2020-08-21 2023-09-26 海信视像科技股份有限公司 一种显示设备及多人场景下摄像头追踪方法
CN112287790A (zh) * 2020-10-20 2021-01-29 北京字跳网络技术有限公司 影像处理方法、装置、存储介质及电子设备
CN114979457A (zh) * 2021-02-26 2022-08-30 华为技术有限公司 一种图像处理方法及相关装置
CN114979457B (zh) * 2021-02-26 2023-04-07 华为技术有限公司 一种图像处理方法及相关装置
WO2023142690A1 (fr) * 2022-01-25 2023-08-03 华为技术有限公司 Procédé de prise de vue de restauration et dispositif électronique
CN115334239A (zh) * 2022-08-10 2022-11-11 青岛海信移动通信技术股份有限公司 前后摄像头拍照融合的方法、终端设备和存储介质
CN115334239B (zh) * 2022-08-10 2023-12-15 青岛海信移动通信技术有限公司 前后摄像头拍照融合的方法、终端设备和存储介质
WO2024055823A1 (fr) * 2022-09-16 2024-03-21 荣耀终端有限公司 Procédé et dispositif d'interaction d'interface d'application de caméra
WO2024082863A1 (fr) * 2022-10-21 2024-04-25 荣耀终端有限公司 Procédé de traitement d'images et dispositif électronique
CN115439616A (zh) * 2022-11-07 2022-12-06 成都索贝数码科技股份有限公司 基于多对象图像α叠加的异构对象表征方法
CN115439616B (zh) * 2022-11-07 2023-02-14 成都索贝数码科技股份有限公司 基于多对象图像α叠加的异构对象表征方法

Also Published As

Publication number Publication date
CN112262563B (zh) 2022-07-22
CN112262563A (zh) 2021-01-22

Similar Documents

Publication Publication Date Title
WO2020102978A1 (fr) Procédé de traitement d'image et dispositif électronique
WO2020125410A1 (fr) Procédé de traitement d'image et dispositif électronique
KR102535607B1 (ko) 사진 촬영 중 이미지를 표시하는 방법 및 전자 장치
WO2020134891A1 (fr) Procédé de prévisualisation de photo pour dispositif électronique, interface graphique d'utilisateur et dispositif électronique
WO2021169394A1 (fr) Procédé d'embellissement d'une image du corps humain sur la base de la profondeur et dispositif électronique
WO2020029306A1 (fr) Procédé de capture d'image et dispositif électronique
WO2022017261A1 (fr) Procédé de synthèse d'image et dispositif électronique
CN113170037B (zh) 一种拍摄长曝光图像的方法和电子设备
CN112712470A (zh) 一种图像增强方法及装置
CN113935898A (zh) 图像处理方法、系统、电子设备及计算机可读存储介质
CN110138999B (zh) 一种用于移动终端的证件扫描方法及装置
CN113973189B (zh) 显示内容的切换方法、装置、终端及存储介质
CN113810603B (zh) 点光源图像检测方法和电子设备
US20240153209A1 (en) Object Reconstruction Method and Related Device
CN113542580B (zh) 去除眼镜光斑的方法、装置及电子设备
CN112150499A (zh) 图像处理方法及相关装置
CN114756184A (zh) 协同显示方法、终端设备及计算机可读存储介质
CN115964231A (zh) 基于负载模型的评估方法和装置
CN114444000A (zh) 页面布局文件的生成方法、装置、电子设备以及可读存储介质
WO2023000746A1 (fr) Procédé de traitement vidéo à réalité augmentée et dispositif électronique
CN114283195B (zh) 生成动态图像的方法、电子设备及可读存储介质
WO2021204103A1 (fr) Procédé de prévisualisation d'images, dispositif électronique et support de stockage
CN113495733A (zh) 主题包安装方法、装置、电子设备及计算机可读存储介质
WO2024114257A1 (fr) Procédé de génération d'effet dynamique de transition et dispositif électronique
CN114140314A (zh) 一种人脸图像处理方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18940727

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18940727

Country of ref document: EP

Kind code of ref document: A1