WO2021077911A1 - Image flood processing method and apparatus, and storage medium (图像泛光处理方法及装置、存储介质) - Google Patents

Image flood processing method and apparatus, and storage medium

Info

Publication number: WO2021077911A1 (PCT/CN2020/113088)
Authority: WIPO (PCT)
Prior art keywords: image, brightness area, brightness, class, processing
Application number: PCT/CN2020/113088
Other languages: English (en), French (fr)
Inventors: 邓一鑫, 魏冬, 杨磊, 李桢
Original Assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP20879379.4A (published as EP4036842A4)
Publication of WO2021077911A1
Priority to US17/726,674 (published as US20220245778A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/70 Denoising; Smoothing
    • G06T 5/73 Deblurring; Sharpening
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20172 Image enhancement details
    • G06T 2207/20208 High dynamic range [HDR] image processing
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Definitions

  • This application relates to the field of image processing technology, and in particular to an image flood (bloom) processing method and apparatus, and a storage medium.
  • Flooding is a common optical phenomenon, generally referring to the blooming that occurs when a physical camera shoots objects of high brightness. Performing flood processing on an image can visually improve its contrast, enhance its expressiveness, and achieve a better rendering effect.
  • Image flood processing has been widely used in fields such as three-dimensional games and animation production.
  • The process by which an electronic device with the image flood processing function performs flood processing on an image is as follows: first, brightness filtering is performed on the original image, removing pixels whose values are below the brightness threshold to obtain a filtered image. Reduced pixel sampling is then applied to the filtered image at reduction magnifications of 1/4, 1/8, 1/16, and 1/32, yielding four low-resolution images whose resolutions are 1/4, 1/8, 1/16, and 1/32 of the original. Gaussian blurring is then performed on each of the four low-resolution images. Finally, image fusion is performed on the original image and the four Gaussian-blurred low-resolution images to obtain the flooded image of the original image.
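  • For concreteness, this conventional pipeline can be sketched in Python with OpenCV as follows. This is a minimal illustration, not code from this application; the brightness threshold, Gaussian kernel size, and simple additive fusion are assumptions made for the sketch.

```python
import cv2
import numpy as np

def conventional_flood(original: np.ndarray, threshold: int = 200) -> np.ndarray:
    """Minimal sketch of the conventional flood (bloom) pipeline."""
    # 1) Brightness filtering: remove pixels below the brightness threshold.
    luma = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
    mask = (luma >= threshold).astype(original.dtype)
    filtered = original * mask[..., None]

    # 2) Reduced pixel sampling at magnifications 1/4, 1/8, 1/16, and 1/32.
    low_res = [cv2.resize(filtered, None, fx=k, fy=k) for k in (1/4, 1/8, 1/16, 1/32)]

    # 3) Gaussian blurring of each low-resolution image.
    blurred = [cv2.GaussianBlur(img, (15, 15), 0) for img in low_res]

    # 4) Image fusion: upsample each blurred image and add it to the original.
    h, w = original.shape[:2]
    out = original.astype(np.float32)
    for img in blurred:
        out += cv2.resize(img, (w, h)).astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```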
  • The original image includes two kinds of brightness areas: small-size brightness areas and large-size brightness areas.
  • Gaussian blurring of the low-resolution images at 1/4 and 1/8 of the original resolution implements flood processing for the small-size brightness areas of the original image, so that those areas achieve the flood effect; Gaussian blurring of the low-resolution images at 1/16 and 1/32 of the original resolution mainly implements flood processing for the large-size brightness areas, so that those areas achieve the flood effect.
  • An electronic device with the image flood processing function performs the above flood processing separately on each frame of the original image. Because one pass of flood processing includes multiple Gaussian blur operations for brightness areas of different sizes, and Gaussian blurring has relatively high computational complexity, an electronic device with the image flood processing function enabled currently carries a large load and high power consumption during operation.
  • This application provides an image flood processing method and apparatus, and a storage medium, to reduce the load and power consumption of an electronic device that has the image flood processing function enabled.
  • In a first aspect, an image flood processing method is provided. The method includes: an electronic device acquires a first brightness area class of a first image, the first brightness area class including one or more brightness areas of the first image; after determining that the first brightness area class of the first image is the same as a target brightness area class of a second image that has undergone flood processing, the electronic device obtains a first intermediate image, namely the image obtained after Gaussian blurring is performed on the target brightness area class of the second image; the electronic device then generates a flooded image of the first image based on the first image and the first intermediate image.
  • The object of flood processing may be an image rendered from a three-dimensional scene, or an image directly generated when a physical camera shoots a scene.
  • In this application, the first image and the second image are both described as images rendered from three-dimensional scenes by way of example.
  • The first image is rendered based on the first three-dimensional scene, and the second image is rendered based on the second three-dimensional scene.
  • The first image and the second image may be two consecutive frames; that is, the second image is the frame immediately preceding the first image.
  • In this way, the electronic device only needs to store the frame preceding the currently displayed image to complete the comparison of the first brightness area class of the first image with the target brightness area class of the second image; the number of images the electronic device must store is small, which preserves its storage performance.
  • In this application, when the electronic device determines that the first brightness area class of the first image is the same as the target brightness area class of the second image that has undergone flood processing, it can directly obtain the intermediate image produced when the target brightness area class of the second image was Gaussian blurred, and need not perform Gaussian blurring for the first brightness area class of the first image. Therefore, while guaranteeing the flood effect of the first image, this application reduces the number of Gaussian blur operations performed on the first image, lowering the complexity of the flood processing, and thereby reducing the load of the electronic device during operation after the flood processing function is enabled as well as its power consumption.
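  • As a rough illustration of this reuse, the following Python sketch caches the intermediate image of the previous frame keyed by its brightness area class; the class and function names are hypothetical, and the equality test stands in for the state-information comparison described below.

```python
class IntermediateImageCache:
    """Hypothetical per-frame cache of Gaussian-blurred intermediate images."""

    def __init__(self):
        self.prev_class = None          # target brightness area class of the second image
        self.prev_intermediate = None   # its Gaussian-blurred intermediate image

    def get_intermediate(self, brightness_class, image, gaussian_blur):
        # Reuse the previous frame's intermediate image when the brightness
        # area class is unchanged; otherwise blur and remember the result.
        if self.prev_class is not None and brightness_class == self.prev_class:
            return self.prev_intermediate
        self.prev_intermediate = gaussian_blur(image, brightness_class)
        self.prev_class = brightness_class
        return self.prev_intermediate
```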
  • That the first brightness area class of the first image is the same as the target brightness area class of the second image means: the state information, in the first three-dimensional scene, of the object models corresponding to the brightness areas in the first brightness area class is the same as the state information, in the second three-dimensional scene, of the object models corresponding to the brightness areas in the target brightness area class; and the camera parameters of the first three-dimensional scene are the same as the camera parameters of the second three-dimensional scene.
  • The process by which the electronic device determines whether the first brightness area class of the first image is the same as the target brightness area class of the second image may include:
  • The electronic device acquires the state information of all object models corresponding to the first brightness area class in the first three-dimensional scene, and the state information of all object models corresponding to the target brightness area class in the second three-dimensional scene. After determining that these two sets of state information are the same, the electronic device acquires the camera parameters of the first three-dimensional scene and the camera parameters of the second three-dimensional scene. After determining that the camera parameters of the first three-dimensional scene are the same as the camera parameters of the second three-dimensional scene, the electronic device determines that the first brightness area class is the same as the target brightness area class.
  • The state information of an object model may include pose information and surface material information.
  • The pose information may include the position of the object model, the posture of the object model, and the scaling factor of the object model.
  • The surface material information may include the color information and texture information of the object model's surface material.
  • The camera parameters include camera pose parameters, window parameters, and field-of-view parameters.
  • That the state information of all object models corresponding to the first brightness area class in the first 3D scene is the same as the state information of all object models corresponding to the target brightness area class in the second 3D scene means: the first brightness areas in the first brightness area class correspond one-to-one to the second brightness areas in the target brightness area class, and the state information of the object model corresponding to each first brightness area is the same as the state information of the object model corresponding to the matching second brightness area.
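  • Under these definitions, the comparison reduces to field-by-field equality. A hedged sketch follows; the field names are assumptions made for illustration, not names from this application.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class ObjectModelState:
    position: Tuple[float, float, float]       # pose information: position
    posture: Tuple[float, float, float]        # pose information: posture
    scale: float                               # pose information: scaling factor
    surface_color: Tuple[float, float, float]  # surface material: color
    surface_texture: str                       # surface material: texture id

@dataclass(frozen=True)
class CameraParams:
    pose: Tuple[float, ...]   # camera pose parameters
    window: Tuple[int, int]   # window parameters
    field_of_view: float      # field-of-view parameters

def same_class(first_states, second_states, first_cam, second_cam) -> bool:
    # One-to-one correspondence of brightness areas with identical object-model
    # state information, plus identical camera parameters.
    return (len(first_states) == len(second_states)
            and all(a == b for a, b in zip(first_states, second_states))
            and first_cam == second_cam)
```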
  • In this way, when the first brightness area class of the first image is the same as the target brightness area class of the second image that has undergone flood processing, the electronic device can directly obtain the first intermediate image produced when the target brightness area class was Gaussian blurred, and treat the first intermediate image as the image obtained by Gaussian blurring the first brightness area class of the first image. The electronic device with the image flood function enabled then does not need to perform Gaussian blurring for the first brightness area class of the first image during operation, which reduces its operating load and power consumption.
  • Optionally, the first image further includes a second brightness area class, which includes the brightness areas in the first image other than the background brightness areas.
  • In this case, the electronic device may also perform Gaussian blurring for the second brightness area class to obtain a second intermediate image. Generating the flooded image of the first image based on the first image and the first intermediate image then includes: the electronic device performs image fusion on the first image, the first intermediate image, and the second intermediate image to obtain the flooded image of the first image.
  • Optionally, the electronic device may first determine whether any brightness area class of the second image that has undergone flood processing is the same as the second brightness area class of the first image, and only after determining that the second brightness area class of the first image differs from every brightness area class of the second image does it perform Gaussian blurring for the second brightness area class to obtain the second intermediate image.
  • For the way the electronic device determines whether some brightness area class of the second image is the same as the second brightness area class of the first image, refer to the process described above for determining whether the first brightness area class of the first image is the same as the target brightness area class of the second image; the details are not repeated in this application.
  • Optionally, when the size of the brightness areas in the first brightness area class is larger than the size of the brightness areas in the second brightness area class, the electronic device performs Gaussian blurring for the second brightness area class of the first image as follows:
  • Reduced pixel sampling is performed on the first image at a first reduction magnification to obtain a first reduced image.
  • Gaussian blurring is performed on the first reduced image to obtain the second intermediate image.
  • Conversely, when the size of the brightness areas in the first brightness area class is smaller than the size of the brightness areas in the second brightness area class, the electronic device performs Gaussian blurring for the second brightness area class of the first image as follows:
  • Reduced pixel sampling is performed on the first image at a second reduction magnification to obtain a second reduced image.
  • Gaussian blurring is performed on the second reduced image to obtain the second intermediate image.
  • The first reduction magnification is greater than the second reduction magnification.
  • The first reduction magnification can refer to a single reduction magnification, or to a set of multiple reduction magnifications.
  • For example, with magnifications of the form k1 = 2^n, n can take a value of -2 or -3, so that the first reduction magnification k1 includes 1/4 and 1/8.
  • Likewise, the second reduction magnification can refer to a single reduction magnification, or to a set of multiple reduction magnifications.
  • For example, with magnifications of the form k2 = 2^m, m can take a value of -4 or -5, so that the second reduction magnification k2 includes 1/16 and 1/32.
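  • Reading the two examples above together, the magnifications follow powers of two; a one-line check under that reading:

```python
# Assuming magnifications of the form k = 2 ** n, as the quoted values suggest:
k1 = [2 ** n for n in (-2, -3)]   # first reduction magnification: 1/4 and 1/8
k2 = [2 ** m for m in (-4, -5)]   # second reduction magnification: 1/16 and 1/32
assert k1 == [0.25, 0.125] and k2 == [0.0625, 0.03125]
```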
  • Optionally, when the electronic device determines that some brightness area class of the second image that has undergone flood processing is the same as the second brightness area class of the first image, it can likewise directly obtain the intermediate image produced when that brightness area class of the second image was Gaussian blurred, without performing Gaussian blurring for the second brightness area class of the first image. Therefore, while guaranteeing the flood effect of the first image, this application further reduces the number of Gaussian blur operations performed on the first image, lowering the complexity of the flood processing and thereby the load of the electronic device during operation, and reducing its power consumption.
  • In this case too, the electronic device generates the flooded image of the first image by performing image fusion on the first image, the first intermediate image, and the second intermediate image.
  • Optionally, the process by which the electronic device acquires the first brightness area class of the first image may include: traversing the tags of all object models in the first three-dimensional scene, where a tag indicates whether an object model is a background object model, and acquiring all background object models in the first three-dimensional scene; the first brightness area class then includes the background brightness areas corresponding to all background object models in the first image.
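  • A short sketch of this traversal follows; the scene structure and the is_background tag are illustrative names, not a real engine API.

```python
def background_object_models(scene):
    # Traverse the tags of all object models in the three-dimensional scene
    # and collect those tagged as background object models.
    return [model for model in scene.object_models if model.is_background]
```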
  • Optionally, the electronic device may further perform brightness filtering on the first image.
  • That is, the electronic device eliminates pixels in the first image whose values are below the brightness threshold, so as to retain pixels whose values are greater than or equal to the brightness threshold.
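  • The brightness filtering step can be sketched as a per-pixel threshold. Whether the threshold is applied per channel or on a derived luma value is not fixed by this description, so per-channel filtering is an assumption here.

```python
import numpy as np

def brightness_filter(image: np.ndarray, threshold: int) -> np.ndarray:
    # Retain pixels with values >= threshold; eliminate (zero out) the rest.
    return np.where(image >= threshold, image, 0)
```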
  • In a second aspect, an image flood processing apparatus is provided. The apparatus includes multiple functional modules that interact to implement the methods of the above first aspect and its implementations.
  • The multiple functional modules can be implemented based on software, hardware, or a combination of software and hardware, and can be combined or divided arbitrarily based on the specific implementation.
  • In a third aspect, an image flood processing apparatus (such as a terminal) is provided.
  • The image flood processing apparatus includes a processor and a memory.
  • The processor usually includes a CPU and a GPU.
  • The memory is used to store a computer program; the CPU is used to implement any one of the image flood processing methods of the first aspect when executing the computer program stored in the memory.
  • The two types of processors can be two chips, or they can be integrated on the same chip.
  • In a fourth aspect, a storage medium is provided; the storage medium may be non-volatile.
  • A computer program is stored in the storage medium, and when the computer program is executed by a processing component, the processing component implements any one of the image flood processing methods of the first aspect.
  • In a fifth aspect, a computer program or computer program product containing computer-readable instructions is provided.
  • When the computer program or computer program product runs on a computer, the computer executes any one of the image flood processing methods of the first aspect.
  • The computer program product may include one or more program units for implementing any one of the image flood processing methods of the first aspect.
  • In a sixth aspect, a chip (such as a CPU) is provided.
  • The chip includes a logic circuit, and the logic circuit may be a programmable logic circuit.
  • When the chip runs, it implements any one of the image flood processing methods of the first aspect.
  • In a seventh aspect, a chip (such as a CPU) is provided.
  • The chip includes one or more physical cores and a storage medium; after reading the computer instructions in the storage medium, the one or more physical cores implement any one of the image flood processing methods of the first aspect.
  • With the image flood processing method provided by the present application, when the electronic device determines that the first brightness area class of the first image is the same as the target brightness area class of the second image that has undergone flood processing, it directly obtains the intermediate image produced when the target brightness area class of the second image was Gaussian blurred, without performing Gaussian blurring for the first brightness area class of the first image. Therefore, while guaranteeing the flood effect of the first image, the method reduces the number of Gaussian blur operations performed on the first image, lowering the complexity of the flood processing, and thereby reducing the load of the electronic device during operation after the flood processing function is enabled as well as its power consumption.
  • FIG. 1 is a schematic diagram of flooding in a shooting scene according to an embodiment of the present application;
  • FIG. 2 is a schematic structural diagram of an electronic device involved in an image flood processing method according to an embodiment of the present application;
  • FIG. 3 is a schematic diagram of the combination of hardware and software of an electronic device according to an embodiment of the present application;
  • FIG. 4 is a schematic diagram of a process of creating a 3D scene according to an embodiment of the present application;
  • FIG. 5 is a schematic flowchart of rendering an image from a 3D scene created by a graphics application according to an embodiment of the present application;
  • FIG. 6 is a flowchart of an image flood processing method according to an embodiment of the present application;
  • FIG. 7 is a flowchart of a method for judging whether the first brightness area class of the first image is the same as the target brightness area class of the second image according to an embodiment of the present application;
  • FIG. 8 is a flowchart of another image flood processing method according to an embodiment of the present application;
  • FIG. 9 is a schematic diagram of a 3D game interface according to an embodiment of the present application;
  • FIG. 10 is a flowchart of yet another image flood processing method according to an embodiment of the present application;
  • FIG. 11 is a schematic flowchart of an electronic device implementing an image flood processing method according to an embodiment of the present application;
  • FIG. 12 is a block diagram of an image flood processing apparatus according to an embodiment of the present application;
  • FIG. 13 is a block diagram of another image flood processing apparatus according to an embodiment of the present application;
  • FIG. 14 is a block diagram of yet another image flood processing apparatus according to an embodiment of the present application;
  • FIG. 15 is a block diagram of still another image flood processing apparatus according to an embodiment of the present application;
  • FIG. 16 is a schematic structural diagram of another image flood processing apparatus according to an embodiment of the present application.
  • Image post-processing refers to the process of optimizing a rendered image, and is mainly used to implement effects such as anti-aliasing, high dynamic range (HDR) imaging, and flooding.
  • Image post-processing techniques include flood processing, anti-aliasing processing, motion blur processing, and depth-of-field processing.
  • To a certain extent, image post-processing can be considered similar to Photoshop (PS) filter processing.
  • the object of image post-processing may be an image obtained by rendering based on a three-dimensional scene.
  • FIG. 1 is a schematic diagram of flooding in a shooting scene according to an embodiment of the present application. As shown in FIG. 1, there is a bright window in the shooting scene.
  • The indoor objects shot by the physical camera have clear outlines.
  • In contrast, the light emitted by the sun extends beyond the contour of the sun itself and produces a blur effect around that contour; this is flooding.
  • In computer graphics, the flood effect, also known as the highlight effect, is a computer graphics effect used in video games, presentation animations, and HDR rendering.
  • The flood effect produces fringes or feather-like light around high-brightness objects and blurs image details, imitating the flooding that occurs in the imaging process of a physical camera and making the image rendered by the electronic device appear more realistic.
  • By performing flood processing on an image, the image acquires the flood effect, which visually improves its contrast, enhances its expressiveness and realism, and achieves a better rendering effect.
  • The object of flood processing may be an image rendered from a 3D scene, or an image directly generated when a physical camera shoots a scene.
  • In the embodiments of this application, the object of flood processing is described as an image rendered from a 3D scene by way of example.
  • At present, the process by which an electronic device with the image flood processing function performs flood processing on an image is as follows: first, brightness filtering is performed on the original image, removing pixels whose values are below the brightness threshold to obtain a filtered image; reduced pixel sampling is then applied to the filtered image at reduction magnifications of 1/4, 1/8, 1/16, and 1/32, yielding four low-resolution images whose resolutions are 1/4, 1/8, 1/16, and 1/32 of the original; Gaussian blurring is then performed on each of the four low-resolution images; finally, image fusion is performed on the original image and the four Gaussian-blurred low-resolution images to obtain the flooded image of the original image.
  • Such an electronic device performs the above flood processing separately on each frame of the original image. Because one pass of flood processing includes multiple Gaussian blur operations for brightness areas of different sizes, and Gaussian blurring has relatively high computational complexity, an electronic device with the image flood processing function enabled currently carries a large load and high power consumption during operation.
  • In view of this, an embodiment of this application provides an image flood processing method.
  • When an electronic device with the image flood processing function performs flood processing on a first image, it first acquires the first brightness area class of the first image.
  • After determining that the first brightness area class of the first image is the same as the target brightness area class of a second image that has undergone flood processing, the electronic device obtains the first intermediate image, produced when the target brightness area class of the second image was Gaussian blurred, and generates the flooded image of the first image based on the first image and the first intermediate image.
  • Because the electronic device, upon determining that the first brightness area class of the first image is the same as the target brightness area class of the second image that has undergone flood processing, can directly obtain the intermediate image produced when the target brightness area class of the second image was Gaussian blurred, it need not perform Gaussian blurring for the first brightness area class of the first image. Therefore, while guaranteeing the flood effect of the first image, the embodiment of this application reduces the number of Gaussian blur operations performed on the first image, lowering the complexity of the flood processing, and thereby reducing the load of the electronic device during operation after the flood processing function is enabled as well as its power consumption.
  • In the embodiments of this application, the first image and the second image are both described as images rendered from 3D scenes by way of example.
  • The first image is rendered based on the first 3D scene, and the second image is rendered based on the second 3D scene.
  • The second image is an image displayed by the electronic device before the first image is displayed.
  • The first image and the second image may be two consecutive frames; that is, the second image is the frame immediately preceding the first image.
  • In this way, the electronic device only needs to store the frame preceding the currently displayed image to complete the comparison of the first brightness area class of the first image with the target brightness area class of the second image; the number of images the electronic device must store is small, which preserves its storage performance.
  • FIG. 2 is a schematic structural diagram of an electronic device 200 related to an image flooding processing method provided by an embodiment of the present application.
  • The electronic device 200 can be, but is not limited to, a laptop computer, a desktop computer, a mobile phone, a smartphone, a tablet computer, a multimedia player, an e-reader, a smart in-vehicle device, a smart home appliance, an artificial intelligence device, a wearable device, an Internet of Things (IoT) device, or a virtual reality/augmented reality/mixed reality device, etc.
  • The electronic device 200 may include a processor 210, an external memory interface 220, an internal memory 221, a universal serial bus (USB) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone jack 270D, a sensor module 280, buttons 290, a motor 291, an indicator 292, a camera 293, a display screen 294, a subscriber identification module (SIM) card interface 295, and the like.
  • The sensor module 280 may include a pressure sensor 280A, a gyroscope sensor 280B, an air pressure sensor 280C, a magnetic sensor 280D, an acceleration sensor 280E, a distance sensor 280F, a proximity light sensor 280G, a fingerprint sensor 280H, a temperature sensor 280J, a touch sensor 280K, an ambient light sensor 280L, a bone conduction sensor 280M, and the like.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 200.
  • In some embodiments, the electronic device 200 may include more or fewer components than shown, combine certain components, split certain components, or use a different arrangement of components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the interface connection relationship between the modules illustrated in the embodiment of the present application is merely a schematic description, and does not constitute a structural limitation of the electronic device 200.
  • In some embodiments, the electronic device 200 may also adopt an interface connection mode different from those in the above-mentioned embodiments (for example, a bus connection mode), or a combination of multiple interface connection modes.
  • The processor 210 may include one or more processing units, such as a central processing unit (CPU) (for example, an application processor (AP)) and a graphics processing unit (GPU); it may further include a modem processor, an image signal processor (ISP), a microcontroller unit (MCU), a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be independent devices or may be integrated in one or more processors.
  • a memory may also be provided in the processor 210 for storing instructions and data.
  • the memory in the processor 210 is a cache memory.
  • The memory can store instructions or data that the processor 210 has just used or uses cyclically. If the processor 210 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 210, and improves system efficiency.
  • the processor 210 may include one or more interfaces.
  • The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface.
  • the I2C interface is a bidirectional synchronous serial bus, which includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 210 may include multiple sets of I2C buses.
  • the processor 210 may be coupled to the touch sensor 280K, charger, flash, camera 293, etc., respectively through different I2C bus interfaces.
  • the processor 210 may couple the touch sensor 280K through an I2C interface, so that the processor 210 and the touch sensor 280K communicate through the I2C bus interface to implement the touch function of the electronic device 200.
  • the I2S interface can be used for audio communication.
  • the processor 210 may include multiple sets of I2S buses.
  • the processor 210 may be coupled with the audio module 270 through an I2S bus to implement communication between the processor 210 and the audio module 270.
  • the audio module 270 can transmit audio signals to the wireless communication module 260 through the I2S interface, so as to realize the function of answering calls through the Bluetooth headset.
  • the PCM interface can also be used for audio communication to sample, quantize and encode analog signals.
  • the audio module 270 and the wireless communication module 260 may be coupled through a PCM bus interface.
  • the audio module 270 may also transmit audio signals to the wireless communication module 260 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • The bus may be a two-way communication bus that converts the data to be transmitted between serial and parallel forms.
  • the UART interface is generally used to connect the processor 210 and the wireless communication module 260.
  • the processor 210 communicates with the Bluetooth module in the wireless communication module 260 through the UART interface to realize the Bluetooth function.
  • the audio module 270 may transmit audio signals to the wireless communication module 260 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 210 with the display screen 294, the camera 293 and other peripheral devices.
  • the MIPI interface includes a camera serial interface (camera serial interface, CSI), a display serial interface (display serial interface, DSI), and so on.
  • the processor 210 and the camera 293 communicate through a CSI interface to implement the shooting function of the electronic device 200.
  • the processor 210 and the display screen 294 communicate through a DSI interface to realize the display function of the electronic device 200.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 210 with the camera 293, the display screen 294, the wireless communication module 260, the audio module 270, the sensor module 280, and so on.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 230 is an interface that complies with the USB standard specifications, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 230 can be used to connect a charger to charge the electronic device 200, and can also be used to transfer data between the electronic device 200 and peripheral devices. It can also be used to connect earphones and play audio through earphones. This interface can also be used to connect to other electronic devices, such as AR devices.
  • the charging management module 240 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 240 may receive the charging input of the wired charger through the USB interface 230.
  • the charging management module 240 may receive the wireless charging input through the wireless charging coil of the electronic device 200. While the charging management module 240 charges the battery 242, it can also supply power to the electronic device through the power management module 241.
  • the power management module 241 is used to connect the battery 242, the charging management module 240 and the processor 210.
  • the power management module 241 receives input from the battery 242 and/or the charging management module 240, and supplies power to the processor 210, the internal memory 221, the display screen 294, the camera 293, and the wireless communication module 260.
  • the power management module 241 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 241 may also be provided in the processor 210.
  • the power management module 241 and the charging management module 240 may also be provided in the same device.
  • the wireless communication function of the electronic device 200 can be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 200 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 250 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 200.
  • the mobile communication module 250 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like.
  • the mobile communication module 250 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
  • the mobile communication module 250 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic wave radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 250 may be provided in the processor 210.
  • at least part of the functional modules of the mobile communication module 250 and at least part of the modules of the processor 210 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After the low-frequency baseband signal is processed by the baseband processor, it is passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to a speaker 270A, a receiver 270B, etc.), or displays an image or video through the display screen 294.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 210 and be provided in the same device as the mobile communication module 250 or other functional modules.
  • the wireless communication module 260 can provide applications on the electronic device 200 including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), bluetooth (BT), and global navigation satellites. System (global navigation satellite system, GNSS), frequency modulation (FM), near field communication (NFC), infrared technology (infrared, IR) and other wireless communication solutions.
  • the wireless communication module 260 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 260 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 210.
  • the wireless communication module 260 may also receive a signal to be sent from the processor 210, perform frequency modulation, amplify, and convert it into electromagnetic waves to radiate through the antenna 2.
  • the antenna 1 of the electronic device 200 is coupled with the mobile communication module 250, and the antenna 2 is coupled with the wireless communication module 260, so that the electronic device 200 can communicate with the network and other devices through wireless communication technology.
  • The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or the satellite-based augmentation systems (SBAS).
  • the electronic device 200 implements a display function through a GPU, a display screen 294, and an application processor.
  • the GPU is an image processing microprocessor, which is connected to the display screen 294 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 294 is used to display images, videos, and the like.
  • the display screen 294 includes a display panel.
  • The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • the electronic device 200 may include one or N display screens 294, and N is a positive integer greater than one.
  • the electronic device 200 may implement a shooting function through an ISP, a camera 293, a video codec, a GPU, a display screen 294, and an application processor.
  • the ISP is used to process the data fed back by the camera 293. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing and is converted into an image visible to the naked eye.
  • The ISP can also optimize the noise, brightness, and skin color of the image, and can optimize parameters such as the exposure and color temperature of the shooting scene.
  • the ISP may be provided in the camera 293.
  • the camera 293 is used to capture still images or videos.
  • the object generates an optical image through the lens and is projected to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transfers the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.
  • the electronic device 200 may include 1 or N cameras 293, and N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 200 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 200 may support one or more video codecs. In this way, the electronic device 200 can play or record videos in multiple encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • NPU is a neural-network (NN) computing processor.
  • With the NPU, applications such as intelligent cognition of the electronic device 200 can be implemented, for example, image recognition, face recognition, speech recognition, and text understanding.
  • the external memory interface 220 may be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 200.
  • the external memory card communicates with the processor 210 through the external memory interface 220 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 221 may be used to store computer executable program code, where the executable program code includes instructions.
  • the internal memory 221 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, at least one application program (such as a sound playback function, an image playback function, etc.) required by at least one function.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 200.
  • The internal memory 221 may include a high-speed random access memory, such as a double data rate synchronous dynamic random access memory (DDR SDRAM), and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
  • the processor 210 executes various functional applications and data processing of the electronic device 200 by running instructions stored in the internal memory 221 and/or instructions stored in a memory provided in the processor.
  • the electronic device 200 can implement audio functions through an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, a headphone interface 270D, and an application processor. For example, music playback, recording, etc.
  • the audio module 270 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 270 can also be used to encode and decode audio signals.
  • the audio module 270 may be provided in the processor 210, or part of the functional modules of the audio module 270 may be provided in the processor 210.
  • The speaker 270A, also called a "horn," is used to convert an audio electrical signal into a sound signal.
  • the electronic device 200 can listen to music through the speaker 270A, or listen to a hands-free call.
  • The receiver 270B, also called an "earpiece," is used to convert an audio electrical signal into a sound signal.
  • the electronic device 200 answers a call or voice message, it can receive the voice by bringing the receiver 270B close to the human ear.
  • The microphone 270C, also called a "mike" or "mic," is used to convert a sound signal into an electrical signal.
  • The user can input a sound signal into the microphone 270C by speaking close to it.
  • the electronic device 200 may be provided with at least one microphone 270C.
  • the electronic device 200 may be provided with two microphones 270C, which can implement noise reduction functions in addition to collecting sound signals.
  • the electronic device 200 may also be provided with three, four or more microphones 270C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • the earphone interface 270D is used to connect wired earphones.
  • The earphone interface 270D may be a USB interface 230, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 280A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 280A may be provided on the display screen 294.
  • the capacitive pressure sensor may include at least two parallel plates with conductive materials. When a force is applied to the pressure sensor 280A, the capacitance between the electrodes changes.
  • the electronic device 200 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 294, the electronic device 200 detects the intensity of the touch operation according to the pressure sensor 280A.
  • the electronic device 200 may also calculate the touched position based on the detection signal of the pressure sensor 280A.
  • In some embodiments, touch operations that act on the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
  • The gyroscope sensor 280B may be used to determine the motion posture of the electronic device 200.
  • In some embodiments, the angular velocities of the electronic device 200 around three axes (i.e., the x, y, and z axes) may be determined by the gyroscope sensor 280B.
  • the gyro sensor 280B can be used for image stabilization.
  • the gyroscope sensor 280B detects the shake angle of the electronic device 200, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shake of the electronic device 200 through reverse movement to achieve anti-shake.
  • the gyroscope sensor 280B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 280C is used to measure air pressure. In some embodiments, the electronic device 200 calculates the altitude based on the air pressure value measured by the air pressure sensor 280C to assist positioning and navigation.
  • the magnetic sensor 280D includes a Hall sensor.
  • In some embodiments, the electronic device 200 can use the magnetic sensor 280D to detect the opening and closing of a flip holster, or detect the opening and closing of a flip cover according to the magnetic sensor 280D.
  • Features such as automatic unlocking of the flip cover can then be set according to the detected opening or closing state.
  • the acceleration sensor 280E can detect the magnitude of the acceleration of the electronic device 200 in various directions (generally three axes). When the electronic device 200 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and used in applications such as horizontal and vertical screen switching, pedometers and so on.
  • The distance sensor 280F is used to measure distance. The electronic device 200 can measure distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 200 may use the distance sensor 280F to measure distance to achieve fast focusing.
  • the proximity light sensor 280G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 200 emits infrared light to the outside through the light emitting diode.
  • the electronic device 200 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 200. When insufficient reflected light is detected, the electronic device 200 may determine that there is no object near the electronic device 200.
  • the electronic device 200 can use the proximity light sensor 280G to detect that the user holds the electronic device 200 close to the ear to talk, so as to automatically turn off the screen to save power.
  • The proximity light sensor 280G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 280L is used to sense the brightness of the ambient light.
  • the electronic device 200 can adaptively adjust the brightness of the display screen 294 according to the perceived brightness of the ambient light.
  • the ambient light sensor 280L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 280L can also cooperate with the proximity light sensor 280G to detect whether the electronic device 200 is in the pocket to prevent accidental touch.
  • the fingerprint sensor 280H is used to collect fingerprints.
  • the electronic device 200 can use the collected fingerprint characteristics to realize fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • the temperature sensor 280J is used to detect temperature.
  • The electronic device 200 uses the temperature detected by the temperature sensor 280J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 280J exceeds a threshold, the electronic device 200 reduces the performance of a processor located near the temperature sensor 280J, so as to reduce power consumption and implement thermal protection.
  • In some other embodiments, when the temperature is below another threshold, the electronic device 200 heats the battery 242 to avoid abnormal shutdown caused by low temperature; when the temperature is below still another threshold, the electronic device 200 boosts the output voltage of the battery 242 to avoid abnormal shutdown caused by low temperature.
  • the touch sensor 280K is also called “touch device”.
  • the touch sensor 280K may be disposed on the display screen 294; the touch sensor 280K and the display screen 294 together form a touch screen, also called a “touchscreen”.
  • the touch sensor 280K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 294.
  • the touch sensor 280K may also be disposed on the surface of the electronic device 200, which is different from the position of the display screen 294.
  • the bone conduction sensor 280M can acquire vibration signals.
  • the bone conduction sensor 280M can acquire the vibration signal of the vibrating bone mass of the human vocal part.
  • the bone conduction sensor 280M can also contact the human pulse and receive the blood pressure pulse signal.
  • the bone conduction sensor 280M may also be provided in the earphone, combined with the bone conduction earphone.
  • the audio module 270 can parse out the voice signal based on the vibration signal of the vibrating bone block of the voice obtained by the bone conduction sensor 280M to realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beating signal obtained by the bone conduction sensor 280M, and realize the heart rate detection function.
  • the electronic device 200 may also adopt different interface connection modes in the above embodiments. For example, some or all of the above sensors are connected to the MCU, and then connected to the AP through the MCU.
  • the button 290 includes a power-on button, a volume button, and so on.
  • the button 290 may be a mechanical button. It can also be a touch button.
  • the electronic device 200 may receive key input, and generate key signal input related to user settings and function control of the electronic device 200.
  • the motor 291 can generate vibration prompts.
  • the motor 291 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
  • touch operations for different applications can correspond to different vibration feedback effects.
  • touch operations acting on different areas of the display screen 294 can also correspond to different vibration feedback effects of the motor 291.
  • different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 292 can be an indicator light, which can be used to indicate the charging status, power change, and can also be used to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 295 is used to connect to the SIM card.
  • the SIM card can be inserted into the SIM card interface 295 or pulled out from the SIM card interface 295 to achieve contact and separation with the electronic device 200.
  • the electronic device 200 may support 2 or N SIM card interfaces, and N is a positive integer greater than 2.
  • the SIM card interface 295 may support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • the same SIM card interface 295 can insert multiple cards at the same time. The types of the multiple cards can be the same or different.
  • the SIM card interface 295 can also be compatible with different types of SIM cards.
  • the SIM card interface 295 may also be compatible with external memory cards.
  • the electronic device 200 interacts with the network through the SIM card to realize functions such as call and data communication.
  • the electronic device 200 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 200 and cannot be separated from the electronic device 200.
  • the software system of the electronic device 200 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present application takes an Android system with a layered architecture as an example to illustrate the software structure of the electronic device 200.
  • the operating system of the electronic device 200 may also be other systems such as the IOS system, which is not limited in the embodiment of the present application.
  • FIG. 3 is a schematic diagram of an electronic device provided by an embodiment of the present application in combination with hardware and software.
  • the electronic device 200 includes an Android system, software, and hardware.
  • the hardware includes a processor 304, such as a GPU and a CPU.
  • the software includes one or more graphics application programs.
  • the software includes a graphics application program 301A and a graphics application program 301B (collectively referred to as a graphics application program 301).
  • the software also includes a rendering engine module 302 and a graphics application program interface (application programming interface, API) layer 303.
  • the graphics application 301 in the software may include a game application, a 3D drawing application, and the like.
  • the number of graphics application programs in FIG. 3 is only used as an exemplary description, and not as a limitation on the electronic device provided in the embodiment of the present application.
  • the rendering engine module 302 includes a scene management module 3021, a renderer module 3022, and a post-processing floodlight effect module 3023.
  • the scene management module 3021 includes a brightness area change judgment module 30211, and the scene management module 3021 has a rendering engine interface;
  • the post-processing floodlight effect module 3023 includes an optimized large-dimension highlight area flooding algorithm module 30231.
  • the graphics API layer 303 includes an open graphics library for embedded systems (OpenGL for embedded systems, OpenGL ES) interface layer 3031 and a Vulkan interface (a cross-platform drawing application program interface) layer 3032.
  • OpenGL ES is an open graphics library designed for embedded devices such as mobile phones, personal digital assistants (PDAs), and game consoles.
  • the graphics application creates a 3D scene by loading one or more object models.
  • the graphics application 301 loads one or more object models, that is, obtains related data of the one or more object models, and the related data of each object model includes state information of the object model.
  • the graphics application 301 creates a 3D scene (also called a rendering scene) according to the state information of the one or more object models.
  • the state information of the object model may include pose information and surface material information.
  • the pose information of the object model includes the position of the object model, the posture of the object model, and the scaling factor of the object model.
  • the scaling factor of the object model is the ratio of the original length of the object model in each axis to the display length in the corresponding axis.
  • the surface material information of the object model includes: color information of the surface material of the object model and texture information of the surface material.
  • one or more object models in the 3D scene are loaded into the rendering engine module 302 through the rendering engine interface.
  • the renderer module renders an image according to the 3D scene created by the graphics application program.
  • the renderer module 3022 obtains one or more object models to be rendered in the 3D scene according to the camera parameters, and renders the one or more object models to be rendered to obtain an image.
  • the camera parameters include position parameters, attitude parameters, window parameters, and field of view (FOV) parameters of the camera.
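  • As a minimal illustrative sketch (not the module's actual API; all field names below are assumptions), the state information and camera parameters described above could be represented as follows:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ObjectModelState:
    """State information of an object model (illustrative field names)."""
    position: Tuple[float, float, float]        # pose: position in the scene
    posture: Tuple[float, float, float]         # pose: orientation angles
    scale: Tuple[float, float, float]           # per-axis scaling factor
    material_color: Tuple[float, float, float]  # surface material color
    texture_id: str                             # surface material texture

@dataclass
class CameraParams:
    """Camera parameters of a 3D scene (illustrative field names)."""
    position: Tuple[float, float, float]
    attitude: Tuple[float, float, float]
    window: Tuple[int, int]   # render window width and height
    fov_deg: float            # field of view in degrees
```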
  • the image rendered by the renderer module 3022 is an image without a flooding effect.
  • FIG. 2 and FIG. 3 are only examples. In other embodiments, other types of software or hardware may be used.
  • FIG. 6 is a flowchart of an image flood processing method provided by an embodiment of the present application, which can be applied to the electronic device shown in FIG. 2 or FIG. 3 above. As shown in FIG. 6, the method includes:
  • Step 601 The electronic device performs brightness filtering processing on the first image.
  • the electronic device performs brightness filtering processing on the first image, that is, removes pixels in the first image whose pixel values are less than the brightness threshold, so as to retain pixels in the first image whose pixel values are greater than or equal to the brightness threshold.
  • the brightness threshold may be a fixed value, that is, the brightness threshold of each image is the same.
  • the brightness threshold may be determined based on the pixel value of the pixel in the first image, and the brightness threshold of different images may be different.
  • the brightness threshold may be determined by the electronic device based on the pixel values of the pixels in the first image, or may be determined by another device based on the pixel values of the pixels in the first image and then sent to the electronic device, which is not limited in the embodiment of the present application.
  • the process of determining the brightness threshold based on the pixel values of the pixels in the first image includes: dividing the pixels in the first image into multiple pixel value intervals according to the size of the pixel value, where any two of the multiple pixel value intervals have no intersection and the union of the multiple pixel value intervals is the complete set of pixel values.
  • determine a target pixel value interval from the multiple pixel value intervals; where the maximum value of the target pixel value interval is a first value and the minimum value of the target pixel value interval is a second value, the following is satisfied: the number of pixels in the first image whose pixel values are less than the first value is greater than or equal to a preset number threshold, and the number of pixels in the first image whose pixel values are less than the second value is less than the preset number threshold.
  • the minimum value (that is, the second value), the maximum value (that is, the first value), the average value, or the middle value of the target pixel value interval is used as the brightness threshold of the first image.
  • the brightness threshold of the first image may also be other pixel values in the target pixel value range.
  • the preset number threshold may be 90% of the total number of pixels in the first image.
  • for example, assume that the brightness threshold is the average value of the target pixel value interval, the total number of pixels of the first image is 1920×1080, the multiple pixel value intervals are [0, 7], [8, 15], [16, 23], [24, 31], ..., [240, 247], [248, 255], and the numbers of pixels of the first image falling into these pixel value intervals are, in order: 1280, 3840, 1920, 4800, ..., 8640, 10368, 2880.
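  • The following is a minimal Python sketch of this brightness filtering step, assuming an 8-bit single-channel brightness image, an interval width of 8, the 90% cumulative-count rule above, and the interval average as the threshold (function names and defaults are illustrative):

```python
import numpy as np

def brightness_threshold(img: np.ndarray, keep_ratio: float = 0.9,
                         bin_width: int = 8) -> float:
    """Determine the brightness threshold from the pixel-value histogram.

    The target interval is the first interval at which the cumulative pixel
    count reaches keep_ratio of all pixels; the threshold here is the
    average value of that interval.
    """
    bins = np.arange(0, 256 + bin_width, bin_width)   # [0,8), [8,16), ...
    hist, _ = np.histogram(img, bins=bins)
    cumulative = np.cumsum(hist)
    target = int(np.searchsorted(cumulative, keep_ratio * img.size))
    lo = target * bin_width                           # the second value
    hi = lo + bin_width - 1                           # the first value
    return (lo + hi) / 2

def brightness_filter(img: np.ndarray) -> np.ndarray:
    """Keep pixels at or above the threshold; remove (zero out) the rest."""
    return np.where(img >= brightness_threshold(img), img, 0)
```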
  • Step 602 The electronic device acquires a first brightness area category of the first image, where the first brightness area category includes one or more brightness areas of the first image.
  • the first image may include one or more brightness area classes, and each brightness area class may include one or more different brightness areas in the first image.
  • the brightness area can also be called a highlight area, which refers to an area where the brightness value is greater than or equal to the brightness threshold.
  • the above-mentioned one or more brightness regions may be highlight regions on one or more object models in the first 3D scene.
  • the first brightness area type may be any brightness area type in the first image.
  • Step 603 When the first brightness area class of the first image is the same as the target brightness area class of the second image subjected to flooding processing, the electronic device obtains the first intermediate image obtained after Gaussian blur processing is performed on the target brightness area class of the second image.
  • the first brightness area class of the first image being the same as the target brightness area class of the second image includes: the state information of the object models corresponding to the brightness areas in the first brightness area class in the first 3D scene is the same as the state information of the object models corresponding to the brightness areas in the target brightness area class in the second 3D scene, and the camera parameters of the first 3D scene are the same as the camera parameters of the second 3D scene.
  • FIG. 7 is a flowchart of a method for judging whether the first brightness area type of the first image is the same as the target brightness area type of the second image according to an embodiment of the present application. As shown in Figure 7, the process includes:
  • Step 6031 The electronic device obtains the state information, in the first 3D scene, of all object models corresponding to the first brightness area class of the first image, and the state information, in the second 3D scene, of all object models corresponding to the target brightness area class of the second image.
  • the state information of the object model may include pose information and surface material information.
  • the pose information may include the position of the object model, the posture of the object model, and the scaling factor of the object model.
  • the surface material information may include: color information of the surface material of the object model and texture information of the surface material.
  • for example, the first brightness area class of the first image includes a brightness area a, a brightness area b, and a brightness area c, where the brightness area a is the highlight area of the wall A, the brightness area b is the highlight area of the mountain B, and the brightness area c is the highlight area of the sky C.
  • the electronic device acquires the state information of the wall A, the mountain B, and the sky C in the first 3D scene and the second 3D scene.
  • Step 6032 The electronic device determines whether the state information of all object models corresponding to the first brightness area class in the first 3D scene is the same as the state information of all object models corresponding to the target brightness area class in the second 3D scene. When the state information of all object models corresponding to the first brightness area class is the same as the state information of all object models corresponding to the target brightness area class, step 6033 is executed; when the state information is not completely the same, step 6036 is executed.
  • all object models corresponding to the brightness area category include the object model corresponding to each brightness area in the brightness area category.
  • for ease of description, a brightness area in the first brightness area class is referred to as a first brightness area, and a brightness area in the target brightness area class is referred to as a second brightness area.
  • the state information of all object models corresponding to the first brightness area class in the first 3D scene being the same as the state information of all object models corresponding to the target brightness area class in the second 3D scene means that, for each first brightness area in the first brightness area class, the state information of the object model corresponding to the first brightness area in the first 3D scene is the same as the state information of the object model corresponding to the matching second brightness area in the target brightness area class in the second 3D scene.
  • continuing the example in step 6031: when the state information of the wall A in the first 3D scene is the same as the state information of the wall A in the second 3D scene, the state information of the mountain B in the first 3D scene is the same as the state information of the mountain B in the second 3D scene, and the state information of the sky C in the first 3D scene is the same as the state information of the sky C in the second 3D scene, the electronic device determines that the state information of all object models corresponding to the first brightness area class in the first 3D scene is the same as the state information of all object models corresponding to the target brightness area class in the second 3D scene.
  • when the state information of the wall A in the first 3D scene is different from the state information of the wall A in the second 3D scene, even if the state information of the mountain B in the first 3D scene is the same as the state information of the mountain B in the second 3D scene, the electronic device determines that the state information of all object models corresponding to the first brightness area class in the first 3D scene is not completely the same as the state information of all object models corresponding to the target brightness area class in the second 3D scene.
  • Step 6033 The electronic device obtains the camera parameters of the first 3D scene and the camera parameters of the second 3D scene.
  • the camera parameters include: pose parameters, window parameters, and FOV parameters of the camera.
  • Step 6034 The electronic device determines whether the camera parameters of the first 3D scene are the same as the camera parameters of the second 3D scene. When the camera parameters of the first 3D scene are the same as the camera parameters of the second 3D scene, step 6035 is executed; when the camera parameters of the first 3D scene are different from the camera parameters of the second 3D scene, step 6036 is executed.
  • Step 6035 The electronic device determines that the first brightness area class is the same as the target brightness area class.
  • in this way, when the first brightness area class of the first image is the same as the target brightness area class of the second image subjected to flooding processing, the electronic device can directly obtain the first intermediate image obtained after Gaussian blur processing is performed on the target brightness area class, and use the first intermediate image as the image obtained after Gaussian blur processing is performed on the first brightness area class of the first image.
  • therefore, the electronic device with the image flooding function enabled does not need to perform Gaussian blur processing for the first brightness area class of the first image during operation, which reduces the operating load of the electronic device and reduces the power consumption of the electronic device.
  • Step 6036 The electronic device determines that the first brightness area type is different from the target brightness area type of the second image.
  • the electronic device determines that the first brightness area type changes relative to the target brightness area type, that is, the electronic device determines that the first brightness area type is different from the target brightness area type of the second image.
  • when the first brightness area class is different from the target brightness area class, the electronic device performs Gaussian blur processing on the first brightness area class of the first image to obtain the corresponding intermediate image.
  • the order of execution of the above step 6032 and step 6034 can be changed, that is, the electronic device can perform step 6034 first and then perform step 6032; alternatively, step 6032 and step 6034 can be performed at the same time, which is not limited in the embodiment of the present application.
  • optionally, the first brightness area class of the first image being the same as the target brightness area class of the second image further includes: the number of brightness areas in the first brightness area class is the same as the number of brightness areas in the target brightness area class, and the type of the object model corresponding to the brightness area in the first brightness area class in the first 3D scene is the same as the type of the object model corresponding to the brightness area in the target brightness area class in the second 3D scene.
  • the types of object models may include natural scenery (such as mountains and sky, etc.), buildings, plants, and animals.
  • correspondingly, the above process of determining whether the first brightness area class of the first image is the same as the target brightness area class of the flood-processed second image may further include: the electronic device determines whether the number of brightness areas in the first brightness area class is the same as the number of brightness areas in the target brightness area class, and whether the type of the object model corresponding to the brightness area in the first brightness area class in the first 3D scene is the same as the type of the object model corresponding to the brightness area in the target brightness area class in the second 3D scene.
  • optionally, the electronic device may first determine whether the number of brightness areas in the first brightness area class is the same as the number of brightness areas in the target brightness area class, and whether the object model types corresponding to the brightness areas in the two classes are the same; after determining that the numbers of brightness areas are the same and the object model types are the same, the electronic device performs the above steps 6031 to 6036, which improves judgment efficiency.
  • in another implementation, the electronic device may obtain the target brightness area class of the second image after obtaining the first brightness area class of the first image, and compare the first brightness area class with the target brightness area class based on image processing technology to determine whether the two are the same. This method can also be used to determine whether the first brightness area class is the same as the target brightness area class, which is not limited in the embodiment of the present application.
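  • A hedged sketch of the comparison in steps 6031 to 6036, reusing the illustrative ObjectModelState and CameraParams types from the earlier sketch (the mapping structure is an assumption, not the patent's data layout):

```python
def brightness_class_unchanged(first_models: dict, second_models: dict,
                               cam1: CameraParams, cam2: CameraParams) -> bool:
    """Decide whether the first brightness area class can reuse the second
    image's blurred intermediate image (the checks of steps 6031 to 6036).

    first_models / second_models map a brightness area identifier to the
    ObjectModelState of its corresponding object model in each 3D scene.
    """
    # Cheap pre-checks: same number of brightness areas and same area ids
    # (which implies the same corresponding object models and types).
    if first_models.keys() != second_models.keys():
        return False
    # Step 6032: every corresponding object-model state must be identical.
    if any(first_models[k] != second_models[k] for k in first_models):
        return False
    # Step 6034: the camera parameters must also be identical.
    return cam1 == cam2
```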
  • Step 604 The electronic device generates a flooded image of the first image based on the first image and the first intermediate image.
  • the first image includes one or more brightness area classes.
  • in the first case, when the first image includes one brightness area class, that is, the first image only includes the first brightness area class, the implementation process of step 604 includes: the electronic device performs image fusion processing on the first image and the first intermediate image to obtain the flooded image of the first image.
  • that is, when the first image only includes the first brightness area class, and the first brightness area class is the same as the target brightness area class of the flood-processed second image, the electronic device can obtain the first intermediate image obtained after Gaussian blur processing is performed on the target brightness area class, and generate the flooded image of the first image based on the first image and the first intermediate image.
  • in the second case, when the first image includes multiple brightness area classes, the electronic device can perform the above steps 602 to 604 for each brightness area class in the first image to obtain an intermediate image corresponding to each brightness area class. In this case, the implementation process of step 604 includes: the electronic device performs image fusion processing on the first image and the intermediate images corresponding to the respective brightness area classes of the first image to obtain the flooded image of the first image.
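  • A minimal sketch of the image fusion in step 604, assuming additive blending of upsampled intermediate images with the original image (the blend weight and bilinear upsampling are illustrative assumptions; the patent does not specify the fusion formula):

```python
import numpy as np
from scipy.ndimage import zoom

def fuse_flood(first_image: np.ndarray, intermediates, weight: float = 0.5):
    """Fuse the original image with its blurred intermediate images.

    Each intermediate image is upsampled back to the original resolution
    and blended additively, a common way of compositing bloom layers.
    """
    out = first_image.astype(np.float32)
    h, w = first_image.shape[:2]
    for inter in intermediates:
        factors = (h / inter.shape[0], w / inter.shape[1])
        factors += (1.0,) * (inter.ndim - 2)        # leave channels unscaled
        up = zoom(inter.astype(np.float32), factors, order=1)
        out += weight * up
    return np.clip(out, 0, 255).astype(first_image.dtype)
```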
  • the embodiment of the present application provides two ways of dividing brightness area classes. Taking the first image including two brightness area classes (the first brightness area class and the second brightness area class) as an example, the process of performing flood processing on the first image by the electronic device under the different division methods is described below.
  • the first brightness area category and the second brightness area category both include one or more brightness areas of the first image, and the brightness area in the first brightness area category is different from the brightness area in the second brightness area category.
  • the first image may also include three, four, or even more brightness area categories, which is not limited in the embodiment of the present application.
  • in the first division method, the brightness area class is divided based on the size of the brightness area.
  • for example, the size of the brightness area in the first brightness area class is larger than the size of the brightness area in the second brightness area class; alternatively, the size of the brightness area in the first brightness area class is smaller than the size of the brightness area in the second brightness area class. In the embodiment of the present application, the case where the size of the brightness area in the first brightness area class is larger than the size of the brightness area in the second brightness area class is taken as an example for description.
  • the electronic device may store a preset size threshold.
  • when the ratio of the size of a brightness area to the image size is greater than the size threshold, the electronic device classifies the brightness area into one brightness area class; when the ratio of the size of the brightness area to the image size is less than or equal to the size threshold, the electronic device classifies the brightness area into the other brightness area class.
  • the value range of the size threshold may be 3% to 5% of the image size, for example, the size threshold may be 3%, 4%, or 5% of the image size.
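  • A minimal sketch of this size-based division, assuming area sizes measured in pixels and a 4% size threshold (within the 3% to 5% range above; all names are illustrative):

```python
def split_by_size(area_sizes: dict, image_size_px: int,
                  size_threshold: float = 0.04) -> tuple:
    """Split brightness areas into a large-size class and a small-size class.

    area_sizes maps a brightness area identifier to its size in pixels;
    the threshold is the ratio of area size to image size (4% here).
    """
    first_class, second_class = {}, {}
    for area_id, size_px in area_sizes.items():
        if size_px / image_size_px > size_threshold:
            first_class[area_id] = size_px    # large-dimension brightness area
        else:
            second_class[area_id] = size_px   # detail-portion brightness area
    return first_class, second_class
```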
  • FIG. 8 is a flowchart of another image flood processing method provided by an embodiment of the present application. As shown in Figure 8, the method includes:
  • Step 801 The electronic device performs brightness filtering processing on the first image.
  • Step 802 The electronic device acquires the first brightness area category of the first image.
  • Step 803 The electronic device judges whether the first brightness area class of the first image is the same as the target brightness area class of the second image subjected to flooding processing; when the first brightness area class is the same as the target brightness area class, execute step 804; when the first brightness area class is different from the target brightness area class, go to step 805.
  • the electronic device determines the target brightness area class of the second image as follows: the second image includes two brightness area classes, and the size of the brightness area in the target brightness area class is larger than the size of the brightness area in the other brightness area class of the second image.
  • Step 804 The electronic device obtains a first intermediate image obtained after Gaussian blur processing is performed on the target brightness area class of the second image, and uses the first intermediate image as an intermediate image corresponding to the first brightness area class.
  • for the explanation of the foregoing step 803 and step 804, reference may be made to the foregoing step 603, which is not repeated in the embodiment of the present application.
  • after performing step 804, the electronic device performs step 806.
  • Step 805 The electronic device performs Gaussian blur processing on the first brightness area class of the first image to obtain an intermediate image corresponding to the first brightness area class.
  • since the size of the brightness area in the first brightness area class of the first image is larger, the Gaussian blur processing for the first brightness area class can be considered as performing Gaussian blurring for the large-size brightness areas in the first image.
  • the electronic device can perform reduction processing with a smaller reduction magnification on the resolution of the first image, so as to eliminate the pixels of the smaller brightness areas (that is, the brightness areas of the detail portions) in the first image and retain only the pixels of the larger brightness areas (that is, the large-dimension brightness areas).
  • for example, the process of performing Gaussian blur processing for the first brightness area class of the first image may include: the electronic device first uses the second reduction magnification to perform downsampling processing on the first image to obtain a second reduced image; then, Gaussian blur processing is performed on the second reduced image to obtain the intermediate image corresponding to the first brightness area class.
  • the second reduction magnification may refer to a single reduction magnification, or may refer to a set of multiple reduction magnifications. For example, where the second reduction magnification is expressed as k2 = 2^m, m can take a value of -4 or -5, so that the second reduction magnification k2 includes 1/16 and 1/32.
  • when the second reduction magnification includes two different reduction magnifications, the above process of performing Gaussian blur processing for the first brightness area class of the first image includes: the electronic device uses the two different reduction magnifications to perform reduced pixel sampling processing on the first image respectively, to obtain two reduced images for the first brightness area class.
  • the electronic device performs Gaussian blurring on the two reduced images to obtain two intermediate images.
  • the two reduction magnifications included in the second reduction magnification may be 1/16 and 1/32, respectively.
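  • A minimal sketch of this downsample-then-blur step, using bilinear reduction and a Gaussian filter; the magnifications are passed in so the same routine serves the 1/16 and 1/32 case here and the 1/4 and 1/8 case below (the sigma value and resampling order are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def blur_at_magnifications(img: np.ndarray, magnifications, sigma: float = 2.0):
    """Downsample the image at each reduction magnification, then Gaussian-blur.

    Returns one intermediate image per magnification, e.g. (1/16, 1/32)
    for the large-size brightness area class described above.
    """
    intermediates = []
    for k in magnifications:
        factors = (k, k) + (1.0,) * (img.ndim - 2)       # keep channels unscaled
        reduced = zoom(img.astype(np.float32), factors, order=1)
        blur_sigma = (sigma, sigma) + (0.0,) * (reduced.ndim - 2)
        intermediates.append(gaussian_filter(reduced, sigma=blur_sigma))
    return intermediates
```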
  • Step 806 The electronic device obtains the second brightness area category of the first image.
  • Step 807 The electronic device determines whether there is a certain brightness area type in the second image subjected to flooding processing which is the same as the second brightness area type of the first image; when a certain brightness area type exists in the second image subjected to flooding processing If it is the same as the second brightness area type of the first image, go to step 808; when any brightness area type in the second image after flooding processing is different from the second brightness area type of the first image, go to step 809 .
  • after obtaining the second brightness area class of the first image, the electronic device obtains the brightness area class of the smaller size in the second image, and determines whether the second brightness area class is the same as that smaller-size brightness area class.
  • Step 808 The electronic device obtains an intermediate image obtained after Gaussian blur processing is performed on the brightness area class that is the same as the second brightness area class in the second image, and uses the intermediate image as an intermediate image corresponding to the second brightness area class.
  • for the explanation of step 807 and step 808, refer to the foregoing step 603, which is not repeated in the embodiment of the present application.
  • after performing step 808, the electronic device performs step 810.
  • Step 809 The electronic device performs Gaussian blur processing on the second brightness area class of the first image to obtain an intermediate image corresponding to the second brightness area class.
  • the intermediate image obtained by the electronic device performing Gaussian blur processing for the second brightness area class of the first image is referred to as the second intermediate image. Since the size of the brightness area in the second brightness area class of the first image is smaller than the size of the brightness area in the first brightness area class, the Gaussian blur processing for the second brightness area class can be considered as performing Gaussian blurring for the small-size brightness areas in the first image.
  • the electronic device can perform reduction processing with a larger reduction magnification on the resolution of the first image, which retains the pixels of the smaller brightness areas (that is, the brightness areas of the detail portions) in the first image, thereby realizing flood processing of the small-size brightness areas of the first image.
  • since the electronic device can retain the smaller brightness areas in the first image after performing reduction processing with a larger reduction magnification on the resolution of the first image, a small-size convolution kernel can be used to perform Gaussian blur processing on the reduced first image. Because the larger the size of the convolution kernel, the higher the complexity of the Gaussian blur processing, the embodiment of the present application uses a small-size convolution kernel to perform lower-complexity Gaussian blur processing, which keeps the complexity of the image flooding process low.
  • for example, the process of performing Gaussian blur processing for the second brightness area class of the first image may include: the electronic device first uses the first reduction magnification to perform downsampling processing on the first image to obtain a first reduced image; then, Gaussian blur processing is performed on the first reduced image to obtain the second intermediate image.
  • the first reduction magnification may refer to a single reduction magnification, or may refer to a set of multiple reduction magnifications. For example, where the first reduction magnification is expressed as k1 = 2^n, n can take a value of -2 or -3, so that the first reduction magnification k1 includes 1/4 and 1/8.
  • when the first reduction magnification includes two different reduction magnifications, the above process of performing Gaussian blur processing for the second brightness area class of the first image includes: the electronic device uses the two different reduction magnifications to perform reduced pixel sampling processing on the first image respectively, to obtain two reduced images for the second brightness area class.
  • the electronic device performs Gaussian blurring on the two reduced images to obtain two intermediate images.
  • the two reduction magnifications included in the first reduction magnification may be 1/4 and 1/8, respectively.
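  • Under the same assumptions, the routine sketched earlier can be reused for both classes, where filtered_image stands for the brightness-filtered first image from the earlier sketch:

```python
# Small-size (detail) brightness areas: milder reduction keeps detail pixels.
second_class_intermediates = blur_at_magnifications(filtered_image, (1/4, 1/8))
# Large-size brightness areas: stronger reduction removes detail pixels first.
first_class_intermediates = blur_at_magnifications(filtered_image, (1/16, 1/32))
```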
  • Step 810 The electronic device performs image fusion processing on the first image, the intermediate image corresponding to the first brightness area category, and the intermediate image corresponding to the second brightness area category to obtain a flooded image of the first image.
  • it should be noted that the electronic device may also not perform the above steps 807 to 808; that is, after performing the above step 806, the electronic device may directly perform Gaussian blur processing on the second brightness area class of the first image to obtain the second intermediate image, thereby ensuring the flooding effect of the first image.
  • in the second division method, the brightness area class is divided based on the category of the object model corresponding to the brightness area of the image in the 3D scene.
  • the foreground object model usually has the characteristics of being movable, small in size, and high in the frequency of change in continuous multi-frame images.
  • foreground object models include object models such as people or animals. Since the volume of the foreground object model is usually small, it can be considered that the size of the highlight area on the foreground object model is usually small, that is, the highlight area on the foreground object model is mostly small details.
  • Background object models usually have the characteristics of immovability, large volume, and low change frequency in continuous multi-frame images.
  • background object models include object models such as the sky, mountains, and buildings. Since the volume of the background object model is usually large, it can be considered that the size of the highlight area on the background object model is usually large, that is, the highlight areas on the background object model are mostly large-dimension portions with larger sizes.
  • FIG. 9 is a schematic diagram of a 3D game interface (ie, a frame of image) provided by an embodiment of the present application.
  • the 3D scene corresponding to the 3D game interface includes at least a character R belonging to a foreground object model, and a house wall P and an illumination lamp Q belonging to a background object model.
  • the light emitted by the illuminating lamp Q illuminates the person R and the wall P of the house.
  • the light irradiated on the person R converges into a fluorescent circle, and a small-sized highlight area M1 (that is, the shadow area on the person R in the figure) is formed on the person R.
  • the light irradiated on the house wall P forms a larger brightness area M2 on the house wall P (the shadow area on the house wall P in the figure).
  • since the background object model has the characteristic of a low change frequency across consecutive frames, in images rendered from the 3D scene there is a high probability that the brightness areas of the background object model in the current frame image are the same as the brightness areas in the previous, flood-processed frame image. Therefore, in the embodiment of the present application, before flood processing is performed on the current frame image, it can be determined whether the background object models in the 3D scene corresponding to the current frame image have changed relative to the background object models in the 3D scene corresponding to the previous frame image.
  • when the background object models have not changed, the intermediate image obtained for the previous frame image can be reused, which reduces the number of Gaussian blur processing operations on the image and thereby reduces the complexity of the image flooding process.
  • therefore, the electronic device may take the set of background brightness areas corresponding to all background object models of the image in the 3D scene as the first brightness area class; that is, the first brightness area class includes the background brightness areas corresponding to all background object models in the image. The first brightness area class may also be referred to as the background class.
  • the set of brightness areas corresponding to the object models other than the background object models of the image in the 3D scene is taken as the second brightness area class; that is, the second brightness area class includes the brightness areas in the image other than the background-class brightness areas.
  • Object models other than the background object model include foreground object models.
  • the second brightness area type may also be referred to as the foreground type, and the brightness area in the second brightness area type may be referred to as the foreground type brightness area.
  • FIG. 10 is a flowchart of yet another image flood processing method provided by an embodiment of the present application. As shown in Figure 10, the method includes:
  • Step 1001 The electronic device performs brightness filtering processing on the first image.
  • Step 1002 The electronic device performs Gaussian blur processing on the foreground class of the first image to obtain an intermediate image corresponding to the foreground class of the first image.
  • the intermediate image obtained by the electronic device performing Gaussian blur processing for the foreground class of the first image is referred to as the second intermediate image.
  • the size of the brightness area in the foreground class is usually smaller than the size of the brightness area in the background class.
  • for the process in which the electronic device performs Gaussian blur processing for the foreground class of the first image, refer to the process of performing Gaussian blur processing for the small-size brightness areas in step 809, which is not repeated in the embodiment of the application.
  • Step 1003 The electronic device obtains a background object model in the first 3D scene.
  • the first image is rendered based on the first 3D scene.
  • whether the background class of an image is the same as the background class of a flood-processed image can be judged by judging whether the background object models of the 3D scene corresponding to the image are the same as the background object models of the 3D scene corresponding to the flood-processed image. Therefore, acquiring the background class of the image by the electronic device can be replaced by: the electronic device acquires the background object models in the 3D scene corresponding to the image.
  • the process of the electronic device acquiring the background object models in the first 3D scene may include: the electronic device traverses the tags of all the object models in the first 3D scene to acquire all the background object models in the first 3D scene, where the tag is used to indicate whether an object model is a background object model; the first brightness area class includes the background brightness areas corresponding to all the background object models in the first image.
  • each object model in the 3D scene may carry a label, and the label is used to indicate that the object model belongs to a background object model or a foreground object model.
  • the label may be manually labeled, or it may be automatically classified by the electronic device according to the category of the object model. Labels can be represented by numbers, letters, or strings. For example, when the label of the object model is “0”, it indicates that the object model belongs to the background object model; when the label of the object model is “1”, it indicates that the object model belongs to the foreground object model.
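  • A minimal sketch of this tag traversal and the background-change check of step 1004, using the "0"/"1" tag convention from the example above (the scene data structure is an illustrative assumption):

```python
def background_models(scene_models: dict) -> dict:
    """Traverse object-model tags and collect the background object models.

    scene_models maps a model name to a (tag, state) pair; tag "0" marks
    a background object model and tag "1" a foreground object model, as
    in the example above.
    """
    return {name: state for name, (tag, state) in scene_models.items()
            if tag == "0"}

def background_unchanged(scene1: dict, scene2: dict) -> bool:
    """Step 1004: the previous frame's blurred background class can be
    reused only when the background object models and their states match."""
    return background_models(scene1) == background_models(scene2)
```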
  • Step 1004 The electronic device judges whether the background object model in the first 3D scene is the same as the background object model in the second 3D scene; when the background object model in the first 3D scene is the same as the background object model in the second 3D scene , Go to step 1005; when the background object model in the first 3D scene is different from the background object model in the second 3D scene, go to step 1006.
  • that is, the electronic device determines whether the background object models in the first 3D scene have changed relative to the background object models in the second 3D scene.
  • Step 1005 The electronic device obtains a first intermediate image obtained after Gaussian blurring is performed on the background class of the second image, and uses the first intermediate image as an intermediate image corresponding to the background class of the first image.
  • the second image is rendered based on the second 3D scene.
  • in this case, the electronic device determines that the background object models in the first 3D scene corresponding to the first image have not changed relative to the background object models in the second 3D scene corresponding to the second image, so it can be considered that the background class of the first image is the same as the background class of the second image.
  • the process in which the electronic device determines whether the background object models in the first 3D scene are the same as the background object models in the second 3D scene is the same as the process in step 603 in which the electronic device determines whether the first brightness area class of the first image is the same as the target brightness area class of the second image. Therefore, for the explanation of the above step 1004 and step 1005, refer to the above step 603, which is not repeated in this embodiment of the application. After performing step 1005, the electronic device performs step 1007.
  • Step 1006 The electronic device performs Gaussian blur processing on the background class of the first image to obtain an intermediate image corresponding to the background class of the first image.
  • in this case, the electronic device determines that the background object models in the first 3D scene corresponding to the first image have changed relative to the background object models in the second 3D scene corresponding to the second image, so it can be considered that the background class of the first image is different from the background class of the second image. For the method of performing Gaussian blur processing for the background class of the first image, refer to the process of performing Gaussian blur processing for the large-size brightness areas in step 805, which is not repeated in the embodiment of the present application.
  • Step 1007 The electronic device performs image fusion processing on the first image, the intermediate image corresponding to the background class of the first image, and the intermediate image corresponding to the foreground class of the first image, to obtain a flooded image of the first image.
  • step 1003 may be performed before step 1001, or may also be performed at the same time as step 1001.
  • the electronic device determines whether the background object model in the first 3D scene is the same as the background object model in the second 3D scene while obtaining the first image by rendering according to the first 3D scene.
  • the electronic device judges whether the background object model in the first 3D scene is the same as the background object model in the second 3D scene while performing brightness filtering processing on the first image.
  • it should be noted that the electronic device may also determine whether the foreground object models in the first 3D scene are the same as the foreground object models in the second 3D scene. When the foreground object models in the first 3D scene are the same as the foreground object models in the second 3D scene, the electronic device obtains the intermediate image corresponding to the foreground class of the first image, which is the image obtained after Gaussian blur processing is performed on the foreground class of the second image.
  • when the foreground object models in the first 3D scene are different from the foreground object models in the second 3D scene, the electronic device executes the above step 1002, that is, performs Gaussian blur processing on the foreground class of the first image to obtain the intermediate image corresponding to the foreground class of the first image.
  • in this way, when the foreground object models in the first 3D scene are the same as the foreground object models in the second 3D scene, the intermediate image corresponding to the foreground class of the first image is obtained directly after Gaussian blur processing is performed on the foreground class of the second image, without performing Gaussian blur processing for the foreground class of the first image, which reduces the number of Gaussian blur processing operations on the first image and thereby reduces the complexity of the image flooding process.
  • FIG. 11 is a schematic flowchart of an electronic device implementing an image flooding processing method according to an embodiment of the present application.
  • the object model in the first 3D scene includes object 1, object 2, object 3, object 4, object 5, and object 6.
  • the tags of the object 1, the object 2, and the object 3 are all background tags, and the background tags are used to indicate that the object model belongs to the background object model.
  • Object 4, object 5, and object 6 have labels that are foreground labels (also called role labels), and the foreground labels are used to indicate that the object model belongs to the foreground object model.
  • All the object models in the first 3D scene are stored in the scene management module of the electronic device.
  • the renderer module renders the first image according to the first 3D scene.
  • the optimized large-dimension highlight area flooding algorithm module performs reduced pixel sampling processing on the first image with a 1/4 reduction magnification to obtain a reduced image whose resolution is 1/4 times the original resolution (the resolution of the first image) (referred to as the 1/4 reduced image), and then performs Gaussian blur processing on the 1/4 reduced image to obtain an intermediate image corresponding to the foreground class of the first image (referred to as the 1/4 Gaussian blurred result image); it also performs reduced pixel sampling processing on the first image with a 1/8 reduction magnification to obtain a reduced image whose resolution is 1/8 times the original resolution (referred to as the 1/8 reduced image), and then performs Gaussian blur processing on the 1/8 reduced image to obtain another intermediate image corresponding to the foreground class of the first image (referred to as the 1/8 Gaussian blurred result image).
  • the brightness area change judgment module traverses objects 1 to 6 in the scene management module, determines objects 1 to 3 carrying background tags, and judges whether object 1, object 2, and object 3 in the first 3D scene are the same as the background object models in the second 3D scene.
  • when the background object models are the same, the optimized large-dimension highlight area flooding algorithm module directly obtains the intermediate images obtained after Gaussian blur processing is performed for the second image (that is, the 1/16 Gaussian blurred result image and the 1/32 Gaussian blurred result image shown in the figure).
  • when the background object models are different, the optimized large-dimension highlight area flooding algorithm module performs Gaussian blur processing for the background class of the first image to obtain the intermediate images corresponding to the background class of the first image.
  • the process in which the optimized large-dimension highlight area flooding algorithm module performs Gaussian blur processing for the background class of the first image includes: performing reduced pixel sampling processing on the first image with a 1/16 reduction magnification to obtain a reduced image whose resolution is 1/16 times the original resolution (referred to as the 1/16 reduced image), and then performing Gaussian blur processing on the 1/16 reduced image to obtain an intermediate image corresponding to the background class of the first image (referred to as the 1/16 Gaussian blurred result image); and performing reduced pixel sampling processing on the first image with a 1/32 reduction magnification to obtain a reduced image whose resolution is 1/32 times the original resolution (referred to as the 1/32 reduced image), and then performing Gaussian blur processing on the 1/32 reduced image to obtain another intermediate image corresponding to the background class of the first image (referred to as the 1/32 Gaussian blurred result image).
  • the process in which the optimized large-dimension highlight area flooding algorithm module obtains the intermediate images after Gaussian blur processing for the second image includes: directly obtaining the two intermediate images corresponding to the background class of the second image, namely the 1/16 Gaussian blurred result image and the 1/32 Gaussian blurred result image.
  • the rendering engine module performs image fusion processing on the first image, the 1/4 Gaussian blurred result image, the 1/8 Gaussian blurred result image, the 1/16 Gaussian blurred result image, and the 1/32 Gaussian blurred result image to obtain the generalized image of the first image. Light image.
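  • Putting the earlier sketches together, a hedged composition of the FIG. 11 flow might look as follows (all helper names come from the illustrative sketches above, not from the modules of FIG. 3):

```python
def flood_process_frame(first_image, scene1, scene2, cached_background_blurs):
    """One frame of the FIG. 11 flow (illustrative composition only).

    cached_background_blurs holds the 1/16 and 1/32 Gaussian blurred result
    images previously computed for the second image's background class.
    """
    bright = brightness_filter(first_image)
    # Foreground class: always blurred at 1/4 and 1/8 in this flow.
    foreground = blur_at_magnifications(bright, (1/4, 1/8))
    # Background class: reuse the cached result images when unchanged.
    if background_unchanged(scene1, scene2) and cached_background_blurs:
        background = cached_background_blurs
    else:
        background = blur_at_magnifications(bright, (1/16, 1/32))
    flooded = fuse_flood(first_image, foreground + background)
    return flooded, background   # cache `background` for the next frame
```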
  • Table 1 records, for an electronic device with the image flood processing function running a high-quality game application (such as a game application with a frame rate of 60), the CPU and GPU load and the system power consumption when the image flood processing method provided by the related technology (referred to as the related-technology algorithm) is used, and the CPU and GPU load and the system power consumption when the image flood processing method shown in FIG. 11 (referred to as the algorithm of this application) is used.
  • each step in the above-mentioned image flooding processing method may be executed by the same or different modules in the electronic device shown in FIG. 3.
  • the rendering engine module 302 may be used to execute the above-mentioned step 601, step 801, and step 1001.
  • the optimized large-dimension highlight area flooding algorithm module 30231 can be used to execute the above steps 603, 804, 805, 808, 809, 1002, 1005, and 1006.
  • the brightness area change judgment module 30211 can be used to execute the above step 602, step 802, step 803, step 806, step 807, step 1003, step 1004, and steps 6031 to 6036.
  • the post-processing floodlight effect module 3023 can be used to perform step 604, step 810, and step 1007.
  • the rendering engine module 302 is also used to present the flood-processed image of the first image generated by the optimized large-dimension highlight area flooding algorithm module 30231 on the display device of the terminal by calling the OpenGL ES interface layer 3031 or the Vulkan interface layer 3032.
  • in summary, in the image flood processing method provided by the embodiment of the present application, when it is determined that the first brightness area class in the first image is the same as the target brightness area class of the flood-processed second image, the intermediate image obtained after Gaussian blur processing is performed on the target brightness area class of the second image can be obtained directly, without performing Gaussian blur processing on the first brightness area class of the first image. Therefore, under the premise of guaranteeing the flooding effect of the first image, the embodiment of the present application reduces the number of Gaussian blur processing operations on the first image, thereby reducing the complexity of the image flooding process, reducing the operating load of the electronic device after the flood processing function is turned on, and reducing the power consumption of the electronic device.
  • further, when it is determined that a brightness area class in the flood-processed second image is the same as the second brightness area class of the first image, the electronic device may also directly obtain the intermediate image obtained after Gaussian blur processing is performed on that brightness area class of the second image, without performing Gaussian blur processing for the second brightness area class of the first image. Therefore, under the premise of ensuring the flooding effect of the first image, the embodiment of the present application can further reduce the number of Gaussian blur processing operations on the first image, thereby reducing the complexity of the image flooding process, reducing the load of the electronic device during operation after the flood processing function is turned on, and reducing the power consumption of the electronic device.
  • FIG. 12 shows a block diagram of an image flood processing apparatus provided by an embodiment of the present application.
  • the apparatus 1200 may include:
  • the first obtaining module 1201 is configured to obtain a first brightness area category of a first image, and the first brightness area category includes one or more brightness areas of the first image.
  • the second acquisition module 1202 is configured to acquire, after it is determined that the first brightness area class of the first image is the same as the target brightness area class of the flood-processed second image, the first intermediate image obtained after Gaussian blur processing is performed on the target brightness area class of the second image.
  • the generating module 1203 is configured to generate a flooded image of the first image based on the first image and the first intermediate image.
  • the first image is rendered based on the first three-dimensional scene
  • the second image is rendered based on the second three-dimensional scene
  • the first brightness area class being the same as the target brightness area class includes: the state information of the object models corresponding to the brightness areas in the first brightness area class in the first three-dimensional scene is the same as the state information of the object models corresponding to the brightness areas in the target brightness area class in the second three-dimensional scene, and the camera parameters of the first three-dimensional scene are the same as the camera parameters of the second three-dimensional scene.
  • the apparatus 1200 may further include:
  • the third acquiring module 1204 is configured to acquire state information of all object models corresponding to the first brightness area class in the first three-dimensional scene, and state information of all object models corresponding to the target brightness area class in the second three-dimensional scene.
  • the fourth acquiring module 1205 is configured to: after determining that the state information of all object models corresponding to the first brightness area class in the first three-dimensional scene is the same as the state information of all object models corresponding to the target brightness area class in the second three-dimensional scene, acquire the camera parameters of the first three-dimensional scene and the camera parameters of the second three-dimensional scene.
  • the determining module 1206 is configured to: after determining that the camera parameters of the first three-dimensional scene are the same as the camera parameters of the second three-dimensional scene, determine that the first brightness area class is the same as the target brightness area class.
  • the state information of an object model includes the pose information and the surface material information of the object model; the camera parameters include the pose parameter, the window parameter, and the field-of-view parameter of the camera.
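  • a minimal sketch of this two-stage comparison is shown below; the container types and the function name are illustrative assumptions, not names from the embodiments. A brightness area class is judged unchanged only if the corresponding object models match one-to-one in pose and surface material and, additionally, the camera pose, window, and field of view are unchanged.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelState:
    pose: tuple      # position, orientation, and scale factors of the object model
    material: tuple  # surface color and texture map identifiers


@dataclass(frozen=True)
class CameraParams:
    pose: tuple    # camera position and orientation
    window: tuple  # viewport (window) parameters
    fov: float     # field-of-view angle


def same_brightness_area_class(models_a, models_b, cam_a, cam_b):
    # Stage 1: the object models behind the two classes must match one-to-one.
    if models_a != models_b:
        return False
    # Stage 2: only when the models match are the camera parameters compared.
    return cam_a == cam_b
```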
  • the first obtaining module 1201 is configured to: traverse the tags of all object models in the first three-dimensional scene, where a tag indicates whether an object model is a background object model; and obtain all background object models in the first three-dimensional scene, where the first brightness area class includes the background brightness areas corresponding to all the background object models in the first image.
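  • the traversal itself reduces to a filter over the scene's object models. In the hypothetical sketch below, each model carries a `tag` attribute following the convention described in the embodiments, where "0" marks a background object model and "1" a foreground one.

```python
def background_brightness_class(scene_models):
    # Collect every object model whose tag marks it as background; the
    # highlight regions these models produce in the rendered frame form
    # the first brightness area class (the background class).
    return [m for m in scene_models if m.tag == "0"]
```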
  • optionally, the first image further includes a second brightness area class, and the second brightness area class includes the brightness areas in the first image other than the background brightness areas.
  • the apparatus 1200 may further include:
  • the Gaussian blur processing module 1207 is configured to perform Gaussian blur processing on the second brightness area class to obtain a second intermediate image.
  • the generating module 1203 is configured to perform image fusion processing on the first image, the first intermediate image, and the second intermediate image to obtain a flooded image of the first image.
  • alternatively, the first image further includes a second brightness area class, and the second brightness area class includes one or more brightness areas of the first image.
  • in this case, the Gaussian blur processing module 1207 is configured to: after determining that the second brightness area class of the first image is different from every brightness area class of the second image, perform Gaussian blur processing on the second brightness area class to obtain a second intermediate image.
  • the generating module 1203 is configured to perform image fusion processing on the first image, the first intermediate image, and the second intermediate image to obtain the flooded image of the first image.
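  • the fusion step can be sketched as an upsample-and-accumulate pass over the blurred intermediates. The version below is a minimal illustration assuming OpenCV; `cv2.add` saturates instead of wrapping around, which is the behavior wanted when stacking highlights onto the original image.

```python
import cv2


def fuse_flooded_image(base, intermediates):
    # Upsample each low-resolution blurred intermediate back to the base
    # resolution and accumulate it onto the original image.
    h, w = base.shape[:2]
    out = base.copy()
    for inter in intermediates:
        up = cv2.resize(inter, (w, h), interpolation=cv2.INTER_LINEAR)
        out = cv2.add(out, up)  # saturating add keeps highlights from wrapping
    return out
```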
  • optionally, when the sizes of the brightness areas in the first brightness area class are all larger than the sizes of the brightness areas in the second brightness area class, the Gaussian blur processing module 1207 is configured to: perform downsampling on the first image at a first reduction magnification to obtain a first reduced image, and perform Gaussian blur processing on the first reduced image to obtain the second intermediate image.
  • alternatively, when the sizes of the brightness areas in the first brightness area class are all smaller than the sizes of the brightness areas in the second brightness area class, the Gaussian blur processing module 1207 is configured to: perform downsampling on the first image at a second reduction magnification to obtain a second reduced image, and perform Gaussian blur processing on the second reduced image to obtain the second intermediate image.
  • the first reduction magnification is greater than the second reduction magnification.
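  • the two magnification regimes correspond to the k1 = 2^n (-3 ≤ n ≤ 0) and k2 = 2^m (m < -3) ranges given in the description. The sketch below (illustrative only, assuming OpenCV) builds one blurred intermediate per reduction magnification: the mild 1/4 and 1/8 scales preserve small highlight detail, while the aggressive 1/16 and 1/32 scales isolate large highlight regions.

```python
import cv2

K1_SCALES = (1 / 4, 1 / 8)    # first reduction magnification, k1 = 2^n with -3 <= n <= 0
K2_SCALES = (1 / 16, 1 / 32)  # second reduction magnification, k2 = 2^m with m < -3


def blurred_intermediates(image, scales, ksize=9):
    # One downsampled, Gaussian-blurred intermediate per reduction magnification.
    results = []
    for s in scales:
        small = cv2.resize(image, None, fx=s, fy=s,
                           interpolation=cv2.INTER_AREA)
        results.append(cv2.GaussianBlur(small, (ksize, ksize), 0))
    return results
```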
  • the apparatus 1200 may further include:
  • the filter processing module 1208 is configured to perform brightness filter processing on the first image before generating the flooded image of the first image based on the first image and the first intermediate image.
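  • brightness filtering simply zeroes out pixels whose value falls below the brightness threshold, so that only highlight regions feed the subsequent blur passes. A minimal NumPy sketch, assuming a single-channel luminance image:

```python
import numpy as np


def brightness_filter(luma, threshold):
    # Discard pixels below the brightness threshold; only the remaining
    # highlight pixels contribute to the flood effect.
    return np.where(luma >= threshold, luma, 0)
```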
  • the first image and the second image are two consecutive frames of images, and the second image is the previous frame of the first image.
  • the image flood processing apparatus provided by the embodiment of the present application can, when it determines that the first brightness area class of the first image is the same as the target brightness area class of the flood-processed second image, directly use the intermediate image obtained by performing Gaussian blur processing on the target brightness area class of the second image, without performing Gaussian blur processing on the first brightness area class of the first image. Therefore, on the premise of guaranteeing the flood effect of the first image, the embodiment reduces the number of Gaussian blur passes performed on the first image, lowering the complexity of the image flood processing procedure and, in turn, the running load and power consumption of the apparatus after the flood processing function is enabled.
  • similarly, when the apparatus determines that some brightness area class of the flood-processed second image is the same as the second brightness area class of the first image, it can directly use the intermediate image obtained by performing Gaussian blur processing on that brightness area class of the second image, without performing Gaussian blur processing on the second brightness area class of the first image, further reducing the number of Gaussian blur passes, the processing complexity, and the apparatus's running load and power consumption.
  • each module in the above device can be implemented by software or a combination of software and hardware.
  • the hardware may be a logic integrated circuit module, which may specifically include a transistor, a logic gate array, or an arithmetic logic circuit.
  • the software exists in the form of a computer program product and is stored in a computer-readable storage medium. The software can be executed by a processor. Therefore, alternatively, the image flood processing device may be implemented by a processor executing a software program, which is not limited in this embodiment.
  • the embodiment of the present application further provides an image flood processing apparatus. The apparatus includes a processor 1601 and a memory 1602; when the processor 1601 executes the computer program stored in the memory 1602, the image flood processing apparatus performs the image flood processing method provided by the embodiments of the present application.
  • the image flooding processing device may be deployed in the terminal.
  • the device further includes a communication interface 1603 and a bus 1604.
  • the processor 1601, the memory 1602, and the communication interface 1603 are communicatively connected through the bus 1604.
  • the embodiment of the present application also provides a storage medium.
  • the storage medium may be a non-volatile computer-readable storage medium.
  • a computer program is stored in the storage medium, and the computer program instructs a processing component to perform any of the image flood processing methods provided by the embodiments of the present application.
  • the storage medium may include: read-only memory (ROM) or random access memory (RAM), magnetic disks or optical disks, and other media that can store program codes.
  • the embodiment of the present application further provides a computer program product containing instructions. When the computer program product runs on a computer, the computer performs the image flood processing method provided by the embodiments of the present application.
  • the computer program product may include one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted through the computer-readable storage medium.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
  • the embodiments of the present application further provide a chip, such as a CPU chip, which includes one or more physical cores and a storage medium. The one or more physical cores implement the foregoing image flood processing method after reading the computer instructions in the storage medium.
  • in other embodiments, the chip may implement the foregoing image flood processing method in pure hardware or in a combination of software and hardware; that is, the chip includes a logic circuit, and when the chip runs, the logic circuit is used to implement the method. The logic circuit may be a programmable logic circuit.
  • similarly, a GPU can be implemented in the same manner as the CPU.
  • a person of ordinary skill in the art may understand that all or some of the steps of the foregoing embodiments may be implemented by hardware, or by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
  • the terms “first”, “second” and “third” are only used for descriptive purposes, and cannot be understood as indicating or implying relative importance.
  • the term “at least one” refers to one or more, and the term “plurality” refers to two or more, unless expressly defined otherwise.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

This application discloses an image flood processing method and apparatus, and a storage medium, which belong to the field of image processing technology. An electronic device obtains a first brightness area class of a first image, where the first brightness area class includes one or more brightness areas of the first image. After determining that the first brightness area class of the first image is the same as a target brightness area class of a flood-processed second image, the electronic device obtains a first intermediate image obtained by performing Gaussian blur processing on the target brightness area class of the second image. Based on the first image and the first intermediate image, the electronic device generates a flooded image of the first image. This application can reduce the running load of an electronic device after the flood processing function is enabled, and thereby reduce its power consumption.

Description

图像泛光处理方法及装置、存储介质
本申请要求于2019年10月23日提交的申请号为201911014260.3、发明名称为“图像泛光处理方法及装置、存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及图像处理技术领域,特别涉及一种图像泛光处理方法及装置、存储介质。
背景技术
泛光是一种常见的光学现象,一般指物理相机拍摄亮度较高的物体时出现的光晕溢出现象。对图像进行泛光处理可以从视觉上提高图像的对比度,增强图像的表现力,达到较好的渲染效果。随着图像处理技术的发展,图像泛光处理已广泛应用于三维游戏和动画制作等领域。
目前,开启图像泛光处理功能的电子设备对图像进行泛光处理的过程包括:首先对原始图像进行亮度滤波处理,剔除原始图像中像素值低于亮度阈值的像素,得到滤波图像。然后分别采用1/4、1/8、1/16和1/32的缩减倍率对滤波图像进行缩减像素采样处理,得到分辨率分别为原始图像的1/4倍、1/8倍、1/16倍和1/32倍的四个低分辨率图像。再对该四个低分辨率图像分别进行高斯模糊处理。最后对原始图像以及经过高斯模糊处理的四个低分辨率图像进行图像融合处理,得到该原始图像的泛光图像。其中,该原始图像包括小尺寸的亮度区域和大尺寸的亮度区域共两个部分,对分辨率分别为原始图像的1/4倍和1/8倍的低分辨率图像进行高斯模糊处理,主要用于实现对原始图像上的小尺寸的亮度区域的泛光处理,使得原始图像上小尺寸的亮度区域达到泛光效果;对分辨率分别为原始图像的1/16倍和1/32倍的低分辨率图像进行高斯模糊处理,主要用于实现对原始图像上的大尺寸的亮度区域的泛光处理,使得图像上大尺寸的亮度区域达到泛光效果。
但是,开启图像泛光处理功能的电子设备会对每帧原始图像分别执行上述泛光处理过程,由于一次泛光处理过程中包括针对原始图像中不同尺寸的亮度区域执行的多次高斯模糊处理,而高斯模糊处理的运算复杂度较高,因此目前开启图像泛光处理功能的电子设备在运行过程中负载较大且功耗较高。
发明内容
本申请提供了一种图像泛光处理方法及装置、存储介质等,以降低开启图像泛光处理功能的电子设备在运行过程中负载和功耗。
下面通过不同的方面介绍本申请。应理解的是,以下不同方面的实现方式和有益效果可互相参考。
本申请中出现的“第一”和“第二”仅为了区分两个对象,并没有先后顺序的意思。
第一方面,提供了一种图像泛光处理方法。该方法包括:电子设备获取第一图像的第一 亮度区域类,第一亮度区域类中包括第一图像的一个或多个亮度区域。在确定第一图像的第一亮度区域类与经过泛光处理的第二图像的目标亮度区域类相同后,电子设备获取针对第二图像的目标亮度区域类进行高斯模糊处理后得到的第一中间图像。电子设备基于第一图像以及第一中间图像,生成第一图像的泛光图像。
可选地,泛光处理的对象可以是基于三维场景渲染得到的图像;也可以是物理相机拍摄某一场景后直接生成的图像。本申请中,以第一图像和第二图像均为基于三维场景渲染得到的图像为例进行说明。其中,第一图像基于第一三维场景渲染得到,第二图像基于第二三维场景渲染得到。可选地,第一图像和第二图像可以为连续的两帧图像,即第二图像为第一图像的前一帧图像。这样,电子设备仅需存储当前显示的图像的前一帧图像,既可以完成对第一图像的第一亮度区域类与第二图像的目标亮度区域类的判断操作,且电子设备所需存储的图像的数量较少,可以保证电子设备的存储性能。
由于本申请中,电子设备在确定第一图像的第一亮度区域类与经过泛光处理的第二图像的目标亮度区域类相同时,可以直接获取针对该第二图像的目标亮度区域类进行高斯模糊处理后得到的中间图像,而无需针对第一图像的第一亮度区域类进行高斯模糊处理,因此,本申请在保证第一图像的泛光效果的前提下,可以减少对第一图像进行高斯模糊处理的次数,进而降低了图像泛光处理过程的复杂度,从而降低了电子设备在开启泛光处理功能后在运行过程中的负载,降低了电子设备的功耗。
可选地,当第一图像基于第一三维场景渲染得到,第二图像基于第二三维场景渲染得到,则第一图像的第一亮度区域类与第二图像的目标亮度区域类相同,包括:第一亮度区域类中的亮度区域在第一三维场景中对应的物体模型的状态信息与目标亮度区域类中的亮度区域在第二三维场景中对应的物体模型的状态信息相同,且第一三维场景的相机参数与第二三维场景的相机参数相同。
在一些实现方式中,电子设备判断第一图像的第一亮度区域类与第二图像的目标亮度区域类是否相同的过程可以包括:
电子设备获取第一三维场景中第一亮度区域类对应的所有物体模型的状态信息,以及第二三维场景中目标亮度区域类对应的所有物体模型的状态信息。在确定第一三维场景中第一亮度区域类对应的所有物体模型的状态信息与第二三维场景中目标亮度区域类对应的所有物体模型的状态信息相同后,电子设备获取第一三维场景的相机参数以及第二三维场景的相机参数。在确定第一三维场景的相机参数与第二三维场景的相机参数相同后,电子设备确定第一亮度区域类与目标亮度区域类相同。
可选地,物体模型的状态信息可以包括位姿信息和表面材质信息。其中,该位姿信息可以包括物体模型的位置,物体模型的姿态,以及物体模型的缩放系数。该表面材质信息可以包括:物体模型的表面材质的颜色信息以及表面材质的贴图信息。相机参数包括:相机的位姿参数、视窗参数和视场角参数。
上述第一3D场景中第一亮度区域类对应的所有物体模型的状态信息与第二3D场景中目标亮度区域类对应的所有物体模型的状态信息相同,指第一亮度区域类中的第一亮度区域与目标亮度区域类中的第二亮度区域一一对应,且每个第一亮度区域对应的物体模型的状态信息与对应的第二亮度区域对应的物体模型的状态信息相同。
本申请中,当第一图像的第一亮度区域类与经过泛光处理的第二图像的目标亮度区域类 相同时,电子设备可以直接获取针对该目标亮度区域进行高斯模糊处理后得到的第一中间图像,并将该第一中间图像作为针对第一图像的第一亮度区域类进行高斯模糊处理后的图像。此时,开启图像泛光处理功能的电子设备在运行过程中,无需针对该第一图像的第一亮度区域类进行高斯模糊处理,降低了该电子设备的运行负载,且降低了该电子设备的功耗。
可选地,第一图像还包括第二亮度区域,第二亮度区域类包括第一图像中除背景类亮度区域以外的其它亮度区域。
在一些实现方式中,电子设备在确定第一图像的第二亮度区域类与第二图像的任一亮度区域类不同后,还可以针对第二亮度区域类进行高斯模糊处理,得到第二中间图像。则电子设备基于第一图像以及第一中间图像,生成第一图像的泛光图像的过程包括:电子设备对第一图像、第一中间图像以及第二中间图像进行图像融合处理,得到第一图像的泛光图像。
其中,电子设备可以先判断经过泛光处理的第二图像中是否存在某个亮度区域类与第一图像的第二亮度区域类相同,在确定第一图像的第二亮度区域类与第二图像的任一亮度区域类不同后,针对第二亮度区域类进行高斯模糊处理,得到第二中间图像。其中,电子设备可以先判断经过泛光处理的第二图像中是否存在某个亮度区域类与第一图像的第二亮度区域类相同的过程可以参考上述电子设备判断第一图像的第一亮度区域类与经过泛光处理的第二图像的目标亮度区域类是否相同的过程,本申请对此不做赘述。
可选地,当第一亮度区域类中的亮度区域的尺寸均大于第二亮度区域类中的亮度区域的尺寸,则电子设备针对第一图像的第二亮度区域类进行高斯模糊处理的过程,包括:采用第一缩减倍率对第一图像进行缩减像素采样处理,得到第一缩减图像。对第一缩减图像进行高斯模糊处理,得到第二中间图像。或者,当第一亮度区域类中的亮度区域的尺寸均小于第二亮度区域类中的亮度区域的尺寸,则电子设备针对第一图像的第二亮度区域类进行高斯模糊处理的过程,包括:采用第二缩减倍率对第一图像进行缩减像素采样处理,得到第二缩减图像。对第二缩减图像进行高斯模糊处理,得到第二中间图像。其中,第一缩减倍率大于第二缩减倍率。
可选地,第一缩减倍率k1满足:k1=2^n,n为整数,且-3≤n≤0。本申请中,第一缩减倍率可以指单一的缩减倍率,也可以指多个缩减倍率的集合。例如n可取值-2或-3,则第一缩减倍率k1包括1/4和1/8。第二缩减倍率k2可以满足:k2=2^m,m为整数,且m<-3。本申请中,第二缩减倍率可以指单一的缩减倍率,也可以指多个缩减倍率的集合。例如m可取值-4或-5,则第二缩减倍率k2包括1/16和1/32。
可见,当第一图像包括多个亮度区域类时,电子设备还可以在确定经过泛光处理后的第二图像中存在某个亮度区域类与第一图像的第二亮度区域类相同时,直接获取针对第二图像中与第二亮度区域类相同的亮度区域类进行高斯模糊处理后得到中间图像,而无需针对第一图像的第二亮度区域类进行的高斯模糊处理,因此,本申请在保证第一图像的泛光效果的前提下,可以进一步减少对第一图像进行高斯模糊处理的次数,进而降低了图像泛光处理过程的复杂度,从而降低了电子设备在开启泛光处理功能后在运行过程中的负载,减少了电子设备的功耗。
在一些实现方式中,电子设备还可以针对第二亮度区域类进行高斯模糊处理,得到第二中间图像。则电子设备基于第一图像以及第一中间图像,生成第一图像的泛光图像的过程包括:电子设备对第一图像、第一中间图像以及第二中间图像进行图像融合处理,得到第一图 像的泛光图像。
可选地,电子设备获取第一图像的第一亮度区域类的过程可以包括:遍历第一三维场景中的所有物体模型的标签,该标签用于指示物体模型是否为背景类物体模型。获取第一三维场景中的所有背景类物体模型,第一亮度区域类包括所有背景类物体模型在第一图像中对应的背景类亮度区域。
可选地,在基于第一图像以及第一中间图像,生成第一图像的泛光图像之前,电子设备还可以对第一图像进行亮度滤波处理。
其中,电子设备对第一图像进行亮度滤波处理,即剔除该第一图像中像素值小于亮度阈值的像素,以保留该第一图像中像素值大于或等于该亮度阈值的像素。
第二方面,提供了一种图像泛光处理装置。该装置包括多个功能模块,该多个功能模块相互作用,实现上述第一方面及其各实现方式中的方法。该多个功能模块可以基于软件、硬件或软件和硬件的结合实现,且该多个功能模块可以基于具体实现进行任意组合或分割。
第三方面,提供了一种图像泛光处理装置,例如终端。该图像泛光处理装置包括处理器和存储器。处理器通常包括CPU和GPU。存储器用于存储计算机程序;CPU用于执行存储器存储的计算机程序时实现前述第一方面任意一种图像泛光处理方法。这两种类型的处理器可以为两个芯片,也可以集成在同一块芯片上。
第四方面,提供了一种存储介质,该存储介质可以是非易失性的。该存储介质内存储有计算机程序,计算机程序在被处理组件执行时使得处理组件实现前述第一方面任意一种图像泛光处理方法。
第五方面,提供了一种包含计算机可读指令的计算机程序或计算机程序产品,当计算机程序或计算机程序产品在计算机上运行时,使得计算机执行前述第一方面任意一种图像泛光处理方法。该计算机程序产品中可以包括一个或多个程序单元,用于实现前述第一方面任意一种图像泛光处理方法。
第六方面,提供了一种芯片,例如CPU。芯片包括逻辑电路,逻辑电路可以为可编程逻辑电路。当芯片运行时用于实现前述第一方面任意一种图像泛光处理方法。
第七方面,提供了一种芯片,例如CPU。芯片包括一个或多个物理核、以及存储介质,一个或多个物理核在读取存储介质中的计算机指令后实现前述第一方面任意一种图像泛光处理方法。
综上所述,本申请提供的图像泛光处理方法,由于电子设备可以在确定第一图像中的第一亮度区域类与经过泛光处理后的第二图像的目标亮度区域类相同时,可以直接获取针对第二图像的目标亮度区域类进行高斯模糊处理后得到中间图像,而无需针对第一图像的第一亮度区域类进行的高斯模糊处理,因此,本申请在保证第一图像的泛光效果的前提下,可以减少对第一图像进行高斯模糊处理的次数,进而降低了图像泛光处理过程的复杂度,从而降低了电子设备在开启泛光处理功能后在运行过程中的负载,减少了电子设备的功耗。
另外,本申请还具有前述各个方面中所提到的效果以及其它可推导出的技术效果,在此不再赘述。
附图说明
图1是本申请实施例提供的一种拍摄场景中的泛光示意图;
图2是本申请实施例提供的一种图像泛光处理方法涉及的电子设备的结构示意图;
图3是本申请实施例结合硬件和软件提供的一种电子设备的形态示意图;
图4是本申请实施例提供的一种创建得到3D场景的流程示意图;
图5是本申请实施例提供的一种根据图形应用程序创建得到的3D场景渲染得到图像的流程示意图;
图6是本申请实施例提供的一种图像泛光处理方法的流程图;
图7是本申请实施例提供的一种判断第一图像的第一亮度区域类与第二图像的目标亮度区域类是否相同的方法流程图;
图8是本申请实施例提供的另一种图像泛光处理方法的流程图;
图9是本申请实施例提供的一个3D游戏界面的示意图;
图10是本申请实施例提供的又一种图像泛光处理方法的流程图;
图11是本申请实施例提供的一种电子设备实现图像泛光处理方法的流程示意图;
图12是本申请实施例提供的一种图像泛光处理装置的框图;
图13是本申请实施例提供的另一种图像泛光处理装置的框图;
图14是本申请实施例提供的又一种图像泛光处理装置的框图;
图15是本申请实施例提供的再一种图像泛光处理装置的框图;
图16是本申请实施例提供的又一种图像泛光处理装置的结构示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
随着计算机图形硬件的发展,游戏和影视等领域中对图像呈现的效果要求越来越高。采用3A级别规格制作的游戏(例如三维(three dimensional,3D)游戏)和影视(例如动画)中图像的呈现效果越来越接近物理相机的拍摄效果。目前,可以通过对图像进行图像后处理,使图像的呈现效果接近物理相机的拍摄效果。
为便于理解,下面对本申请实施例中涉及的名词进行解释。
图像后处理:指对图像进行优化处理的过程,主要用于实现图像抗锯齿、高动态范围图像(high dynamic range,HDR)和泛光等特性的增强。图像后处理技术包括泛光处理、抗锯齿处理、运动模糊处理和景深处理等图像处理技术。图像后处理技术在一定程度上可以认为是类似于PS(photoshop)的滤镜处理技术。图像后处理的对象可以是基于三维场景渲染得到的图像。
泛光:是一种常见的光学现象。由于物理相机(也即是真实的摄像机或照相机)在拍摄画面时通常无法完美聚焦,因此,光线通过该物理相机的镜头成像的过程中,会在物体的边缘产生衍射,出现光晕溢出现象。泛光在亮度较低(弱光)的场景中并不容易察觉,但是在亮度较高(强光)场景中较为明显。因此,泛光一般指物理相机拍摄亮度较高的物体时出现的光晕溢出现象。示例地,图1是本申请实施例提供的一种拍摄场景中的泛光示意图。如图1所示,拍摄场景中有一个明亮的窗户。对于室内场景,由于物体亮度较低,因此物理相机拍摄的室内物体具有清晰的轮廓。对于室外场景,由于窗外的太阳亮度较高,因此太阳发射的光线会超出太阳自身轮廓,在其自身轮廓周围产生模糊效果,即出现泛光现象。
泛光效果:在计算机图形学中,泛光效果又称高光,是用于视频游戏、演示动画和HDR中的一种计算机图形效果。泛光效果会在高亮度物体周围产生条纹或羽毛状的光芒,以模糊图像细节,即模仿物理相机成像过程中的泛光现象,使电子设备渲染的图像显得更加真实。
通过对图像进行泛光处理可以使图像具有泛光效果,从而从视觉上提高图像的对比度,增强图像的表现力和图像真实性,达到较好的渲染效果。可选地,泛光处理的对象可以是基于3D场景渲染得到的图像;也可以是物理相机拍摄某一场景后直接生成的图像。本申请实施例中,均以泛光处理的对象为基于3D场景渲染得到的图像为例进行说明。
目前开启图像泛光处理功能的电子设备对图像进行泛光处理的过程包括:首先对原始图像进行亮度滤波处理,剔除原始图像中像素值低于亮度阈值的像素,得到滤波图像。然后分别采用1/4、1/8、1/16和1/32的缩减倍率对滤波图像进行缩减像素采样处理,得到分辨率分别为原始图像的1/4倍、1/8倍、1/16倍和1/32倍的四个低分辨率图像。再对该四个低分辨率图像分别进行高斯模糊处理。最后对原始图像以及经过高斯模糊处理的四个低分辨率图像进行图像融合处理,得到该原始图像的泛光图像。但是,开启图像泛光处理功能的电子设备对每帧原始图像分别执行上述泛光处理过程,由于一次泛光处理过程中包括针对原始图像中不同尺寸的亮度区域执行的多次高斯模糊处理,而高斯模糊处理的运算复杂度较高,因此目前开启图像泛光处理功能的电子设备在运行过程中负载较大且功耗较高。
本申请实施例提供了一种图像泛光处理方法,开启图像泛光处理功能的电子设备对第一图像进行泛光处理时,可以获取第一图像的第一亮度区域类,当第一图像的第一亮度区域类与经过泛光处理的第二图像的目标亮度区域类相同时,获取针对第二图像的目标亮度区域类进行高斯模糊处理后得到的第一中间图像,并基于第一图像以及第一中间图像,生成第一图像的泛光图像。由于本申请实施例中,电子设备在确定第一图像的第一亮度区域类与经过泛光处理的第二图像的目标亮度区域类相同时,可以直接获取针对该第二图像的目标亮度区域类进行高斯模糊处理后得到的中间图像,而无需针对第一图像的第一亮度区域类进行高斯模糊处理,因此,本申请实施例在保证第一图像的泛光效果的前提下,可以减少对第一图像进行高斯模糊处理的次数,进而降低了图像泛光处理过程的复杂度,从而降低了电子设备在开启泛光处理功能后在运行过程中的负载,降低了电子设备的功耗。
本申请实施例以第一图像和第二图像均为基于3D场景渲染得到的图像为例进行说明。其中,第一图像基于第一3D场景渲染得到,第二图像基于第二3D场景渲染得到。第二图像为电子设备在显示第一图像之前显示的图像。可选地,第一图像和第二图像可以为连续的两帧图像,即第二图像为第一图像的前一帧图像。这样,电子设备仅需存储当前显示的图像的前一帧图像,既可以完成对第一图像的第一亮度区域类与第二图像的目标亮度区域类的判断操作,且电子设备所需存储的图像的数量较少,可以保证电子设备的存储性能。
图2是本申请实施例提供的一种图像泛光处理方法涉及的电子设备200的结构示意图。该电子设备200可以但不限于是膝上型计算机、台式计算机、移动电话、智能手机、平板电脑、多媒体播放器、电子阅读器、智能车载设备、智能家电、人工智能设备、穿戴式设备、物联网设备、或虚拟现实/增强现实/混合现实设备等。
电子设备200可以包括处理器210,外部存储器接口220,内部存储器221,通用串行总线(universal serial bus,USB)接口230,充电管理模块240,电源管理模块241,电池242, 天线1,天线2,移动通信模块250,无线通信模块260,音频模块270,扬声器270A,受话器270B,麦克风270C,耳机接口270D,传感器模块280,按键290,马达291,指示器292,摄像头293,显示屏294,以及用户标识模块(subscriber identification module,SIM)卡接口295等。其中传感器模块280可以包括压力传感器280A,陀螺仪传感器280B,气压传感器280C,磁传感器280D,加速度传感器280E,距离传感器280F,接近光传感器280G,指纹传感器280H,温度传感器280J,触摸传感器280K,环境光传感器280L,骨传导传感器280M等。
可以理解的是,本申请实施例示意的结构并不构成对电子设备200的具体限定。在本申请另一些实施例中,电子设备200可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
可以理解的是,本申请实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备200的结构限定。在本申请另一些实施例中,电子设备200也可以采用上述实施例中不同的接口连接方式(例如总线连接方式),或多种接口连接方式的组合。
处理器210可以包括一个或多个处理单元,例如包括中央处理器(central processing unit,CPU)(例如应用处理器(application processor,AP)),图形处理器(graphics processing unit,GPU),进一步的,还可以包括调制解调处理器,图像信号处理器(image signal processor,ISP),微控制器单元(microcontroller unit,MCU),视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
处理器210中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器210中的存储器为高速缓冲存储器。该存储器可以保存处理器210刚用过或循环使用的指令或数据。如果处理器210需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器210的等待时间,因而提高了系统的效率。
在一些实施例中,处理器210可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(serial clock line,SCL)。在一些实施例中,处理器210可以包含多组I2C总线。处理器210可以通过不同的I2C总线接口分别耦合触摸传感器280K,充电器,闪光灯,摄像头293等。例如:处理器210可以通过I2C接口耦合触摸传感器280K,使处理器210与触摸传感器280K通过I2C总线接口通信,实现电子设备200的触摸功能。
I2S接口可以用于音频通信。在一些实施例中,处理器210可以包含多组I2S总线。处理器210可以通过I2S总线与音频模块270耦合,实现处理器210与音频模块270之间的通信。在一些实施例中,音频模块270可以通过I2S接口向无线通信模块260传递音频信号,实现 通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块270与无线通信模块260可以通过PCM总线接口耦合。在一些实施例中,音频模块270也可以通过PCM接口向无线通信模块260传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器210与无线通信模块260。例如:处理器210通过UART接口与无线通信模块260中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块270可以通过UART接口向无线通信模块260传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器210与显示屏294,摄像头293等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器210和摄像头293通过CSI接口通信,实现电子设备200的拍摄功能。处理器210和显示屏294通过DSI接口通信,实现电子设备200的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器210与摄像头293,显示屏294,无线通信模块260,音频模块270,传感器模块280等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口230是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口230可以用于连接充电器为电子设备200充电,也可以用于电子设备200与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。
充电管理模块240用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块240可以通过USB接口230接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块240可以通过电子设备200的无线充电线圈接收无线充电输入。充电管理模块240为电池242充电的同时,还可以通过电源管理模块241为电子设备供电。
电源管理模块241用于连接电池242,充电管理模块240与处理器210。电源管理模块241接收电池242和/或充电管理模块240的输入,为处理器210,内部存储器221,显示屏294,摄像头293,和无线通信模块260等供电。电源管理模块241还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块241也可以设置于处理器210中。在另一些实施例中,电源管理模块241和充电管理模块240也可以设置于同一个器件中。
电子设备200的无线通信功能可以通过天线1,天线2,移动通信模块250,无线通信模块260,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备200中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块250可以提供应用在电子设备200上的包括2G/3G/4G/5G等无线通信的解 决方案。移动通信模块250可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块250可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块250还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块250的至少部分功能模块可以被设置于处理器210中。在一些实施例中,移动通信模块250的至少部分功能模块可以与处理器210的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器270A,受话器270B等)输出声音信号,或通过显示屏294显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器210,与移动通信模块250或其他功能模块设置在同一个器件中。
无线通信模块260可以提供应用在电子设备200上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块260可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块260经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器210。无线通信模块260还可以从处理器210接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备200的天线1和移动通信模块250耦合,天线2和无线通信模块260耦合,使得电子设备200可以通过无线通信技术与网络以及其他设备通信。无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备200通过GPU,显示屏294,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏294和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器210可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏294用于显示图像,视频等。显示屏294包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Mini LED,Micro-LED,Micro-OLED,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备200可以包括1个或N个显示屏294,N为大于1的正整数。
电子设备200可以通过ISP,摄像头293,视频编解码器,GPU,显示屏294以及应用处理器等实现拍摄功能。
ISP用于处理摄像头293反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头293中。
摄像头293用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备200可以包括1个或N个摄像头293,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备200在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备200可以支持一种或多种视频编解码器。这样,电子设备200可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备200的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口220可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备200的存储能力。外部存储卡通过外部存储器接口220与处理器210通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器221可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。内部存储器221可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备200使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器221可以包括高速随机存取存储器,例如双倍速率同步动态随机存储器(double data rate synchronous dynamic random access memory,DDR),还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。处理器210通过运行存储在内部存储器221的指令,和/或存储在设置于处理器中的存储器的指令,执行电子设备200的各种功能应用以及数据处理。
电子设备200可以通过音频模块270,扬声器270A,受话器270B,麦克风270C,耳机接口270D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块270用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转 换为数字音频信号。音频模块270还可以用于对音频信号编码和解码。在一些实施例中,音频模块270可以设置于处理器210中,或将音频模块270的部分功能模块设置于处理器210中。
扬声器270A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备200可以通过扬声器270A收听音乐,或收听免提通话。
受话器270B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备200接听电话或语音信息时,可以通过将受话器270B靠近人耳接听语音。
麦克风270C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风270C发声,将声音信号输入到麦克风270C。电子设备200可以设置至少一个麦克风270C。在另一些实施例中,电子设备200可以设置两个麦克风270C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备200还可以设置三个,四个或更多麦克风270C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口270D用于连接有线耳机。耳机接口270D可以是USB接口230,也可以是3.5毫米的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器280A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器280A可以设置于显示屏294。压力传感器280A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器280A,电极之间的电容改变。电子设备200根据电容的变化确定压力的强度。当有触摸操作作用于显示屏294,电子设备200根据压力传感器280A检测所述触摸操作强度。电子设备200也可以根据压力传感器280A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
陀螺仪传感器280B可以用于确定电子设备200的运动姿态。在一些实施例中,可以通过陀螺仪传感器280B确定电子设备200围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器280B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器280B检测电子设备200抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备200的抖动,实现防抖。陀螺仪传感器280B还可以用于导航,体感游戏场景。
气压传感器280C用于测量气压。在一些实施例中,电子设备200通过气压传感器280C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器280D包括霍尔传感器。电子设备200可以利用磁传感器280D检测翻盖皮套的开合。在一些实施例中,当电子设备200是翻盖机时,电子设备200可以根据磁传感器280D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器280E可检测电子设备200在各个方向上(一般为三轴)加速度的大小。当电子设备200静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横 竖屏切换,计步器等应用。
距离传感器280F,用于测量距离。电子设备200可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备200可以利用距离传感器280F测距以实现快速对焦。
接近光传感器280G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备200通过发光二极管向外发射红外光。电子设备200使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备200附近有物体。当检测到不充分的反射光时,电子设备200可以确定电子设备200附近没有物体。电子设备200可以利用接近光传感器280G检测用户手持电子设备200贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器280G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器280L用于感知环境光亮度。电子设备200可以根据感知的环境光亮度自适应调节显示屏294亮度。环境光传感器280L也可用于拍照时自动调节白平衡。环境光传感器280L还可以与接近光传感器280G配合,检测电子设备200是否在口袋里,以防误触。
指纹传感器280H用于采集指纹。电子设备200可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器280J用于检测温度。在一些实施例中,电子设备200利用温度传感器280J检测的温度,执行温度处理策略。例如,当温度传感器280J上报的温度超过阈值,电子设备200执行降低位于温度传感器280J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备200对电池242加热,以避免低温导致电子设备200异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备200对电池242的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器280K,也称“触控器件”。触摸传感器280K可以设置于显示屏294,由触摸传感器280K与显示屏294组成触摸屏,也称“触控屏”。触摸传感器280K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏294提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器280K也可以设置于电子设备200的表面,与显示屏294所处的位置不同。
骨传导传感器280M可以获取振动信号。在一些实施例中,骨传导传感器280M可以获取人体声部振动骨块的振动信号。骨传导传感器280M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器280M也可以设置于耳机中,结合成骨传导耳机。音频模块270可以基于所述骨传导传感器280M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器280M获取的血压跳动信号解析心率信息,实现心率检测功能。
在本申请另一些实施例中,电子设备200也可以采用上述实施例中不同的接口连接方式,例如以上多种传感器中的部分或全部传感器连接MCU,通过MCU再连接AP。
按键290包括开机键,音量键等。按键290可以是机械按键。也可以是触摸式按键。电子设备200可以接收按键输入,产生与电子设备200的用户设置以及功能控制有关的键信号输入。
马达291可以产生振动提示。马达291可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反 馈效果。作用于显示屏294不同区域的触摸操作,马达291也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器292可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口295用于连接SIM卡。SIM卡可以通过插入SIM卡接口295,或从SIM卡接口295拔出,实现和电子设备200的接触和分离。电子设备200可以支持2个或N个SIM卡接口,N为大于2的正整数。SIM卡接口295可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口295可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口295也可以兼容不同类型的SIM卡。SIM卡接口295也可以兼容外部存储卡。电子设备200通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备200采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备200中,不能和电子设备200分离。
电子设备200的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以分层架构的安卓(Android)系统为例,示例性说明电子设备200的软件结构。当然,电子设备200的操作系统也可以是IOS系统等其它系统,本申请实施例对此不作限定。
示例地,图3是本申请实施例结合硬件和软件提供的一种电子设备的形态示意图。如图3所示,电子设备200包括安卓系统、软件和硬件。硬件包括处理器304,例如GPU和CPU等。软件包括一个或多个图形应用程序,例如参见图3,软件包括图形应用程序301A和图形应用程序301B(统称为图形应用程序301),软件还包括渲染引擎模块302和图形应用程序接口(application program interface,API)层303。示例地,软件中的图形应用程序301可以包括游戏应用程序和3D绘图应用程序等。图3中图形应用程序的数量仅用作示例性说明,不作为对本申请实施例提供的电子设备的限定。
请继续参见图3,渲染引擎模块302包括场景管理模块3021、渲染器模块3022、后处理泛光效果模块3023。场景管理模块3021中包括亮度区域类变化判断模块30211,场景管理模块3021具有渲染引擎接口;后处理泛光效果模块3023中包括优化大维度高亮区域泛光算法模块30231。图形API层303包括嵌入式系统的开放式图形库(OpenGL for embedded systems,OpenGL ES)接口层3031以及Vulkan接口(一个跨平台的绘图应用程序接口)层3032。OpenGL ES是开放式系统中针对手机、掌上电脑(personal digital assistant,PDA)和游戏主机等嵌入式设备而设计的图形库。
如图4所示,图形应用程序通过加载一个或多个物体模型,创建得到3D场景。
图形应用程序301加载一个或多个物体模型,也即是获取该一个或多个物体模型的相关数据,每个物体模型的相关数据包括该物体模型的状态信息。图形应用程序301根据该一个或多个物体模型的状态信息创建得到3D场景(又称渲染场景)。可选地,物体模型的状态信息可以包括位姿信息和表面材质信息。可选地,物体模型的位姿信息包括物体模型的位置,物体模型的姿态,以及物体模型的缩放系数。其中,物体模型的缩放系数为物体模型在每个轴向上的原始长度与在对应轴向上的显示长度的比值。物体模型的表面材质信息包括:物体模型的表面材质的颜色信息以及表面材质的贴图信息。
图形应用程序301创建得到3D场景后,通过渲染引擎接口将该3D场景中的一个或多个物体模型加载到渲染引擎模块302中。
如图5所示,渲染器模块根据图形应用程序创建得到的3D场景渲染得到图像。
渲染器模块3022根据相机参数,获取3D场景中待渲染的一个或多个物体模型,并对待渲染的一个或多个物体模型渲染,得到图像。其中,相机参数包括相机的位置参数、姿态参数、视窗参数和视场角(field of view,FOV)参数。渲染器模块3022渲染得到的图像为不具有泛光效果的图像。
图3中示出的场景管理模块3021及其包括的亮度区域类变化判断模块30211,以及后处理泛光效果模块3023及其包括的优化大维度高亮区域泛光算法模块30231的功能在下面方法实施例中说明。
应理解的是,上述图2和图3示出的软硬件仅是举例,在其他实施例中,可以采用其它类型的软件或硬件。
图6是本申请实施例提供的一种图像泛光处理方法的流程图。可以应用于上述图2或图3所示的电子设备。如图6所示,该方法包括:
步骤601、电子设备对第一图像进行亮度滤波处理。
电子设备对第一图像进行亮度滤波处理,即剔除该第一图像中像素值小于亮度阈值的像素,以保留该第一图像中像素值大于或等于该亮度阈值的像素。可选地,亮度阈值可以是固定值,即每个图像的亮度阈值均相同。或者,亮度阈值可以基于第一图像中像素的像素值确定,则不同图像的亮度阈值可能不同。其中,亮度阈值可以由电子设备基于第一图像中像素的像素值确定,亮度阈值也可以由其它设备基于第一图像中像素的像素值确定后发送至该电子设备,本申请实施例对此不作限定。
可选地,基于第一图像中像素的像素值确定亮度阈值的过程包括:根据像素值的大小将第一图像中像素划分至多个像素值区间中,该多个像素值区间中任意两个像素值区间不存在交集,且该多个像素值区间的并集为像素值的全集。从多个像素值区间中获取目标像素值区间,目标像素值区间的最大值为第一值,目标像素值区间的最小值为第二值,则满足:第一图像中像素值小于第一值的像素的数量之和大于或等于预设数量阈值,且第一图像中像素值小于第二值的像素的数量之和小于该预设数量阈值。将目标像素值区间的最小值(即第二值)、最大值(即第一值)、平均值或者中间值作为第一图像的亮度阈值。当然,第一图像的亮度阈值也可以是目标像素值区间内的其它像素值。其中,预设数量阈值可以为第一图像中像素总数的90%。
示例地,假设上述亮度阈值为目标像素值区间的平均值。第一图像的像素总数为1920×1080。多个像素值区间分别为:[0,7]、[8,15]、[16,23]、[24,31]、...、[240,247]、[248,255],第一图像中划分至该多个像素值区间的像素的数量依次为:1280、3840、1920、4800、…、8640、10368、2880。若像素值区间[0,7]、[8,15]、[16,23]、[24,31]、...、[224,231]中所有像素的数量之和小于第一图像中像素总数的90%,且像素值区间[0,7]、[8,15]、[16,23]、[24,31]、...、[224,231]、[232,239]中所有像素的数量之和大于或等于第一图像中像素总数的90%,则电子设备确定像素值区间[232,239]为目标像素值区间,进一步可以计算得到亮度阈值为:(232+233+...+238+239)/8=235.5。
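下面给出与上述示例对应的一个Python草图(仅作示意,并非本申请的确定实现:假设输入为单通道灰度图并使用NumPy,函数名与参数名均为示例性命名)。该草图按像素值将像素划分到宽度为8的区间,找到累计像素数量首次达到像素总数90%的目标像素值区间,并取该区间的平均值作为亮度阈值:

```python
import numpy as np


def brightness_threshold(gray, ratio=0.9, bin_width=8):
    # 统计每个像素值区间([0,7]、[8,15]、...、[248,255])内的像素数量
    hist, _ = np.histogram(gray, bins=np.arange(0, 257, bin_width))
    cum = np.cumsum(hist)
    # 找到累计像素数量首次达到像素总数90%的目标像素值区间
    idx = int(np.searchsorted(cum, ratio * gray.size))
    lo = idx * bin_width
    hi = lo + bin_width - 1
    # 取目标区间的平均值作为亮度阈值,例如区间[232,239]时得到235.5
    return (lo + hi) / 2
```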
步骤602、电子设备获取第一图像的第一亮度区域类,第一亮度区域类中包括第一图像的一个或多个亮度区域。
该第一图像可以包括一个或多个亮度区域类,每个亮度区域类中可以包括第一图像中不同的一个或多个亮度区域。亮度区域也可称为高亮区域,指亮度值大于或等于亮度阈值的区域。示例地,第一图像基于第一3D场景渲染得到,则上述一个或多个亮度区域可以是第一3D场景中一个或多个物体模型上的高亮区域。第一亮度区域类可以为第一图像中的任一亮度区域类。
步骤603、当第一图像的第一亮度区域类与经过泛光处理的第二图像的目标亮度区域类相同时,电子设备获取针对第二图像的目标亮度区域类进行高斯模糊处理后得到的第一中间图像。
可选地,当第一图像基于第一3D场景渲染得到,第二图像基于第二3D场景渲染得到,则第一图像的第一亮度区域类与第二图像的目标亮度区域类相同,包括:第一亮度区域类中的亮度区域在第一3D场景中对应的物体模型的状态信息与目标亮度区域类中的亮度区域在第二3D场景中对应的物体模型的状态信息相同,且第一3D场景的相机参数与第二3D场景的相机参数相同。
可选地,图7是本申请实施例提供的一种判断第一图像的第一亮度区域类与第二图像的目标亮度区域类是否相同的方法流程图。如图7所示,该过程包括:
步骤6031、电子设备获取第一3D场景中第一图像的第一亮度区域类对应的所有物体模型的状态信息,以及第二3D场景中第二图像的目标亮度区域类对应的所有物体模型的状态信息。
可选地,物体模型的状态信息可以包括位姿信息和表面材质信息。其中,该位姿信息可以包括物体模型的位置,物体模型的姿态,以及物体模型的缩放系数。该表面材质信息可以包括:物体模型的表面材质的颜色信息以及表面材质的贴图信息。
示例地,假设第一图像的第一亮度区域类包括亮度区域a、亮度区域b以及亮度区域c,其中,亮度区域a为墙体A的高亮区域,亮度区域b为山体B的高亮区域,亮度区域c为天空C的高亮区域。则电子设备获取第一3D场景和第二3D场景中的墙体A、山体B以及天空C的状态信息。
步骤6032、电子设备判断第一3D场景中第一亮度区域类对应的所有物体模型的状态信息与第二3D场景中目标亮度区域类对应的所有物体模型的状态信息是否相同。当该第一亮度区域类对应的所有物体模型的状态信息和该目标亮度区域类对应的所有物体模型的状态信息相同时,执行步骤6033;当该第一亮度区域类对应的所有物体模型的状态信息和该目标亮度区域类对应的所有物体模型的状态信息不完全相同时,执行步骤6036。
其中,亮度区域类对应的所有物体模型包括亮度区域类中的每个亮度区域对应的物体模型。为了便于说明,本申请实施例中将第一亮度区域类中的亮度区域称为第一亮度区域,将目标亮度区域类中的亮度区域称为第二亮度区域。第一3D场景中第一亮度区域类对应的所有物体模型的状态信息与第二3D场景中目标亮度区域类对应的所有物体模型的状态信息相同,指第一亮度区域类中的第一亮度区域与目标亮度区域类中的第二亮度区域一一对应,且每个第一亮度区域对应的物体模型的状态信息与对应的第二亮度区域对应的物体模型的状态信息相同。
示例地,请参考上述步骤6031中的例子,当第一3D场景中的墙体A的状态信息与第二3D场景中的墙体A的状态信息相同,第一3D场景中的山体B的状态信息与第二3D场景中的山体B的状态信息相同,且第一3D场景中的天空C的状态信息与第二3D场景中的天空C的状态信息相同时,电子设备确定第一3D场景中第一亮度区域类对应的所有物体模型的状态信息与第二3D场景中目标亮度区域类对应的所有物体模型的状态信息相同。当第一3D场景中的墙体A的状态信息与第二3D场景中的墙体A的状态信息不同,第一3D场景中的山体B的状态信息与第二3D场景中的山体B的状态信息不同,和/或,第一3D场景中的天空C的状态信息与第二3D场景中的天空C的状态信息不同时,电子设备确定第一3D场景中第一亮度区域类对应的所有物体模型的状态信息与第二3D场景中目标亮度区域类对应的所有物体模型的状态信息不完全相同。
步骤6033、电子设备获取第一3D场景的相机参数以及第二3D场景的相机参数。
可选地,相机参数包括:相机的位姿参数、视窗参数和FOV参数。
步骤6034、电子设备判断第一3D场景的相机参数与第二3D场景的相机参数是否相同。当第一3D场景的相机参数与第二3D场景的相机参数相同时,执行步骤6035;当第一3D场景的相机参数与第二3D场景的相机参数不相同时,执行步骤6036。
步骤6035、电子设备确定该第一亮度区域类与该目标亮度区域类相同。
本申请实施例中,当第一图像的第一亮度区域类与经过泛光处理的第二图像的目标亮度区域类相同时,电子设备可以直接获取针对该目标亮度区域进行高斯模糊处理后得到的第一中间图像,并将该第一中间图像作为针对第一图像的第一亮度区域类进行高斯模糊处理后的图像。此时,开启图像泛光处理功能的电子设备在运行过程中,无需针对该第一图像的第一亮度区域类进行高斯模糊处理,降低了该电子设备的运行负载,且降低了该电子设备的功耗。
步骤6036、电子设备确定该第一亮度区域类与该第二图像的目标亮度区域类不相同。
当第一亮度区域类对应的所有物体模型的状态信息和目标亮度区域类对应的所有物体模型的状态信息不完全相同,和/或,第一3D场景的相机参数和第二3D场景的相机参数不同时,电子设备确定第一亮度区域类相对于目标亮度区域类发生变化,即电子设备确定该第一亮度区域类与该第二图像的目标亮度区域类不相同。
可选地,当第一亮度区域类与目标亮度区域类不相同时,电子设备针对第一图像的第一亮度区域类进行高斯模糊处理,得到对应的中间图像。
可选地,上述步骤6032和步骤6034的执行先后顺序可以调换,即电子设备可以先执行步骤6034,再执行步骤6032;或者,上述步骤6032和步骤6034也可以同时执行,本申请实施例对此不作限定。
本申请实施例中,第一图像的第一亮度区域类与第二图像的目标亮度区域类相同,还包括:第一亮度区域类中亮度区域的数量与目标亮度区域类中亮度区域的数量相同;第一亮度区域类中的亮度区域在第一3D场景中对应的物体模型的类型与目标亮度区域类中的亮度区域在第二3D场景中对应的物体模型的类型相同。其中,物体模型的类型可以包括自然景物(例如山体和天空等)、建筑物、植物和动物等。则上述判断第一图像的第一亮度区域类与经过泛光处理的第二图像的目标亮度区域类是否相同的过程还可以包括:电子设备判断第一亮度区域类中亮度区域的数量与目标亮度区域类中亮度区域的数量是否相同,以及第一亮度区域类中的亮度区域在第一3D场景中对应的物体模型的类型与目标亮度区域类中的亮度区域 在第二3D场景中对应的物体模型的类型是否相同。
可选地,本申请实施例中,电子设备可以先判断第一亮度区域类中亮度区域的数量与目标亮度区域类中亮度区域的数量是否相同,以及第一亮度区域类中的亮度区域在第一3D场景中对应的物体模型的类型与目标亮度区域类中的亮度区域在第二3D场景中对应的物体模型的类型是否相同;在确定第一亮度区域类中亮度区域的数量与目标亮度区域类中亮度区域的数量相同,且第一亮度区域类中的亮度区域在第一3D场景中对应的物体模型的类型与目标亮度区域类中的亮度区域在第二3D场景中对应的物体模型的类型相同后,再执行上述步骤6031至步骤6036,以提高判断效率。
可选地,当第一图像和第二图像均为物理相机拍摄到的图像时,电子设备可以在获取第一图像的第一亮度区域类后,获取第二图像的目标亮度区域类,并基于图像处理技术比较第一亮度区域类和第二亮度区域类,以确定第一亮度区域类与第二亮度区域类是否相同。当然,当第一图像和第二图像均为基于3D场景渲染得到的图像时,也可以采用本方式判断第一亮度区域类与第二亮度区域类是否相同,本申请实施例对此不做限定。
步骤604、电子设备基于第一图像以及第一中间图像,生成第一图像的泛光图像。
可选地,第一图像包括一个或多个亮度区域类。
第一种情况:当第一图像包括一个亮度区域类,即第一图像仅包括第一亮度区域类时,步骤604的实现过程包括:电子设备对第一图像和第一中间图像进行图像融合处理,得到第一图像的泛光图像。
本申请实施例中,当第一图像仅包括第一亮度区域类,且该第一亮度区域类与经过泛光处理后的第二图像的目标亮度区域类相同时,电子设备可以获取针对该目标亮度区域类进行高斯模糊处理后得到的第一中间图像,并基于该第一图像和该第一中间图像生成该第一图像的泛光图像。在保证第一图像的泛光效果的前提下,无需对该第一图像进行高斯模糊处理,显著降低了图像泛光处理过程的复杂度,从而降低了电子设备在开启该泛光处理功能后在运行过程中的负载,降低了电子设备的功耗。
第二种情况:当第一图像包括多个亮度区域类时,电子设备可以针对第一图像中的每个亮度区域类分别执行上述步骤602至步骤604,得到每个亮度区域类对应的中间图像,则步骤604的实现过程包括:电子设备对第一图像以及第一图像的各个亮度区域类对应的中间图像进行图像融合处理,得到第一图像的泛光图像。
可选地,图像中亮度区域类的划分方式有多种,本申请实施例提供了两种亮度区域类的划分方式,并以第一图像包括两个亮度区域类(第一亮度区域类和第二亮度区域类)为例,对不同划分方式下,电子设备对第一图像进行泛光处理的过程进行说明。其中,第一亮度区域类和第二亮度区域类均包括第一图像的一个或多个亮度区域,且第一亮度区域类中的亮度区域与第二亮度区域类中的亮度区域不同。当然,第一图像中也可以包括三个、四个甚至更多个亮度区域类,本申请实施例对此不作限定。
在第一种可实现方式中,亮度区域类基于亮度区域的尺寸划分。第一亮度区域类中的亮度区域的尺寸均大于第二亮度区域类中亮度区域的尺寸。或者,第一亮度区域类中的亮度区域的尺寸均小于第二亮度区域类中亮度区域的尺寸。本申请实施例以第一亮度区域类中的亮度区域的尺寸均大于第二亮度区域类中亮度区域的尺寸为例进行说明。
可选地,电子设备可以存储有预设的尺寸阈值。当亮度区域的尺寸与图像的尺寸的比值 大于尺寸阈值时,电子设备将该亮度区域划分至一个亮度区域类;当亮度区域的尺寸与图像的尺寸的比值小于或等于尺寸阈值时,电子设备将该亮度区域划分至另一个亮度区域类。可选地,尺寸阈值的取值范围可以为图像尺寸的3%~5%,例如,尺寸阈值可以取图像尺寸的3%、4%或5%。
可选地,图8是本申请实施例提供的另一种图像泛光处理方法的流程图。如图8所示,该方法包括:
步骤801、电子设备对第一图像进行亮度滤波处理。
此步骤的解释可参考上述步骤601,本申请实施例在此不做赘述。
步骤802、电子设备获取第一图像的第一亮度区域类。
步骤803、电子设备判断第一图像的第一亮度区域类与经过泛光处理的第二图像的目标亮度区域类是否相同;当该第一亮度区域类与该目标亮度区域类相同时,执行步骤804;当该第一亮度区域类与该目标亮度区域类不同时,执行步骤805。
可选地,电子设备获取第一图像的第一亮度区域类后,确定第二图像的目标亮度区域类,该第二图像包括两个亮度区域类,目标亮度区域类中的亮度区域的尺寸均大于第二图像中的另一亮度区域类中的亮度区域的尺寸。
步骤804、电子设备获取针对第二图像的目标亮度区域类进行高斯模糊处理后得到的第一中间图像,并将该第一中间图像作为第一亮度区域类对应的中间图像。
上述步骤803和步骤804的解释可参考上述步骤603,本申请实施例在此不做赘述。电子设备在执行步骤804之后,执行步骤806。
步骤805、电子设备针对第一图像的第一亮度区域类进行高斯模糊处理,得到第一亮度区域类对应的中间图像。
本申请实施例中,由于第一图像的第一亮度区域类中的亮度区域的尺寸均大于第二亮度区域类中的亮度区域的尺寸,因此,针对第一亮度区域类进行高斯模糊处理可以认为是针对第一图像中大尺寸的亮度区域进行高斯模糊处理。此时,电子设备可以对第一图像的分辨率进行较小缩减倍率的缩减处理,以剔除该第一图像中尺寸较小的亮度区域(也即是细节部分的亮度区域)的像素,仅保留尺寸较大的亮度区域(也即是大维度的亮度区域)的像素,从而方便电子设备针对第一图像的大尺寸的亮度区域(第一亮度区域类)进行高斯模糊处理。
示例地,针对第一图像的第一亮度区域类进行高斯模糊处理的过程可以包括:电子设备先采用第二缩减倍率对第一图像进行缩减像素采样处理,得到针对该第一亮度区域类的第二缩减图像。然后对该第二缩减图像进行高斯模糊处理,得到第一亮度区域类对应的中间图像。其中,第二缩减倍率k2可以满足:k2=2^m,m为整数,且m<-3。本申请实施例中,第二缩减倍率可以指单一的缩减倍率,也可以指多个缩减倍率的集合。例如m可取值-4或-5,则第二缩减倍率k2包括1/16和1/32。
可选地,当第二缩减倍率包括两个缩减倍率时,上述针对第一图像的第一亮度区域类进行高斯模糊处理的过程包括:电子设备采用两个不同的缩减倍率分别对第一图像进行缩减像素采样处理,得到两个针对该第一亮度区域类的缩减图像。然后电子设备对两个缩减图像分别进行高斯模糊处理,得到两个中间图像。本申请实施例中,第二缩减倍率包括的两个缩减倍率可以分别为1/16和1/32。
步骤806、电子设备获取第一图像的第二亮度区域类。
步骤807、电子设备判断经过泛光处理的第二图像中是否存在某个亮度区域类与第一图像的第二亮度区域类相同;当经过泛光处理的第二图像中存在某个亮度区域类与第一图像的第二亮度区域类相同时,执行步骤808;当经过泛光处理的第二图像中的任一亮度区域类与第一图像的第二亮度区域类均不相同,执行步骤809。
可选地,电子设备获取第一图像的第二亮度区域类后,获取第二图像中包含较小尺寸的亮度区域类,并判断第二亮度区域类与该较小尺寸的亮度区域类是否相同。
步骤808、电子设备获取针对第二图像中与第二亮度区域类相同的亮度区域类进行高斯模糊处理后得到的中间图像,并将该中间图像作为第二亮度区域类对应的中间图像。
上述步骤807和步骤808的解释可参考上述步骤603,本申请实施例在此不做赘述。电子设备在执行步骤808之后,执行步骤810。
步骤809、电子设备针对第一图像的第二亮度区域类进行高斯模糊处理,得到第二亮度区域类对应的中间图像。
本申请实施例中,将电子设备针对第一图像的第二亮度区域类进行高斯模糊处理得到的中间图像称为第二中间图像。由于第一图像的第二亮度区域类中的亮度区域的尺寸均小于第一亮度区域类中的亮度区域的尺寸,因此,针对第二亮度区域类进行高斯模糊处理可以认为是针对第一图像中小尺寸的亮度区域进行高斯模糊处理。此时,电子设备可以对第一图像的分辨率进行较大缩减倍率的缩减处理,可以保留该第一图像中的尺寸较小的亮度区域(也即是细节部分的亮度区域)的像素,从而能够实现对第一图像的小尺寸的亮度区域的泛光处理。另外,由于电子设备对第一图像的分辨率进行较大缩减倍率的缩减处理后可以保留第一图像中的尺寸较小的亮度区域,因此可以采用小尺寸的卷积核对缩减后的第一图像进行高斯模糊处理,由于卷积核的尺寸越大,高斯模糊处理的复杂度越高,因此本申请实施例采用小尺寸的卷积核进行高斯模糊处理的复杂度较低,使得图像泛光处理过程的复杂度较低。
示例地,针对第一图像的第二亮度区域类进行高斯模糊处理的过程可以包括:电子设备先采用第一缩减倍率对第一图像进行缩减像素采样处理,得到针对第二亮度区域类的第一缩减图像。然后对该第一缩减图像进行高斯模糊处理,得到第二中间图像。其中,第一缩减倍率k1可以满足:k1=2^n,n为整数,且-3≤n≤0。本申请实施例中,第一缩减倍率可以指单一的缩减倍率,也可以指多个缩减倍率的集合。例如n可取值-2或-3,则第一缩减倍率k1包括1/4和1/8。
可选地,当第一缩减倍率包括两个缩减倍率时,上述针对第一图像的第二亮度区域类进行高斯模糊处理的过程包括:电子设备采用两个不同的缩减倍率分别对第一图像进行缩减像素采样处理,得到两个针对该第二亮度区域类的缩减图像。然后电子设备对两个缩减图像分别进行高斯模糊处理,得到两个中间图像。本申请实施例中,第一缩减倍率包括的两个缩减倍率可以分别为1/4和1/8。
步骤810、电子设备对第一图像、第一亮度区域类对应的中间图像以及第二亮度区域类对应的中间图像进行图像融合处理,得到第一图像的泛光图像。
可选地,电子设备可以不执行上述步骤807至步骤808,也即是,电子设备在执行完上述步骤806后,可以直接针对第一图像的第二亮度区域类进行高斯模糊处理,得到第二中间图像。进而保证第一图像的泛光效果。
在第二种可实现方式中,亮度区域类基于图像的亮度区域在3D场景中对应的物体模型的类别划分。
在游戏或动画等3D场景中,通常包括前景类物体模型(又称角色前景)和背景类物体模型(又称背景物体)。其中,前景类物体模型通常具有可移动、体积较小以及在连续的多帧图像中变化频率较高等特性。例如,前景类物体模型包括人物或者动物等物体模型。由于前景类物体模型的体积通常较小,因而可以认为前景类物体模型上高亮区域的尺寸通常较小,也即是前景类物体模型上高亮区域大多是尺寸较小的细节部分。背景类物体模型通常具有不可移动、体积较大以及在连续的多帧图像中变化频率较低等特性。例如,背景类物体模型包括天空、山峦和建筑物等物体模型。由于背景类物体模型的体积通常较大,因而可以认为背景类物体模型上高亮区域的尺寸通常较大,也即是背景类物体模型上高亮区域大多是尺寸较大的大维度部分。
示例地,图9是本申请实施例提供的一个3D游戏界面(即一帧图像)的示意图。如图9所示,该3D游戏界面对应的3D场景中至少包括:属于前景类物体模型的人物R,以及属于背景类物体模型的房屋墙壁P以及照射灯Q。其中,照射灯Q发出的光线照射在人物R和房屋墙壁P上。照射在人物R上的光线汇聚成一个荧光圈,在该人物R上形成尺寸较小的高亮区域M1(即图中人物R上的阴影区域)。照射在房屋墙壁P上的光线在该房屋墙壁P上形成尺寸较大的亮度区域M2(图中房屋墙壁P上的阴影区域)。
由于背景类物体模型具有在连续的多帧图像中变化频率较低的特性,因而在基于3D场景渲染得到的图像中,背景类物体模型在当前帧图像上的亮度区域较大概率上与经过泛光处理的前一帧图像上的亮度区域相同。因此本申请实施例中,在对当前帧图像进行泛光处理之前,可以先判断当前帧图像对应的3D场景中的背景类物体模型相对于前一帧图像对应的3D场景中的背景类物体模型是否发生变化,若未发生变化,则无需针对当前帧图像中背景类物体模型对应的亮度区域进行高斯模糊处理,而直接获取针对前一帧图像中背景类物体模型对应的亮度区域进行高斯模糊处理得到的中间图像,从而可以减少对图像进行高斯模糊处理的次数,进而降低了图像泛光处理过程的复杂度。
本申请实施例中,电子设备可以将3D场景中所有背景类物体模型在图像中对应的背景类亮度区域组成的集合作为第一亮度区域类,也即是,第一亮度区域类中包括该图像对应的3D场景中所有背景类物体模型对应的背景类亮度区域,该第一亮度区域类也可称为背景类。将3D场景中除背景类物体模型以外的其它物体模型在图像中对应的亮度区域组成的集合作为第二亮度区域类,也即是,第二亮度区域类中包括该图像中除背景类亮度区域以外的其它亮度区域。除背景类物体模型以外的其它物体模型包括前景类物体模型,该第二亮度区域类也可称为前景类,第二亮度区域类中的亮度区域可称为前景类亮度区域。
可选地,图10是本申请实施例提供的又一种图像泛光处理方法的流程图。如图10所示,该方法包括:
步骤1001、电子设备对第一图像进行亮度滤波处理。
此步骤的解释可参考上述步骤601,本申请实施例在此不做赘述。
步骤1002、电子设备针对第一图像的前景类进行高斯模糊处理,得到第一图像的前景类对应的中间图像。
本申请实施例中,将电子设备针对第一图像的前景类进行高斯模糊处理得到的中间图像 称为第二中间图像。前景类中亮度区域的尺寸通常小于背景类中亮度区域的尺寸,电子设备针对第一图像的前景类进行高斯模糊处理的方式可以参考上述步骤809中针对小尺寸亮度区域进行高斯模糊处理的过程,本申请实施例对此不做赘述。
步骤1003、电子设备获取第一3D场景中的背景类物体模型。
第一图像基于第一3D场景渲染得到。可选地,当图像基于3D场景渲染得到时,可以通过判断图像对应的3D场景的背景类物体模型与经过泛光处理后的图像对应的3D场景的背景类物体模型是否相同,来判断该图像的背景类与该经过泛光处理后的图像的背景类是否相同。因此,电子设备获取图像的背景类可以替代为:电子设备获取图像对应的3D场景中的背景类物体模型。
可选地,电子设备获取第一3D场景中的背景类物体模型的过程可以包括:电子设备遍历第一3D场景中的所有物体模型的标签,获取第一3D场景中的所有背景类物体模型,第一亮度区域类包括所有背景类物体模型在第一图像中对应的背景类亮度区域。该标签用于指示物体模型是否为背景类物体模型。本申请实施例中,可选地,3D场景中的每个物体模型均可以携带有标签,该标签用于指示物体模型属于背景类物体模型或者前景类物体模型。该标签可以是人工标注的,也可以是电子设备根据物体模型的类别进行自动划分的。标签可以采用数值、字母或字符串等表示。示例地,当物体模型的标签为“0”时,指示该物体模型属于背景类物体模型;当物体模型的标签为“1”时,指示该物体模型属于前景类物体模型。
步骤1004、电子设备判断第一3D场景中背景类物体模型与第二3D场景中背景类物体模型是否相同;当第一3D场景中背景类物体模型与第二3D场景中背景类物体模型相同时,执行步骤1005;当第一3D场景中背景类物体模型与第二3D场景中背景类物体模型不同时,执行步骤1006。
电子设备判断第一3D场景中背景类物体模型与第二3D场景中背景类物体模型是否相同,也即是判断第一3D场景中背景类物体模型相对于第二3D场景中背景类物体模型是否发生变化。
步骤1005、电子设备获取针对第二图像的背景类进行高斯模糊处理后得到的第一中间图像,并将该第一中间图像作为第一图像的背景类对应的中间图像。
第二图像基于第二3D场景渲染得到。当第一3D场景中背景类物体模型与第二3D场景中背景类物体模型相同时,电子设备确定第一图像对应的第一3D场景中背景类物体模型相对于第二图像对应的第二3D场景中背景类物体模型未发生变化,因此可以认为第一图像的背景类与第二图像的背景类相同。
本申请实施例中,电子设备判断第一3D场景中背景类物体模型与第二3D场景中背景类物体模型是否相同的过程与上述步骤603中,电子设备判断第一图像的第一亮度区域类与第二图像的目标亮度区域类是否相同的过程相同,因此上述步骤1004和步骤1005的解释可参考上述步骤603,本申请实施例在此不做赘述。电子设备在执行步骤1005之后,执行步骤1007。
步骤1006、电子设备针对第一图像的背景类进行高斯模糊处理,得到第一图像的背景类对应的中间图像。
当第一3D场景中背景类物体模型与第二3D场景中背景类物体模型不同时,电子设备确定第一图像对应的第一3D场景中背景类物体模型相对于第二图像对应的第二3D场景中背景类物体模型发生变化,因此可以认为第一图像的背景类与第二图像的背景类不同。本申请实 施例中,由于背景类中亮度区域的尺寸通常大于前景类中亮度区域的尺寸,因此,电子设备针对第一图像的背景类进行高斯模糊处理的方式可以参考上述步骤805中针对大尺寸亮度区域的高斯模糊处理的过程,本申请实施例对此不做赘述。
步骤1007、电子设备对第一图像、第一图像的背景类对应的中间图像以及第一图像的前景类对应的中间图像进行图像融合处理,得到第一图像的泛光图像。
本申请实施例中,上述步骤1003可以在步骤1001之前执行,或者也可以与步骤1001同时执行。例如,电子设备在根据第一3D场景渲染得到第一图像的同时判断第一3D场景中背景类物体模型与第二3D场景中背景类物体模型是否相同。或者,电子设备对第一图像进行亮度滤波处理的同时判断第一3D场景中背景类物体模型与第二3D场景中背景类物体模型是否相同。
可选地,在本申请实施例提供的又一种图像泛光处理方法中,电子设备也可以通过判断第一3D场景中前景类物体模型与第二3D场景中前景类物体模型是否相同,从而使得在第一3D场景中前景类物体模型与第二3D场景中前景类物体模型相同时,电子设备获取针对第二图像的前景类进行高斯模糊处理后得到的第一图像的前景类对应的中间图像。在第一3D场景中前景类物体模型与第二3D场景中前景类物体模型不同时,电子设备执行上述步骤1002针对第一图像的前景类进行高斯模糊处理,得到第一图像的前景类对应的中间图像。这样,在第一3D场景中前景类物体模型与第二3D场景中前景类物体模型相同时,获取针对第二图像的前景类进行高斯模糊处理后得到的第一图像的前景类对应的中间图像,而无需针对第一图像的前景类进行高斯模糊处理,减少对第一图像进行高斯模糊处理的次数,进而降低了图像泛光处理过程的复杂度。
本申请以下实施例对电子设备实现如图10所示的图像泛光处理方法的过程进行示例性说明。示例地,图11是本申请实施例提供的一种电子设备实现图像泛光处理方法的流程示意图。如图11所示,第一3D场景中的物体模型包括物体1、物体2、物体3、物体4、物体5和物体6。其中,物体1、物体2和物体3具有的标签均为背景标签,该背景标签用于指示物体模型属于背景类物体模型。物体4、物体5和物体6具有的标签均为前景标签(也称角色标签),该前景标签用于指示物体模型属于前景类物体模型。
电子设备的场景管理模块中存储有第一3D场景中的所有物体模型。
渲染器模块根据第一3D场景渲染得到第一图像。
优化大维度高亮区泛光算法模块采用1/4的缩减倍率对第一图像进行缩减像素采样处理,得到分辨率为原分辨率(第一图像的分辨率)的1/4倍的缩减图像(简称1/4缩减图像),然后对1/4缩减图像进行高斯模糊处理得到第一图像的前景类对应的一个中间图像(简称1/4高斯模糊结果图像);并采用1/8的缩减倍率对第一图像进行缩减像素采样处理,得到分辨率为原分辨率的1/8倍的缩减图像(简称1/8缩减图像),然后对1/8缩减图像进行高斯模糊处理得到第一图像的前景类对应的另一个中间图像(简称1/8高斯模糊结果图像)。
亮度区域类变化判断模块遍历场景管理模块中物体1至物体6,确定具有背景标签的物体1至物体3,并判断第一3D场景中物体1、物体2和物体3分别与第二3D场景中的背景类物体模型是否相同。当第一3D场景中物体1、物体2和物体3均分别与第二3D场景中的背景类物体模型相同时,优化大维度高亮区泛光算法模块获取针对第二图像进行高斯模糊处理后的中间图像(即图中示出的1/16高斯模糊结果图像和1/32高斯模糊结果图像)。当第一 3D场景中物体1、物体2和物体3与第二3D场景中的背景类物体模型不同时,优化大维度高亮区泛光算法模块针对第一图像的背景类进行高斯模糊处理,得到第一图像的背景类对应的中间图像。
其中,优化大维度高亮区泛光算法模块针对第一图像的背景类进行高斯模糊处理的过程包括:优化大维度高亮区泛光算法模块采用1/16的缩减倍率对第一图像进行缩减像素采样处理,得到分辨率为原分辨率的1/16倍的缩减图像(简称1/16缩减图像),然后对1/16缩减图像进行高斯模糊处理得到第一图像的背景类对应的一个中间图像(简称1/16高斯模糊结果图像);并采用1/32的缩减倍率对第一图像进行缩减像素采样处理,得到分辨率为原分辨率的1/32倍的缩减图像(简称1/32缩减图像),然后对1/32缩减图像进行高斯模糊处理得到第一图像的背景类对应的另一个中间图像(简称1/32高斯模糊结果图像)。
优化大维度高亮区泛光算法模块获取针对第二图像进行高斯模糊处理后的中间图像的过程包括:优化大维度高亮区泛光算法模块直接获取第二图像的背景类对应的两个中间图像:1/16高斯模糊结果图像和1/32高斯模糊结果图像。
渲染引擎模块对第一图像、1/4高斯模糊结果图像、1/8高斯模糊结果图像、1/16高斯模糊结果图像以及1/32高斯模糊结果图像进行图像融合处理,得到第一图像的泛光图像。
请参见表1,假设电子设备为移动终端,表1记录有:开启图像泛光处理功能的电子设备在运行画质较高的游戏应用(例如帧率为60的游戏应用)时,采用相关技术提供的图像泛光处理方法(简称相关技术算法)时CPU和GPU的负载情况以及系统功耗情况,以及采用如图11所示的图像泛光处理方法(简称本申请算法)时CPU和GPU的负载情况以及系统功耗情况。
表1
  相关技术算法 本申请算法
GPU负载增加率 5% 4%
CPU负载增加率 12% 8%
功耗增加(毫安/mA) 103 71
由表1可知,电子设备开启相关算法运行上述游戏应用的过程中,电子设备的GPU负载增加率为5%,CPU负载增加率为12%,而电子设备开启本申请算法运行上述游戏应用的过程中,电子设备的GPU负载增加率为4%,CPU负载增加率为8%。电子设备开启相关算法运行上述游戏应用的过程中,电子设备的功耗增加103mA,而电子设备开启本申请算法运行上述游戏应用的过程中,电子设备的功耗增加71mA。因此,相较于相关技术,电子设备采用图11所示的图像泛光处理方法可以显著降低电子设备的负载以及功耗。
在本申请实施例中,上述图像泛光处理方法中的各个步骤可以由图3所示的电子设备中相同或不同的模块执行。
示例地,渲染引擎模块302可以用于执行上述步骤601、步骤801以及步骤1001。
优化大维度高亮区域泛光算法模块30231可以用于执行上述步骤603、步骤804、步骤805、步骤808、步骤809、步骤1002、步骤1005以及步骤1006。
亮度区域类变化判断模块30211可以用于执行上述步骤602、步骤802、步骤803、步骤806、步骤807、步骤1003、步骤1004以及步骤6031至步骤6036。
后处理泛光效果模块3023可以用于执行步骤604、步骤810以及步骤1007。
渲染引擎模块302还用于将优化大维度高亮区域泛光算法模块30231生成的第一图像的泛光处理图像通过调用OpenGL ES接口层3031或者调用Vulkan接口层3032从而呈现在该终端的显示设备上。
综上所述,本申请实施例提供的图像泛光处理方法,由于电子设备可以在确定第一图像中的第一亮度区域类与经过泛光处理后的第二图像的目标亮度区域类相同时,可以直接获取针对第二图像的目标亮度区域类进行高斯模糊处理后得到中间图像,而无需针对第一图像的第一亮度区域类进行的高斯模糊处理,因此,本申请实施例在保证第一图像的泛光效果的前提下,可以减少对第一图像进行高斯模糊处理的次数,进而降低了图像泛光处理过程的复杂度,从而降低了电子设备在开启泛光处理功能后在运行过程中的负载,减少了电子设备的功耗。
另外,当第一图像包括多个亮度区域类时,电子设备还可以在确定经过泛光处理后的第二图像中存在某个亮度区域类与第一图像的第二亮度区域类相同时,直接获取针对第二图像中与第二亮度区域类相同的亮度区域类进行高斯模糊处理后得到中间图像,而无需针对第一图像的第二亮度区域类进行的高斯模糊处理,因此,本申请实施例在保证第一图像的泛光效果的前提下,可以进一步减少对第一图像进行高斯模糊处理的次数,进而降低了图像泛光处理过程的复杂度,从而降低了电子设备在开启泛光处理功能后在运行过程中的负载,减少了电子设备的功耗。
下述为本申请的装置实施例,可以用于执行本申请的方法实施例。对于本申请装置实施例中未披露的细节,请参照本申请方法实施例。
请参考图12,其示出了本申请实施例提供的一种图像泛光处理装置的框图,该装置1200可以包括:
第一获取模块1201,用于获取第一图像的第一亮度区域类,第一亮度区域类中包括第一图像的一个或多个亮度区域。
第二获取模块1202,用于在确定第一图像的第一亮度区域类与经过泛光处理的第二图像的目标亮度区域类相同后,获取针对第二图像的目标亮度区域类进行高斯模糊处理后得到的第一中间图像。
生成模块1203,用于基于第一图像以及第一中间图像,生成第一图像的泛光图像。
可选地,第一图像基于第一三维场景渲染得到,第二图像基于第二三维场景渲染得到,第一亮度区域类与目标亮度区域类相同,包括:
第一亮度区域类中的亮度区域在第一三维场景中对应的物体模型的状态信息与目标亮度区域类中的亮度区域在第二三维场景中对应的物体模型的状态信息相同,且第一三维场景的相机参数与第二三维场景的相机参数相同。
可选地,如图13所示,该装置1200还可以包括:
第三获取模块1204,用于获取第一三维场景中第一亮度区域类对应的所有物体模型的状态信息,以及第二三维场景中目标亮度区域类对应的所有物体模型的状态信息。
第四获取模块1205,用于在确定第一三维场景中第一亮度区域类对应的所有物体模型的状态信息与第二三维场景中目标亮度区域类对应的所有物体模型的状态信息相同后,获取第一三维场景的相机参数以及第二三维场景的相机参数。
确定模块1206,用于第一三维场景的相机参数与第二三维场景的相机参数相同,确定第一亮度区域类与目标亮度区域类相同。
可选地,物体模型的状态信息包括:物体模型的位姿信息和表面材质信息,相机参数包括:相机的位姿参数、视窗参数和视场角参数。
可选地,第一获取模块1201,用于:遍历第一三维场景中的所有物体模型的标签,标签用于指示物体模型是否为背景类物体模型;获取第一三维场景中的所有背景类物体模型,第一亮度区域类包括所有背景类物体模型在第一图像中对应的背景类亮度区域。
可选地,第一图像还包括第二亮度区域,第二亮度区域类包括第一图像中除背景类亮度区域以外的其它亮度区域,如图14所示,该装置1200还可以包括:
高斯模糊处理模块1207,用于针对第二亮度区域类进行高斯模糊处理,得到第二中间图像。生成模块1203,用于:对第一图像、第一中间图像以及第二中间图像进行图像融合处理,得到第一图像的泛光图像。
可选地,第一图像还包括第二亮度区域类,第二亮度区域类包括第一图像的一个或多个亮度区域,可替代的,上述高斯模糊处理模块1207,用于在确定第一图像的第二亮度区域类与第二图像的任一亮度区域类不同后,针对第二亮度区域类进行高斯模糊处理,得到第二中间图像。生成模块,用于:对第一图像、第一中间图像以及第二中间图像进行图像融合处理,得到第一图像的泛光图像。
可选地,第一亮度区域类中的亮度区域的尺寸均大于第二亮度区域类中的亮度区域的尺寸,高斯模糊处理模块1207,用于:采用第一缩减倍率对第一图像进行缩减像素采样处理,得到第一缩减图像;对第一缩减图像进行高斯模糊处理,得到第二中间图像;
或者,第一亮度区域类中的亮度区域的尺寸均小于第二亮度区域类中的亮度区域的尺寸,高斯模糊处理模块1207,用于:采用第二缩减倍率对第一图像进行缩减像素采样处理,得到第二缩减图像;对第二缩减图像进行高斯模糊处理,得到第二中间图像。其中,第一缩减倍率大于第二缩减倍率。
可选地,第一缩减倍率k1满足:k1=2^n,n为整数,且-3≤n≤0;第二缩减倍率k2满足:k2=2^m,m为整数,且m<-3。
可选地,如图15所示,该装置1200还可以包括:
滤波处理模块1208,用于在基于第一图像以及第一中间图像,生成第一图像的泛光图像之前,对第一图像进行亮度滤波处理。
可选地,第一图像和第二图像为连续的两帧图像,第二图像为第一图像的前一帧图像。
综上所述,本申请实施例提供的图像泛光处理装置,由于可以在确定第一图像中的第一亮度区域类与经过泛光处理后的第二图像的目标亮度区域类相同时,可以直接获取针对第二图像的目标亮度区域类进行高斯模糊处理后得到中间图像,而无需针对第一图像的第一亮度区域类进行的高斯模糊处理,因此,本申请实施例在保证第一图像的泛光效果的前提下,可以减少对第一图像进行高斯模糊处理的次数,进而降低了图像泛光处理过程的复杂度,从而降低了图像泛光处理装置在开启泛光处理功能后在运行过程中的负载,减少了图像泛光处理装置的功耗。
另外,当第一图像包括多个亮度区域类时,图像泛光处理装置还可以在确定经过泛光处理后的第二图像中存在某个亮度区域类与第一图像的第二亮度区域类相同时,直接获取针对 第二图像中与第二亮度区域类相同的亮度区域类进行高斯模糊处理后得到中间图像,而无需针对第一图像的第二亮度区域类进行的高斯模糊处理,因此,本申请实施例在保证第一图像的泛光效果的前提下,可以进一步减少对第一图像进行高斯模糊处理的次数,进而降低了图像泛光处理过程的复杂度,从而降低了图像泛光处理装置在开启泛光处理功能后在运行过程中的负载,减少了图像泛光处理装置的功耗。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的装置和模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
并且,以上装置中的各个模块可以通过软件或软件硬件结合的方式来实现。当至少一个模块是硬件的时候,该硬件可以是逻辑集成电路模块,可具体包括晶体管、逻辑门阵列或算法逻辑电路等。至少一个模块是软件的时候,该软件以计算机程序产品形式存在,并被存储于计算机可读存储介质中。该软件可以被一个处理器执行。因此可替换地,图像泛光处理装置,可以由一个处理器执行软件程序来实现,本实施例对此不限定。
本申请实施例还提供了一种图像泛光处理装置,如图16所示,该装置包括处理器1601和存储器1602;在处理器1601执行存储器1602存储的计算机程序时,图像泛光处理装置执行本申请实施例提供的图像泛光处理方法。可选地,该图像泛光处理装置可以部署在终端中。
可选地,该装置还包括通信接口1603和总线1604。该处理器1601、存储器1602、通信接口1603通过总线1604通信连接。其中,通信接口1603为多个,用于在处理器1601的控制下与其他设备通信;处理器1601能够通过总线1604调用存储器1602中存储的计算机程序。
本申请实施例还提供了一种存储介质,该存储介质可以为非易失性计算机可读存储介质,存储介质内存储有计算机程序,该计算机程序指示处理组件执行本申请实施例提供的任一的图像泛光处理方法。该存储介质可以包括:只读存储器(read-only memory,ROM)或随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可存储程序代码的介质。
本申请实施例还提供了一种包含指令的计算机程序产品,当计算机程序产品在计算机上运行时,使得计算机执行本申请实施例提供的图像泛光处理方法。该计算机程序产品可以包括一个或多个计算机指令。在计算机上加载和执行该计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。该计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。该计算机指令可以存储在计算机可读存储介质中,或者通过该计算机可读存储介质进行传输。该计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。该可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如,固态硬盘(solid state disk,SSD))等。
本申请实施例还提供了一种芯片,例如CPU芯片,该芯片包括一个或多个物理核、以及存储介质,所述一个或多个物理核在读取所述存储介质中的计算机指令后实现前述图像泛光处理方法。另一些实施例中,该芯片可以用纯硬件或软硬结合的方式实现前述图像泛光处理方法,即所述芯片包括逻辑电路,当所述芯片运行时所述逻辑电路用于实现本申请实施例提供的任意一种图像泛光处理方法,所述逻辑电路可以为可编程逻辑电路。类似的,GPU也可 以如CPU般实现。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
在本申请实施例中,术语“第一”、“第二”和“第三”仅用于描述目的,而不能理解为指示或暗示相对重要性。术语“至少一个”是指一个或多个,术语“多个”指两个或两个以上,除非另有明确的限定。
本申请中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。
以上所述仅为本申请的可选实施例,并不用以限制本申请,凡在本申请的构思和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (24)

  1. 一种图像泛光处理方法,其特征在于,所述方法包括:
    获取第一图像的第一亮度区域类,所述第一亮度区域类中包括所述第一图像的一个或多个亮度区域;
    在确定所述第一图像的第一亮度区域类与经过泛光处理的第二图像的目标亮度区域类相同后,获取针对所述第二图像的目标亮度区域类进行高斯模糊处理后得到的第一中间图像;
    基于所述第一图像以及所述第一中间图像,生成所述第一图像的泛光图像。
  2. 根据权利要求1所述的方法,其特征在于,所述第一图像基于第一三维场景渲染得到,所述第二图像基于第二三维场景渲染得到,所述第一亮度区域类与所述目标亮度区域类相同,包括:
    所述第一亮度区域类中的亮度区域在所述第一三维场景中对应的物体模型的状态信息与所述目标亮度区域类中的亮度区域在所述第二三维场景中对应的物体模型的状态信息相同,且所述第一三维场景的相机参数与所述第二三维场景的相机参数相同。
  3. 根据权利要求2所述的方法,其特征在于,所述方法还包括:
    获取所述第一三维场景中所述第一亮度区域类对应的所有物体模型的状态信息,以及所述第二三维场景中所述目标亮度区域类对应的所有物体模型的状态信息;
    在确定所述第一三维场景中所述第一亮度区域类对应的所有物体模型的状态信息与所述第二三维场景中所述目标亮度区域类对应的所有物体模型的状态信息相同后,获取所述第一三维场景的相机参数以及所述第二三维场景的相机参数;
    在确定所述第一三维场景的相机参数与所述第二三维场景的相机参数相同后,确定所述第一亮度区域类与所述目标亮度区域类相同。
  4. 根据权利要求2或3所述的方法,其特征在于,所述物体模型的状态信息包括:所述物体模型的位姿信息和表面材质信息,所述相机参数包括:相机的位姿参数、视窗参数和视场角参数。
  5. 根据权利要求2至4任一所述的方法,其特征在于,所述获取第一图像的第一亮度区域类,包括:
    遍历所述第一三维场景中的所有物体模型的标签,所述标签用于指示物体模型是否为背景类物体模型;
    获取所述第一三维场景中的所有所述背景类物体模型,所述第一亮度区域类包括所有所述背景类物体模型在所述第一图像中对应的背景类亮度区域。
  6. 根据权利要求5所述的方法,其特征在于,所述第一图像还包括第二亮度区域,所述第二亮度区域类包括所述第一图像中除所述背景类亮度区域以外的其它亮度区域,所述方法还包括:
    针对所述第二亮度区域类进行高斯模糊处理,得到第二中间图像;
    所述基于所述第一图像以及所述第一中间图像,生成所述第一图像的泛光图像,包括:
    对所述第一图像、所述第一中间图像以及所述第二中间图像进行图像融合处理,得到所述第一图像的泛光图像。
  7. 根据权利要求1至5任一所述的方法,其特征在于,所述第一图像还包括第二亮度区域类,所述第二亮度区域类包括所述第一图像的一个或多个亮度区域,所述方法还包括:
    在确定所述第一图像的第二亮度区域类与所述第二图像的任一亮度区域类不同后,针对所述第二亮度区域类进行高斯模糊处理,得到第二中间图像;
    所述基于所述第一图像以及所述第一中间图像,生成所述第一图像的泛光图像,包括:
    对所述第一图像、所述第一中间图像以及所述第二中间图像进行图像融合处理,得到所述第一图像的泛光图像。
  8. 根据权利要求6或7所述的方法,其特征在于,所述第一亮度区域类中的亮度区域的尺寸均大于所述第二亮度区域类中的亮度区域的尺寸,所述针对所述第一图像的第二亮度区域类进行高斯模糊处理,包括:
    采用第一缩减倍率对所述第一图像进行缩减像素采样处理,得到第一缩减图像;
    对所述第一缩减图像进行高斯模糊处理,得到所述第二中间图像;
    或者,所述第一亮度区域类中的亮度区域的尺寸均小于所述第二亮度区域类中的亮度区域的尺寸,所述针对所述第一图像的第二亮度区域类进行高斯模糊处理,包括:
    采用第二缩减倍率对所述第一图像进行缩减像素采样处理,得到第二缩减图像;
    对所述第二缩减图像进行高斯模糊处理,得到所述第二中间图像;
    其中,所述第一缩减倍率大于所述第二缩减倍率。
  9. 根据权利要求8所述的方法,其特征在于,所述第一缩减倍率k1满足:k1=2^n,n为整数,且-3≤n≤0;所述第二缩减倍率k2满足:k2=2^m,m为整数,且m<-3。
  10. 根据权利要求1至9任一所述的方法,其特征在于,在所述基于所述第一图像以及所述第一中间图像,生成所述第一图像的泛光图像之前,所述方法还包括:
    对所述第一图像进行亮度滤波处理。
  11. 根据权利要求1至10任一所述的方法,其特征在于,所述第一图像和所述第二图像为连续的两帧图像,所述第二图像为所述第一图像的前一帧图像。
  12. 一种图像泛光处理装置,其特征在于,所述装置包括:
    第一获取模块,用于获取第一图像的第一亮度区域类,所述第一亮度区域类中包括所述第一图像的一个或多个亮度区域;
    第二获取模块,用于在确定所述第一图像的第一亮度区域类与经过泛光处理的第二图像的目标亮度区域类相同后,获取针对所述第二图像的目标亮度区域类进行高斯模糊处理后得 到的第一中间图像;
    生成模块,用于基于所述第一图像以及所述第一中间图像,生成所述第一图像的泛光图像。
  13. 根据权利要求12所述的装置,其特征在于,所述第一图像基于第一三维场景渲染得到,所述第二图像基于第二三维场景渲染得到,所述第一亮度区域类与所述目标亮度区域类相同,包括:
    所述第一亮度区域类中的亮度区域在所述第一三维场景中对应的物体模型的状态信息与所述目标亮度区域类中的亮度区域在所述第二三维场景中对应的物体模型的状态信息相同,且所述第一三维场景的相机参数与所述第二三维场景的相机参数相同。
  14. 根据权利要求13所述的装置,其特征在于,所述装置还包括:
    第三获取模块,用于获取所述第一三维场景中所述第一亮度区域类对应的所有物体模型的状态信息,以及所述第二三维场景中所述目标亮度区域类对应的所有物体模型的状态信息;
    第四获取模块,用于在确定所述第一三维场景中所述第一亮度区域类对应的所有物体模型的状态信息与所述第二三维场景中所述目标亮度区域类对应的所有物体模型的状态信息相同后,获取所述第一三维场景的相机参数以及所述第二三维场景的相机参数;
    确定模块,用于在确定所述第一三维场景的相机参数与所述第二三维场景的相机参数相同后,确定所述第一亮度区域类与所述目标亮度区域类相同。
  15. 根据权利要求13或14所述的装置,其特征在于,所述物体模型的状态信息包括:所述物体模型的位姿信息和表面材质信息,所述相机参数包括:相机的位姿参数、视窗参数和视场角参数。
  16. 根据权利要求13至15任一所述的装置,其特征在于,所述第一获取模块,用于:
    遍历所述第一三维场景中的所有物体模型的标签,所述标签用于指示物体模型是否为背景类物体模型;
    获取所述第一三维场景中的所有所述背景类物体模型,所述第一亮度区域类包括所有所述背景类物体模型在所述第一图像中对应的背景类亮度区域。
  17. 根据权利要求16所述的装置,其特征在于,所述第一图像还包括第二亮度区域,所述第二亮度区域类包括所述第一图像中除所述背景类亮度区域以外的其它亮度区域,所述装置还包括:
    高斯模糊处理模块,用于针对所述第二亮度区域类进行高斯模糊处理,得到第二中间图像;
    所述生成模块,用于:
    对所述第一图像、所述第一中间图像以及所述第二中间图像进行图像融合处理,得到所述第一图像的泛光图像。
  18. 根据权利要求12至16任一所述的装置,其特征在于,所述第一图像还包括第二亮度区域类,所述第二亮度区域类包括所述第一图像的一个或多个亮度区域,所述装置还包括:
    高斯模糊处理模块,用于在确定所述第一图像的第二亮度区域类与所述第二图像的任一亮度区域类不同后,针对所述第二亮度区域类进行高斯模糊处理,得到第二中间图像;
    所述生成模块,用于:
    对所述第一图像、所述第一中间图像以及所述第二中间图像进行图像融合处理,得到所述第一图像的泛光图像。
  19. 根据权利要求17或18所述的装置,其特征在于,所述第一亮度区域类中的亮度区域的尺寸均大于所述第二亮度区域类中的亮度区域的尺寸,所述高斯模糊处理模块,用于:
    采用第一缩减倍率对所述第一图像进行缩减像素采样处理,得到第一缩减图像;
    对所述第一缩减图像进行高斯模糊处理,得到所述第二中间图像;
    或者,所述第一亮度区域类中的亮度区域的尺寸均小于所述第二亮度区域类中的亮度区域的尺寸,所述高斯模糊处理模块,用于:
    采用第二缩减倍率对所述第一图像进行缩减像素采样处理,得到第二缩减图像;
    对所述第二缩减图像进行高斯模糊处理,得到所述第二中间图像;
    其中,所述第一缩减倍率大于所述第二缩减倍率。
  20. 根据权利要求19所述的装置,其特征在于,所述第一缩减倍率k1满足:k1=2^n,n为整数,且-3≤n≤0;所述第二缩减倍率k2满足:k2=2^m,m为整数,且m<-3。
  21. 根据权利要求12至20任一所述的装置,其特征在于,所述装置还包括:
    滤波处理模块,用于在所述基于所述第一图像以及所述第一中间图像,生成所述第一图像的泛光图像之前,对所述第一图像进行亮度滤波处理。
  22. 根据权利要求12至21任一所述的装置,其特征在于,所述第一图像和所述第二图像为连续的两帧图像,所述第二图像为所述第一图像的前一帧图像。
  23. 一种存储介质,其特征在于,所述存储介质内存储有计算机程序,所述计算机程序指示处理组件执行如权利要求1至11任一所述的图像泛光处理方法。
  24. 一种图像泛光处理装置,其特征在于,包括:处理器和存储器;
    所述存储器,用于存储计算机程序,所述计算机程序包括程序指令;
    所述处理器,用于调用所述计算机程序,实现如权利要求1至11任一所述的图像泛光处理方法。
PCT/CN2020/113088 2019-10-23 2020-09-02 图像泛光处理方法及装置、存储介质 WO2021077911A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20879379.4A EP4036842A4 (en) 2019-10-23 2020-09-02 IMAGE FLOOD PROCESSING METHOD AND APPARATUS AND STORAGE MEDIA
US17/726,674 US20220245778A1 (en) 2019-10-23 2022-04-22 Image bloom processing method and apparatus, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911014260.3 2019-10-23
CN201911014260.3A CN112700377A (zh) 2019-10-23 2019-10-23 图像泛光处理方法及装置、存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/726,674 Continuation US20220245778A1 (en) 2019-10-23 2022-04-22 Image bloom processing method and apparatus, and storage medium

Publications (1)

Publication Number Publication Date
WO2021077911A1 true WO2021077911A1 (zh) 2021-04-29

Family

ID=75505368

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/113088 WO2021077911A1 (zh) 2019-10-23 2020-09-02 图像泛光处理方法及装置、存储介质

Country Status (4)

Country Link
US (1) US20220245778A1 (zh)
EP (1) EP4036842A4 (zh)
CN (1) CN112700377A (zh)
WO (1) WO2021077911A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115309256A (zh) * 2021-05-07 2022-11-08 华为技术有限公司 一种显示方法与电子设备
CN113837990B (zh) * 2021-06-11 2022-09-30 荣耀终端有限公司 一种噪声的监测方法、电子设备、芯片系统及存储介质
CN114693559A (zh) * 2022-04-02 2022-07-01 深圳创维-Rgb电子有限公司 图像处理优化方法、装置、电子设备及可读存储介质
CN116433829B (zh) * 2023-06-09 2023-08-18 北京全路通信信号研究设计院集团有限公司 场景可视化监控方法、装置、设备及存储介质
CN117314795B (zh) * 2023-11-30 2024-02-27 成都玖锦科技有限公司 一种利用背景数据的sar图像增强方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102819852A (zh) * 2012-05-31 2012-12-12 新奥特(北京)视频技术有限公司 一种在图像中生成光晕的方法
US20140168249A1 (en) * 2007-07-30 2014-06-19 Dolby Laboratories Licensing Corporation Enhancing dynamic ranges of images
CN105608209A (zh) * 2015-12-29 2016-05-25 南威软件股份有限公司 一种视频标注方法和视频标注装置
CN106550244A (zh) * 2015-09-16 2017-03-29 广州市动景计算机科技有限公司 视频图像的画质增强方法及装置
CN106997608A (zh) * 2016-01-22 2017-08-01 五八同城信息技术有限公司 一种生成光晕效果图的方法及装置

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8760537B2 (en) * 2010-07-05 2014-06-24 Apple Inc. Capturing and rendering high dynamic range images
JP2013038504A (ja) * 2011-08-04 2013-02-21 Sony Corp 撮像装置、および画像処理方法、並びにプログラム
CN107786785B (zh) * 2016-08-29 2020-05-08 华为技术有限公司 光照处理方法及装置
US10147166B2 (en) * 2016-09-23 2018-12-04 Apple Inc. Methods and systems for spatially localized image editing
CN108596828B (zh) * 2018-04-18 2022-03-18 网易(杭州)网络有限公司 图像泛光处理方法与装置、电子设备、存储介质
CN109523473A (zh) * 2018-10-16 2019-03-26 网易(杭州)网络有限公司 图像处理方法、装置、存储介质和电子装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140168249A1 (en) * 2007-07-30 2014-06-19 Dolby Laboratories Licensing Corporation Enhancing dynamic ranges of images
CN102819852A (zh) * 2012-05-31 2012-12-12 新奥特(北京)视频技术有限公司 一种在图像中生成光晕的方法
CN106550244A (zh) * 2015-09-16 2017-03-29 广州市动景计算机科技有限公司 视频图像的画质增强方法及装置
CN105608209A (zh) * 2015-12-29 2016-05-25 南威软件股份有限公司 一种视频标注方法和视频标注装置
CN106997608A (zh) * 2016-01-22 2017-08-01 五八同城信息技术有限公司 一种生成光晕效果图的方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4036842A4

Also Published As

Publication number Publication date
US20220245778A1 (en) 2022-08-04
CN112700377A (zh) 2021-04-23
EP4036842A1 (en) 2022-08-03
EP4036842A4 (en) 2022-12-14

Similar Documents

Publication Publication Date Title
US11800221B2 (en) Time-lapse shooting method and device
WO2020192417A1 (zh) 图像渲染方法及装置、电子设备
WO2020168956A1 (zh) 一种拍摄月亮的方法和电子设备
WO2021077911A1 (zh) 图像泛光处理方法及装置、存储介质
WO2021036715A1 (zh) 一种图文融合方法、装置及电子设备
WO2022017261A1 (zh) 图像合成方法和电子设备
WO2021023035A1 (zh) 一种镜头切换方法及装置
WO2021052111A1 (zh) 图像处理方法及电子装置
CN113170037B (zh) 一种拍摄长曝光图像的方法和电子设备
CN113810601B (zh) 终端的图像处理方法、装置和终端设备
CN117063461A (zh) 一种图像处理方法和电子设备
WO2021077878A1 (zh) 图像处理方法、装置及电子设备
WO2021057626A1 (zh) 图像处理方法、装置、设备及计算机存储介质
US20240137659A1 (en) Point light source image detection method and electronic device
CN116257200A (zh) 一种折叠屏的显示方法及相关装置
WO2022022319A1 (zh) 一种图像处理方法、电子设备、图像处理系统及芯片系统
WO2020233593A1 (zh) 一种前景元素的显示方法和电子设备
WO2022033344A1 (zh) 视频防抖方法、终端设备和计算机可读存储介质
CN115631250B (zh) 图像处理方法与电子设备
CN115686182B (zh) 增强现实视频的处理方法与电子设备
CN116896626B (zh) 视频运动模糊程度的检测方法和装置
CN115705663B (zh) 图像处理方法与电子设备
WO2024082713A1 (zh) 一种图像渲染的方法及装置
CN116703741B (zh) 一种图像对比度的生成方法、装置和电子设备
CN115150543B (zh) 拍摄方法、装置、电子设备及可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20879379

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020879379

Country of ref document: EP

Effective date: 20220428