CN110458902B - 3D illumination estimation method and electronic equipment


Info

Publication number
CN110458902B
Authority
CN
China
Prior art keywords
scene
illumination detection
information
electronic device
illumination
Legal status
Active
Application number
CN201910586485.XA
Other languages
Chinese (zh)
Other versions
CN110458902A
Inventor
王习之
刘昆
李阳
杜成
王强
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Publication of CN110458902A
Application granted
Publication of CN110458902B

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T19/00 Manipulating 3D models or images for computer graphics
                    • G06T19/006 Mixed reality
                • G06T7/00 Image analysis
                    • G06T7/90 Determination of colour characteristics
                • G06T2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T2207/10 Image acquisition modality
                        • G06T2207/10028 Range image; Depth image; 3D point clouds
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V10/00 Arrangements for image or video recognition or understanding
                    • G06V10/40 Extraction of image or video features
                        • G06V10/56 Extraction of image or video features relating to colour
                • G06V20/00 Scenes; Scene-specific elements
                    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes

Abstract

The embodiments of the application provide a three-dimensional (3D) illumination estimation method and an electronic device, which relate to the technical field of augmented reality and can estimate an illumination effect closer to that of a real scene according to at least one spherical harmonic coefficient, so that a virtual object is fused with the real scene more effectively and user experience is improved. The method comprises the following steps: acquiring image information of a first scene, where the image information of the first scene comprises color information of pixel points in the first scene and depth information of the pixel points in the first scene; establishing a first simulated scene according to the image information of the first scene, where the first simulated scene comprises at least two illumination detection balls and a point cloud; estimating spherical harmonic coefficients of an illumination detection ball according to information of the point cloud included in the first simulated scene, where the information of the point cloud comprises position information and irradiance information of points in the point cloud; and estimating spherical harmonic coefficients of the position of a virtual object according to the spherical harmonic coefficients of the illumination detection ball.

Description

3D illumination estimation method and electronic equipment
This application claims priority to the Chinese patent application entitled "3D illumination estimation method and electronic device", filed with the China National Intellectual Property Administration on March 26, 2019, with application number 201910234517.X, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of augmented reality, and in particular to a multi-scene 3D illumination estimation method and an electronic device.
Background
Augmented reality (AR) is a new technology developed on the basis of virtual reality. It enhances a user's perception of the real world with information provided by a computer system, superimposing computer-generated virtual objects, scenes, or system prompt information onto the real scene, thereby "augmenting" reality and "seamlessly" integrating real-world information with virtual-world information. How well the rendering effect of a virtual object is coordinated with the environment therefore has an important influence on the user experience of an AR product, and rendering virtual objects using illumination estimation is an important component of "seamless" AR.
Manufacturers have proposed related solutions, for example, Google's ARCore software toolkit for AR applications and Apple's software toolkit for AR applications. However, an electronic device can obtain only limited information through such software, and the illumination information estimated from that information is average illumination information (that is, the illumination information is the same in every direction). As a result, the illumination distribution of a virtual object rendered according to this information differs greatly from that of the real environment, the augmented reality effect is not realistic, and user experience is poor.
Disclosure of Invention
The embodiment of the application provides a 3D illumination estimation method and electronic equipment, which can estimate an illumination effect closer to a real scene according to at least one spherical harmonic coefficient, so that a virtual object and the real scene are more effectively fused, and user experience is improved.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
in a first aspect, an embodiment of the present application provides a 3D illumination estimation method, where the method includes: acquiring image information of a first scene, where the image information of the first scene comprises color information of pixel points in the first scene and depth information of the pixel points in the first scene; establishing a first simulated scene according to the image information of the first scene, where the first simulated scene comprises at least two illumination detection balls and a point cloud; estimating spherical harmonic coefficients of an illumination detection ball according to information of the point cloud included in the first simulated scene, where the information of the point cloud comprises position information and irradiance information of points in the point cloud; and estimating spherical harmonic coefficients of the position of a virtual object according to the spherical harmonic coefficients of the illumination detection ball.
In the technical solution provided by the first aspect, the electronic device may acquire the image information of the first scene, establish the first simulated scene according to the image information of the first scene, estimate the spherical harmonic coefficients of an illumination detection ball according to the information of the point cloud included in the first simulated scene, and estimate the spherical harmonic coefficients of the position of the virtual object according to the spherical harmonic coefficients of the illumination detection ball. A spherical harmonic coefficient can carry more information, for example the direction of the light intensity. Therefore, the electronic device can estimate an illumination effect closer to that of the real scene, so that the virtual object is fused with the real scene more effectively and user experience is improved.
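For orientation, the summary above does not spell out what a spherical harmonic coefficient is; a common formulation (general background, not text taken from the patent) expands the directional lighting signal L(ω) at a point in a spherical harmonic basis:

```latex
L(\omega) \;\approx\; \sum_{l=0}^{2} \sum_{m=-l}^{l} c_{lm}\, Y_{lm}(\omega),
\qquad
c_{lm} \;=\; \int_{S^2} L(\omega)\, Y_{lm}(\omega)\, \mathrm{d}\omega .
```

With l ≤ 2 this gives 9 coefficients per color channel; because the basis functions Y_lm vary with direction, the coefficients encode the direction as well as the intensity of the incident light, which is the extra information referred to above. The truncation at order 2 is an assumption for illustration; the document only speaks of "at least one spherical harmonic coefficient".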
With reference to the first aspect, in a first possible implementation manner, for a first illumination detection ball, estimating spherical harmonic coefficients of the first illumination detection ball according to information of a point cloud included in the first simulated scene includes: acquiring first irradiance of the first illumination detection ball according to the point cloud information; estimating spherical harmonic coefficients of the first illumination detection sphere according to the first irradiance of the first illumination detection sphere; the first illumination detection ball is any illumination detection ball included in the first simulated scene. Based on the first possible implementation manner of the first aspect, the electronic device may obtain irradiance of any illumination detection ball according to the point cloud information, and estimate a spherical harmonic coefficient of any illumination detection ball according to the irradiance of any illumination detection ball, so that the electronic device estimates a spherical harmonic coefficient of the position of the virtual object according to the spherical harmonic coefficient of the illumination detection ball.
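As a rough sketch of how the irradiance samples of a detection ball could be projected onto spherical harmonic coefficients (the patent does not prescribe a particular projection; the Monte Carlo scheme, the order-2 truncation, and all names below are illustrative assumptions):

```python
import numpy as np

def sh_basis_order2(dirs):
    """Real spherical harmonic basis (l <= 2, 9 terms) for unit directions of shape (N, 3)."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),          # Y(0, 0)
        0.488603 * y,                        # Y(1,-1)
        0.488603 * z,                        # Y(1, 0)
        0.488603 * x,                        # Y(1, 1)
        1.092548 * x * y,                    # Y(2,-2)
        1.092548 * y * z,                    # Y(2,-1)
        0.315392 * (3.0 * z * z - 1.0),      # Y(2, 0)
        1.092548 * x * z,                    # Y(2, 1)
        0.546274 * (x * x - y * y),          # Y(2, 2)
    ], axis=1)                               # shape (N, 9)

def project_probe_to_sh(sample_dirs, sample_values):
    """
    Monte Carlo projection of directional light samples gathered at one
    illumination detection ball onto 9 SH coefficients per RGB channel.
    sample_dirs: (N, 3) unit vectors, assumed roughly uniform on the sphere;
    sample_values: (N, 3) linear RGB irradiance contributions.
    """
    basis = sh_basis_order2(sample_dirs)            # (N, 9)
    weight = 4.0 * np.pi / sample_dirs.shape[0]     # uniform solid-angle weight
    return weight * basis.T @ sample_values         # (9, 3) coefficients
```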
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner, the obtaining a first irradiance of the first illumination detection ball according to the point cloud information includes: and acquiring first irradiance of the first illumination detection ball according to the information of the first effective point, wherein the first effective point is a point in the point cloud within the visible range of the first illumination detection ball. Based on the second possible implementation manner of the first aspect, the electronic device may obtain the first irradiance of the first illumination detection ball according to the information of the first effective point, so that the electronic device estimates the spherical harmonic coefficient of the first illumination detection ball according to the first irradiance of the first illumination detection ball.
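A minimal sketch of gathering the "first effective points" for one detection ball, assuming visibility is approximated by a simple distance test from the ball center (the patent's actual visibility criterion and occlusion handling are not reproduced here); the result can be fed to the projection sketch above:

```python
import numpy as np

def gather_probe_samples(probe_center, points, irradiance, max_range=5.0):
    """
    Select point-cloud points within a visibility radius of an illumination
    detection ball and return unit directions from the ball to each point
    together with the points' irradiance values.
    points: (N, 3) positions; irradiance: (N, 3) linear RGB.
    """
    offsets = points - probe_center
    dist = np.linalg.norm(offsets, axis=1)
    valid = (dist > 1e-6) & (dist <= max_range)     # crude visibility test
    dirs = offsets[valid] / dist[valid, None]       # unit directions
    return dirs, irradiance[valid]
```

A ball's coefficients could then be approximated as project_probe_to_sh(*gather_probe_samples(center, points, irradiance)); since the directions gathered this way are not uniformly distributed over the sphere, a real implementation would additionally bin them or weight them by solid angle.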
With reference to the first aspect and various possible implementation manners of the first aspect, in a third possible implementation manner, the estimating a spherical harmonic coefficient of the position of the virtual object according to the spherical harmonic coefficient of the illumination detection sphere includes: and carrying out weighted summation on the spherical harmonic coefficients of the illumination detection ball to obtain the spherical harmonic coefficients of the position of the virtual object. Based on the third possible implementation manner of the first aspect, the electronic device may perform weighted summation on the spherical harmonic coefficients of the illumination detection ball to obtain the spherical harmonic coefficients of the position of the virtual object, so that the electronic device estimates the illumination effect of the virtual object in the real scene according to the spherical harmonic coefficients of the position of the virtual object.
With reference to the third possible implementation manner of the first aspect, in a fourth possible implementation manner, a distance between the illumination detection ball and the virtual object is smaller than or equal to a first distance. Based on the fourth possible implementation manner of the first aspect, the electronic device may perform weighted summation on the spherical harmonic coefficients of the illumination detection balls with the distance to the virtual object being smaller than or equal to the first distance, so as to obtain the spherical harmonic coefficient of the position of the virtual object.
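A sketch of the distance-limited weighted summation described in the third and fourth implementation manners; the inverse-distance weighting and the default radius are illustrative assumptions, as the patent does not fix a particular weighting scheme here:

```python
import numpy as np

def blend_probe_sh(object_pos, probe_positions, probe_sh, first_distance=2.0):
    """
    Weighted sum of the SH coefficients of illumination detection balls that lie
    within `first_distance` of the virtual object.
    probe_positions: (M, 3); probe_sh: (M, 9, 3); returns (9, 3).
    """
    dist = np.linalg.norm(probe_positions - object_pos, axis=1)
    near = dist <= first_distance
    if not np.any(near):
        raise ValueError("no illumination detection ball within range")
    w = 1.0 / np.maximum(dist[near], 1e-3)          # closer balls weigh more
    w /= w.sum()                                    # normalized weights
    return np.tensordot(w, probe_sh[near], axes=1)  # SH coefficients at the object
```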
With reference to the first aspect and various possible implementation manners of the first aspect, in a fifth possible implementation manner, the method further includes: acquiring the position of a virtual object according to user input; alternatively, the position of the virtual object is preset. Based on the fifth possible implementation manner of the first aspect, the electronic device may obtain the position of the virtual object according to a user input, and the electronic device may further preset the position of the virtual object.
In a second aspect, an embodiment of the present application provides a 3D illumination estimation method, including: acquiring image information of a first scene, where the image information of the first scene comprises color information of pixel points in the first scene; obtaining a sky map corresponding to the first scene according to the image information of the first scene, where the sky map corresponding to the first scene is used to indicate the illumination distribution of the first scene; and estimating spherical harmonic coefficients of a virtual object according to information of the sky map, where the information of the sky map comprises the spherical harmonic coefficients of the sky map.
In the technical solution provided by the second aspect, the electronic device may acquire the image information of the first scene, obtain the sky map corresponding to the first scene according to the image information of the first scene, and estimate the spherical harmonic coefficients of the virtual object according to the information of the sky map, where a spherical harmonic coefficient can carry more information, for example the direction of the light intensity. Therefore, the electronic device can estimate an illumination effect closer to that of the real scene, so that the virtual object is fused with the real scene more effectively and user experience is improved.
With reference to the second aspect, in a first possible implementation manner, the estimating spherical harmonic coefficients of the virtual object according to the information of the sky map includes: taking the spherical harmonic coefficients of the sky map as the spherical harmonic coefficients of the virtual object. Based on the first possible implementation manner of the second aspect, the electronic device may use the spherical harmonic coefficients of the sky map as the spherical harmonic coefficients of the virtual object, so that the electronic device estimates the illumination effect of the virtual object in the real scene according to the spherical harmonic coefficients of the virtual object.
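Using the sky map's spherical harmonic coefficients directly presupposes that they can be computed from the map. A sketch under the assumption of an equirectangular sky map in linear RGB (the patent does not specify the map layout; the solid-angle weighting below is standard numerical integration):

```python
import numpy as np

def skymap_to_sh(sky):
    """
    Project an equirectangular sky map of shape (H, W, 3) onto 9 spherical
    harmonic coefficients per channel; each texel is weighted by its solid
    angle sin(theta) * dtheta * dphi.
    """
    h, w, _ = sky.shape
    theta = (np.arange(h) + 0.5) / h * np.pi             # polar angle per row
    phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi         # azimuth per column
    tt, pp = np.meshgrid(theta, phi, indexing="ij")
    x = (np.sin(tt) * np.cos(pp)).ravel()
    y = (np.sin(tt) * np.sin(pp)).ravel()
    z = np.cos(tt).ravel()
    basis = np.stack([                                   # real SH basis, l <= 2
        0.282095 * np.ones_like(x),
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ], axis=1)                                           # (H*W, 9)
    d_omega = (np.sin(tt) * (np.pi / h) * (2.0 * np.pi / w)).reshape(-1, 1)
    return basis.T @ (sky.reshape(-1, 3) * d_omega)      # (9, 3)
```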
With reference to the second aspect and the first possible implementation manner of the second aspect, in a second possible implementation manner, the method further includes: acquiring the position of a virtual object according to user input; alternatively, the position of the virtual object is preset. Based on the second possible implementation manner of the second aspect, the electronic device may obtain the position of the virtual object according to a user input, or the electronic device may preset the position of the virtual object.
In a third aspect, an embodiment of the present application provides an electronic device, where the electronic device has a function of implementing the method described in the first aspect. The function may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the function.
In a fourth aspect, an embodiment of the present application provides an electronic device, where the electronic device has a function of implementing the method described in the second aspect. The function may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the function.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: at least one processor and at least one memory, the at least one memory coupled with the at least one processor; the at least one memory is for storing a computer program such that the computer program, when executed by the at least one processor, implements a method of 3D illumination estimation as described in the first aspect and its various possible implementations.
In a sixth aspect, an embodiment of the present application provides an electronic device, including: at least one processor and at least one memory, the at least one memory coupled with the at least one processor; the at least one memory is for storing a computer program such that the computer program, when executed by the at least one processor, implements a 3D illumination estimation method as described in the second aspect and its various possible implementations.
In a seventh aspect, the present application provides a system chip, where the system chip may be applied in an electronic device and includes at least one processor, and related program instructions are executed in the at least one processor to implement the functions of the electronic device in the method according to the first aspect and any design thereof. Optionally, the system chip may further include at least one memory that stores the related program instructions.
In an eighth aspect, the present application provides a system chip, where the system chip may be applied in an electronic device and includes at least one processor, and related program instructions are executed in the at least one processor to implement the functions of the electronic device in the method according to the second aspect and any design thereof. Optionally, the system chip may further include at least one memory that stores the related program instructions.
In a ninth aspect, an embodiment of the present application provides a computer-readable storage medium, such as a computer non-transitory readable storage medium, which stores a computer program. When the computer program runs on a computer, the computer is caused to perform any one of the possible methods of the first aspect. For example, the computer may be at least one storage node.
In a tenth aspect, an embodiment of the present application provides a computer-readable storage medium, such as a computer non-transitory readable storage medium, which stores a computer program. When the computer program runs on a computer, the computer is caused to perform any one of the possible methods of the second aspect. For example, the computer may be at least one storage node.
In an eleventh aspect, embodiments of the present application provide a computer program product, which when run on a computer, causes any one of the methods provided in the first aspect to be performed. For example, the computer may be at least one storage node.
In a twelfth aspect, embodiments of the present application provide a computer program product, which when run on a computer, causes any of the methods provided by the second aspect to be performed. For example, the computer may be at least one storage node.
It can be understood that any electronic device, system chip, computer storage medium, or computer program product provided above is configured to perform the corresponding method provided above. Therefore, for the beneficial effects that it can achieve, reference may be made to the beneficial effects of the corresponding method, and details are not described herein again.
Drawings
fig. 1 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
fig. 2(a) is a first schematic diagram of the principle of a 3D illumination estimation method according to an embodiment of the present application;
fig. 2(b) is a second schematic diagram of the principle of a 3D illumination estimation method according to an embodiment of the present application;
fig. 2(c) is a third schematic diagram of the principle of a 3D illumination estimation method according to an embodiment of the present application;
fig. 3(a) is a first schematic diagram of an example display interface according to an embodiment of the present application;
fig. 3(b) is a second schematic diagram of an example display interface according to an embodiment of the present application;
fig. 3(c) is a third schematic diagram of an example display interface according to an embodiment of the present application;
fig. 3(d) is a fourth schematic diagram of an example display interface according to an embodiment of the present application;
fig. 4(a) is a first schematic diagram of a scene type according to an embodiment of the present application;
fig. 4(b) is a second schematic diagram of a scene type according to an embodiment of the present application;
fig. 4(c) is a third schematic diagram of a scene type according to an embodiment of the present application;
fig. 5 is a first flowchart of a 3D illumination estimation method according to an embodiment of the present application;
fig. 6(a) is a fifth schematic diagram of an example display interface according to an embodiment of the present application;
fig. 6(b) is a sixth schematic diagram of an example display interface according to an embodiment of the present application;
fig. 7 is a first schematic diagram of an illumination detection ball according to an embodiment of the present application;
fig. 8 is a second schematic diagram of an illumination detection ball according to an embodiment of the present application;
fig. 9 is a seventh schematic diagram of an example display interface according to an embodiment of the present application;
fig. 10 is a second flowchart of a 3D illumination estimation method according to an embodiment of the present application;
fig. 11 is a schematic diagram of a sky map according to an embodiment of the present application;
fig. 12 is a third flowchart of a 3D illumination estimation method according to an embodiment of the present application;
fig. 13 is a first schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 14 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings. The terminology used in the description of the embodiments herein is for the purpose of describing particular embodiments herein only and is not intended to be limiting of the application.
An electronic device related to the embodiments of the present application is described first. The electronic device provided in the embodiments of the present application may be any device with an augmented reality (AR) function, for example, a portable computer (such as a mobile phone), a laptop computer, a personal computer (PC), a wearable electronic device (such as a smart watch), a tablet computer, or an AR device.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 100 in fig. 1 may specifically include: a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a depth sensor 180A, a pressure sensor 180B, a gyroscope sensor 180C, an air pressure sensor 180D, a magnetic sensor 180E, an acceleration sensor 180F, a distance sensor 180G, a proximity light sensor 180H, a fingerprint sensor 180J, a temperature sensor 180K, a touch sensor 180L, an ambient light sensor 180M, a bone conduction sensor 180N, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. The memory may store instructions for implementing six modular functions: detection instructions, connection instructions, information management instructions, analysis instructions, data transmission instructions, and notification instructions, and are controlled to be executed by the processor 110. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. It may also be used to connect an earphone and play audio through the earphone. The interface may further be used to connect other electronic devices, such as AR devices.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160, so that electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1, and if the electronic device 100 includes N cameras, one of the N cameras is the main camera.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can input a voice signal to the microphone 170C by speaking the user's mouth near the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The headphone interface 170D is used to connect a wired headphone. The headset interface 170D may be the USB interface 130, or may be a 3.5mm open mobile electronic device platform (OMTP) standard interface, a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The depth sensor 180A is used to acquire depth information of the scene. In some embodiments, a depth sensor may be provided at camera 193.
The pressure sensor 180B is used for sensing a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180B may be disposed on the display screen 194. The pressure sensor 180B may be of a variety of types, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180B, the capacitance between the electrodes changes. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the intensity of the touch operation according to the pressure sensor 180B. The electronic apparatus 100 may also calculate the touched position from the detection signal of the pressure sensor 180B. In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
The gyroscope sensor 180C may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyroscope sensor 180C. The gyroscope sensor 180C may be used for image stabilization (anti-shake) during photographing. For example, when the shutter is pressed, the gyroscope sensor 180C detects the shake angle of the electronic device 100, calculates the distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor 180C may also be used in navigation and somatosensory gaming scenarios.
The air pressure sensor 180D is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180D.
The magnetic sensor 180E includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180E. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180E. Features such as automatic unlocking upon flipping open can then be set according to the detected open or closed state of the holster or the flip cover.
The acceleration sensor 180F may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The method can also be used for recognizing the posture of the electronic equipment, and is applied to horizontal and vertical screen switching, pedometers and other applications.
The distance sensor 180G is used to measure distance. The electronic device 100 may measure distance by infrared or laser. In some embodiments, in a photographing scenario, the electronic device 100 may use the distance sensor 180G to measure distance for fast focusing.
The proximity light sensor 180H may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 100 emits infrared light to the outside through the light-emitting diode. The electronic device 100 detects infrared light reflected from nearby objects using the photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100. The electronic device 100 can use the proximity light sensor 180H to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 180H may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The fingerprint sensor 180J is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on.
The temperature sensor 180K is used to detect temperature. In some embodiments, the electronic device 100 implements a temperature processing strategy using the temperature detected by the temperature sensor 180K. For example, when the temperature reported by the temperature sensor 180K exceeds the threshold, the electronic device 100 performs a reduction in performance of a processor located near the temperature sensor 180K, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold to avoid the low temperature causing the electronic device 100 to shut down abnormally. In other embodiments, when the temperature is lower than a further threshold, the electronic device 100 performs boosting on the output voltage of the battery 142 to avoid abnormal shutdown due to low temperature.
The touch sensor 180L is also referred to as a "touch device". The touch sensor 180L may be disposed on the display screen 194, and the touch sensor 180L and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180L is used to detect a touch operation applied thereto or thereabout. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180L may be disposed on the surface of the electronic device 100 at a different position than the display screen 194.
The ambient light sensor 180M is used to sense the ambient light level. The electronic device 100 may adaptively adjust the brightness of the display screen 194 based on the perceived ambient light level. The ambient light sensor 180M may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180M may also cooperate with the proximity light sensor 180H to detect whether the electronic device 100 is in a pocket, to prevent accidental touches.
The bone conduction sensor 180N may acquire a vibration signal. In some embodiments, the bone conduction sensor 180N may acquire a vibration signal of the human vocal part vibrating the bone mass. The bone conduction sensor 180N may also contact the human pulse to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180N may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may analyze a voice signal based on the vibration signal of the bone mass vibrated by the sound part acquired by the bone conduction sensor 180N, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180N, so as to realize the heart rate detection function.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The electronic apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into and out of contact with the electronic apparatus 100 by being inserted into the SIM card interface 195 or being pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. The same SIM card interface 195 can be inserted with multiple cards at the same time. The types of the plurality of cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as communication and data communication. In some embodiments, the electronic device 100 employs esims, namely: an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The specific principle of the 3D illumination estimation method provided by the present application will be described below with reference to fig. 2(a), 2(b), and 2 (c). The embodiment of the application provides a 3D illumination estimation method and an electronic device, which can be applied to the electronic device 100 shown in fig. 1. The electronic device 100 is installed with AR application software, which may be application software applied to multiple fields, for example: the medical field, the military field, the game field, the communication field, the education field, and the like. The AR application software may execute the 3D illumination estimation algorithm provided by the embodiments of the present application, for example: when a user wants to use the AR function, the user may open an AR application on the electronic device 100 by physically pressing a key or clicking an AR application icon on the electronic device 100, the electronic device 100 may determine a scene type of a current real scene (e.g., an indoor scene, an outdoor scene, a human face scene, or other scenes), obtain different image information for different scene types, obtain one or more spherical harmonic coefficients by using different illumination estimation algorithms, and estimate an illumination effect of a virtual object in the current real scene according to the one or more spherical harmonic coefficients. For an indoor scene, the electronic device 100 may also estimate the lighting effect at a specified location in the scene from a plurality of spherical harmonic coefficients.
As shown in fig. 2(a), the user may open an AR application on the electronic device 100 by clicking (e.g., tapping) an application icon 101 on the electronic device 100.
As shown in fig. 2(b), after entering the AR application, the electronic device 100 may determine a scene type of the current real scene, acquire image information of the current real scene 102 through the camera 193, acquire one or more spherical harmonic coefficients by using an illumination estimation algorithm corresponding to the scene type of the current real scene 102, and estimate the spherical harmonic coefficients of the virtual object according to the one or more spherical harmonic coefficients.
As shown in fig. 2(c), the electronic device 100 may render the virtual object according to its spherical harmonic coefficient, and display the rendered virtual object 103 on the display screen of the electronic device 100.
In summary, when estimating the illumination effect of the virtual object in the real scene, the electronic device 100 may obtain different image information for different scene types, and obtain one or more spherical harmonic coefficients by using different illumination estimation algorithms, where the spherical harmonic coefficients may include more information, for example: the direction of the light intensity, etc. Therefore, according to the scheme, the electronic device 100 can estimate the illumination effect closer to the real scene, so that the virtual object and the real scene are more effectively fused, and the user experience is improved.
The 3D illumination estimation method provided by the embodiments of the present application will be described below with reference to the drawings of the specification.
It is understood that, in the embodiments of the present application, an electronic device may perform some or all of the steps in the embodiments of the present application. These steps or operations are merely examples, and the embodiments of the present application may also perform other operations or variations of various operations. Further, the various steps may be performed in a different order from that presented in the embodiments of the application, and not all of the operations in the embodiments of the application need to be performed.
First, after entering the AR application, the electronic device may automatically open the camera application, or open the camera application through a user operation, to determine the scene type of the current real scene. The electronic device may invoke a scene recognition function in the camera application to determine the scene type of the current real scene. The scene recognition function may be MasterAI, an artificial-intelligence photography assistant in a mobile phone, or another type of application; this is not limited in the embodiments of the present application. The current real scene may be the scene observed by the user through the camera of the electronic device. The scene recognition function can recommend a photographing mode suitable for the current real scene according to the results of scene recognition and face detection and according to predefined rules. The electronic device can then determine the scene type of the current real scene from the photographing mode. For example, if the photographing mode is an indoor mode, the electronic device may determine that the scene type of the current real scene is an indoor scene; if the photographing mode is an outdoor mode, the electronic device may determine that the scene type of the current real scene is an outdoor scene; and if the photographing mode is a face mode, the electronic device may determine that the scene type of the current real scene is a face scene.
An indoor scene may include the interior space of a building, vehicle, or the like in which people or animals live or move about; an outdoor scene may include natural and artificial scenes outside the interior of a building; and a face scene may include a scene in which the proportion of the frame occupied by a face is greater than or equal to a threshold value (e.g., 30%).
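As an illustration of the mapping described above, the following Python sketch shows one way an application layer might translate the recommended photographing mode into a scene type. The mode names and the 30% face-area threshold come from the description above; the function and constant names are hypothetical.

```python
# Hypothetical helper: map the photographing mode recommended by the camera
# application's scene-recognition function to the scene type used by the AR app.
INDOOR, OUTDOOR, FACE, OTHER = "indoor", "outdoor", "face", "other"

def scene_type_from_mode(photo_mode: str, face_area_ratio: float = 0.0,
                         face_threshold: float = 0.30) -> str:
    """Return the scene type for a recommended photographing mode.

    face_area_ratio is the fraction of the frame covered by detected faces;
    a ratio at or above face_threshold (30% in the description above) is
    treated as a face scene regardless of the recommended mode.
    """
    if face_area_ratio >= face_threshold or photo_mode == "face":
        return FACE
    if photo_mode == "indoor":
        return INDOOR
    if photo_mode == "outdoor":
        return OUTDOOR
    return OTHER
```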
Referring to fig. 3(a), the user may enter the AR application by clicking (e.g., tapping) an AR icon 301 on an operation interface 302 of the electronic device. After entering the AR application, the electronic device may automatically open the camera application and invoke the scene recognition function in the camera application to determine the scene type of the current real scene.
Referring to fig. 3(b), the user may enter the AR application by clicking (e.g., tapping) the AR icon 301 on the operation interface 302 of the electronic device. After entering the AR application, the user may open the camera application by clicking the "yes" button in an information prompt box 303, and the electronic device then invokes the scene recognition function in the camera application to determine the scene type of the current real scene.
Referring to fig. 3(c), if the user clicks the button "no" in the information prompt box 303, the electronic device does not open the camera application, but enters the operation interface 304 of the AR application.
Referring to fig. 3(d), the user may enter the AR application by clicking (e.g., tapping) the AR icon 301 on the operation interface 302 of the electronic device. After entering the AR application, the user may open the camera application by clicking an AR function button 305 on an operation interface 304 of the AR application, and the electronic device then invokes the scene recognition function in the camera application to determine the scene type of the current real scene.
The 3D illumination estimation method is described below with the scene types of the current real scene being an indoor scene, an outdoor scene, and a face scene, respectively, as examples.
If the scene type of the real scene determined by the electronic device is an indoor scene (e.g., a scene shown in fig. 4 (a)), the implementation process of the 3D illumination estimation method may refer to the following embodiment corresponding to the method shown in fig. 5.
If the scene type of the real scene determined by the electronic device is an outdoor scene (e.g., a scene shown in fig. 4 (b)), the implementation process of the 3D illumination estimation method may refer to the following embodiment corresponding to the method shown in fig. 10.
If the scene type of the real scene determined by the electronic device is a face scene (e.g., the scene shown in fig. 4 (c)), the implementation process of the 3D illumination estimation method may refer to the following embodiment corresponding to the method shown in fig. 12.
It should be noted that fig. 4(a), fig. 4(b), and fig. 4(c) are only one example of an indoor scene, an outdoor scene, and a face scene, and it should be understood by those skilled in the art that the indoor scene, the outdoor scene, and the face scene may also be scenes including other contents, and the embodiment of the present application is not particularly limited.
As shown in fig. 5, for the 3D illumination estimation method provided in the embodiment of the present application, the 3D illumination estimation method is applicable to the AR technology, and the 3D illumination estimation method includes the following steps:
step 501, the electronic device acquires image information of a first scene.
The first scene may be a current real scene, and if the scene type of the first scene is an indoor scene, the image information of the first scene may include color information of a pixel in the first scene and depth information of the pixel in the first scene.
The color information of the pixel point in the first scene may be used to indicate the color of the pixel point in the first scene, for example: the color information of the pixel point in the first scene may be a red (R), green (G), or blue (B) color value of the pixel point in the first scene. The color information of the pixel point in the first scene may be obtained by a main camera of the electronic device, for example: the main camera of the electronic device can shoot at least one image of a first scene or a video of the first scene, and the electronic device obtains color information of pixel points in the first scene according to the at least one image of the first scene or the video of the first scene. If the main camera of the electronic device shoots two or more images, the electronic device can acquire color information of the pixel points in each image, and the average value of the color information of the pixel points in the two or more images is used as the color information of the pixel points in the first scene. If the main camera of the electronic device shoots 10 frames of video of the first scene, the electronic device can obtain color information of pixel points in each frame of video, and the average value of the color information of the pixel points in the 10 frames of video is used as the color information of the pixel points in the first scene.
The depth information of the pixel point in the first scene may be used to indicate the position of the pixel point in the first scene, for example: the depth information of the pixel points in the first scene may be spatial coordinates of the pixel points in the first scene. The depth information of the pixel points in the first scene may be obtained by a depth sensor of the electronic device, for example: the depth sensor of the electronic device can shoot at least one image of a first scene or a video of the first scene, and the electronic device obtains depth information of a pixel point in the first scene according to the at least one image of the first scene or the video of the first scene. If the depth sensor of the electronic device shoots two or more images, the electronic device can acquire the depth information of the pixel points in each image, and the average value of the depth information of the pixel points in the two or more images is used as the depth information of the pixel points in the first scene. If the depth sensor of the electronic device shoots 10 frames of video of the first scene, the electronic device can acquire depth information of pixel points in each frame of video, and the average value of the depth information of the pixel points in the 10 frames of video is used as the depth information of the pixel points in the first scene.
It should be noted that the at least one image or video of the first scene captured by the main camera of the electronic device and the at least one image or video of the first scene captured by the depth sensor of the electronic device have the same positional reference.
Illustratively, the depth sensor may be a time of flight (ToF) sensor. The electronic device may quickly create the first simulated scene with a smaller number of frames according to the depth information of the pixel points in the first scene obtained by the ToF.
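The frame-averaging described above can be sketched as follows. This is a minimal illustration assuming the color frames and depth frames are already aligned to the same positional reference and supplied as NumPy arrays; the array shapes and function name are assumptions, not part of the original description.

```python
import numpy as np

def average_color_and_depth(color_frames, depth_frames):
    """Average per-pixel color and depth over the captured frames.

    color_frames: sequence of HxWx3 RGB arrays from the main camera.
    depth_frames: sequence of HxW depth arrays from the depth (e.g., ToF) sensor.
    Returns the mean color image and mean depth map that serve as the image
    information of the first scene.
    """
    mean_color = np.mean(np.stack(color_frames, axis=0), axis=0)
    mean_depth = np.mean(np.stack(depth_frames, axis=0), axis=0)
    return mean_color, mean_depth
```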
Optionally, when it is determined that the scene type of the first scene is the indoor scene, the electronic device automatically captures at least one image of the first scene or a video of the first scene through a main camera and a depth sensor of the electronic device, and then obtains image information of the first scene according to the at least one image of the first scene or the video of the first scene.
For example, referring to fig. 6(a), a user may enter a camera application by clicking an AR function button 602 on an AR application interface 601, and when it is determined that the scene type of the first scene is an indoor scene, the electronic device may automatically capture a video of the first scene through a main camera and a depth sensor of the electronic device, and then acquire image information of the first scene according to the video of the first scene.
Optionally, when it is determined that the scene type of the first scene is an indoor scene, at least one image of the first scene or a video of the first scene is manually captured by the user, and the electronic device then acquires the image information of the first scene according to the at least one image of the first scene or the video of the first scene.
For example, referring to fig. 6(b), the user may click the button "ok" in the information prompt box 603, then take a picture through the main camera and the depth sensor of the electronic device by clicking the photographing button 604 in the photographing mode, and then obtain the image information of the first scene according to the taken picture.
Step 502, the electronic device establishes a first simulated scene according to the image information of the first scene.
The first simulated scene may be a three-dimensional virtual scene established according to image information of the first scene. The first simulated scene may include at least two illumination detection spheres and a point cloud.
The illumination detection ball may be a virtual ball in the first simulation scene, and the virtual ball is used for indicating a spherical harmonic coefficient of a position where the illumination detection ball is located.
Alternatively, the illumination detection ball may also be referred to as a light probe (light probe) or named by other names, and the embodiments of the present application are not limited thereto. The spherical harmonic coefficient of the illumination detection ball comprises color information of the illumination detection ball and position information of the illumination detection ball, and the color information of the illumination detection ball is used for indicating the color of the illumination detection ball; the position information of the illumination detection ball is used for indicating the position of the illumination detection ball in the first simulated scene.
It should be noted that the color information of the illumination detection ball included in the spherical harmonic coefficient of the illumination detection ball may include a color value and an illumination direction of the illumination detection ball. It should be understood by those skilled in the art that the color information included in the spherical harmonic coefficients in the embodiment of the present application includes color values and illumination directions, which are not described in detail below.
Optionally, the number and position of the illumination detection balls in the first simulated scene are preset.
The point cloud may be a collection of points in the first simulated scene that indicate the geometry, object properties, and color distribution of objects in the first simulated scene. The object properties may include, among other things, a light source and a non-light source. For example, the point cloud may be divided into a light source point cloud and a non-light source point cloud according to the object attribute, the light source point cloud being a set of points belonging to the light source object and identified by a deep learning algorithm (e.g., Convolutional Neural Networks (CNNs)), and the non-light source point cloud being a set of points belonging to the non-light source object and identified by the deep learning algorithm (e.g., CNNs). The light source object may include a light, a window, or a strong reflection, etc.
Optionally, the point cloud is a set of sampling points obtained by digitizing image information of the first scene. The information of the point cloud may include position information of each point in the point cloud indicating a position of the point in the first simulated scene and irradiance (irradiance) information of the point indicating an irradiance of the point.
It should be noted that the electronic device may obtain the information of the point cloud according to the image information of the first scene.
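As a sketch of how the point cloud information might be obtained from the image information, the following back-projects sampled pixels into 3D using assumed pinhole-camera intrinsics and attaches an irradiance value derived from the pixel color. The intrinsics, the sampling stride, and the use of normalized RGB as an irradiance proxy are illustrative assumptions rather than details from the description.

```python
import numpy as np

def build_point_cloud(mean_color, mean_depth, fx, fy, cx, cy, stride=4):
    """Sample points from the depth map and attach position and irradiance info.

    mean_color: HxWx3 RGB image (0..255); mean_depth: HxW depth map in meters.
    fx, fy, cx, cy: assumed pinhole intrinsics shared by the aligned color/depth images.
    Returns (positions Nx3, irradiance Nx3) for the sampled points of the point cloud.
    """
    h, w = mean_depth.shape
    positions, irradiance = [], []
    for v in range(0, h, stride):          # sample every `stride` pixels
        for u in range(0, w, stride):
            z = mean_depth[v, u]
            if z <= 0:                     # skip invalid depth readings
                continue
            x = (u - cx) * z / fx          # back-project to camera space
            y = (v - cy) * z / fy
            positions.append((x, y, z))
            # Use the normalized RGB color as a simple per-channel irradiance proxy.
            irradiance.append(mean_color[v, u] / 255.0)
    return np.array(positions), np.array(irradiance)
```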
For example, fig. 7 shows a simulated scene: the illumination detection spheres may be the spheres shown in fig. 7, and the point cloud may be the black points shown in fig. 7, which are only a portion of the point cloud in the simulated scene.
It should be noted that the first simulated scene may include more or less illumination detection balls than the illumination detection ball shown in fig. 7, and the first simulated scene may include more point clouds than the point clouds shown in fig. 7, which is not limited in this embodiment of the application.
Step 503, the electronic device estimates the spherical harmonic coefficient of the illumination detection ball according to the information of the point cloud included in the first simulated scene.
Optionally, for a first illumination detection ball, the first illumination detection ball is any illumination detection ball included in the first simulated scene, and the spherical harmonic coefficient of the first illumination detection ball is estimated according to the information of the point cloud included in the first simulated scene, which may include step 1 and step 2:
step 1, the electronic equipment obtains a first irradiance of a first illumination detection ball according to the point cloud information.
Optionally, for any point in the point cloud, the electronic device obtains, according to the position information and the irradiance information of the point, a second irradiance of the point on the first illumination detection ball. The electronic device obtains a first irradiance of the first illumination detection ball according to a second irradiance of each point in the point cloud in the first illumination detection ball.
Example one, the electronic device obtaining a first irradiance of the first illumination detection sphere from a second irradiance of each point in the point cloud at the first illumination detection sphere may include: the electronic device weights and sums the second irradiance of each point in the point cloud at the first illumination detection sphere to obtain the first irradiance of the first illumination detection sphere. For example: if the point cloud includes 6 points, the second irradiance and the weighting coefficient of the 6 points at the position of the first illumination detection ball are shown in table 1. The first irradiance is q1 × L1+ q2 × L2+ q3 × L3+ q4 × L4+ q5 × L5+ q6 × L6.
TABLE 1
Serial number | Second irradiance | Weighting coefficient
1 | L1 | q1
2 | L2 | q2
3 | L3 | q3
4 | L4 | q4
5 | L5 | q5
6 | L6 | q6
The weighting coefficient of each point may be determined from the positions of the point and the first illumination detection sphere in the first simulated scene. For example, the larger the straight-line distance between a point and the first illumination detection sphere, the smaller the weighting coefficient of the point; the smaller the straight-line distance, the larger the weighting coefficient; the larger the included angle between a point and the first illumination detection sphere, the smaller the weighting coefficient; and the smaller the included angle, the larger the weighting coefficient.
If the position coordinates of any point in the point cloud in the first simulated scene are (x1, y1, z1), and the position coordinates of the first illumination detection sphere in the first simulated scene are (x2, y2, z2), the straight-line distance between the point and the first illumination detection sphere may be

d = √((x1 − x2)² + (y1 − y2)² + (z1 − z2)²).

If the included angle between the point and the first illumination detection sphere is θ, θ may likewise be determined from the two position coordinates (the corresponding expression is given only as an image in the original publication).
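A minimal sketch of the weighted summation in example one follows. The inverse-distance weighting used here is only one possible choice consistent with the rule that the weighting coefficient decreases as the straight-line distance grows, and the normalization of the weights is likewise an assumption, not something prescribed by the description.

```python
import numpy as np

def first_irradiance(points, irradiances, probe_pos, eps=1e-6):
    """Weighted sum of the second irradiances of the points at one illumination
    detection sphere (example one). Weights fall off with straight-line distance."""
    probe_pos = np.asarray(probe_pos, dtype=float)
    total = np.zeros(3)                    # per-channel (RGB) accumulator
    weight_sum = 0.0
    for p, L in zip(points, irradiances):
        d = np.linalg.norm(np.asarray(p, dtype=float) - probe_pos)
        q = 1.0 / (d + eps)                # larger distance -> smaller weight
        total += q * np.asarray(L, dtype=float)
        weight_sum += q
    return total / max(weight_sum, eps)    # normalize so the weights sum to 1
```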
Example two, the electronic device obtaining the first irradiance of the first illumination detection sphere from the second irradiance of each point in the point cloud at the first illumination detection sphere may include: the electronic device weights and sums the second irradiances of the first effective points at the first illumination detection sphere to obtain the first irradiance of the first illumination detection sphere. The first effective point may be a point in the point cloud that is within the visible range of the first illumination detection sphere.
Optionally, the visible range of the first illumination detection ball is preset, or the electronic device determines according to the position of the first illumination detection ball in the first scene and the position of the point in the point cloud in the first scene.
As shown in FIG. 8, a point 801 in the point cloud is within the visible range of the illumination detection spheres LP802 and LP803, and the point 801 is not within the visible range of the illumination detection sphere LP804.
Illustratively, if the point cloud includes 9 points, 6 of the points are within the visible range of the first illumination detection sphere. The second irradiance and the weighting coefficients of the 9 points at the positions of the first illumination detection balls are shown in table 2. The first irradiance is q1 × L1+ q2 × L2+ q3 × L3+ q4 × L4+ q5 × L5+ q6 × L6.
TABLE 2
(Table 2 is reproduced as an image in the original publication; it lists the second irradiance and the weighting coefficient of each of the 9 points at the position of the first illumination detection sphere, with the points outside the visible range of the sphere assigned a weighting coefficient of 0.)
It should be noted that, if a certain point in the point cloud is not within the visible range of the first illumination detection sphere, the weighting coefficient of the point relative to the first illumination detection sphere is 0, and the second irradiance of the point at the position of the first illumination detection sphere may be 0 or a very small value, that is, the second irradiance of the point at the position of the first illumination detection sphere may be negligible.
The weighting coefficient of each point may be determined from the positions of the point and the first illumination detection sphere in the first simulated scene. For example, the larger the straight-line distance between a point and the first illumination detection sphere, the smaller the weighting coefficient of the point; the smaller the straight-line distance, the larger the weighting coefficient; the larger the included angle between a point and the first illumination detection sphere, the smaller the weighting coefficient; and the smaller the included angle, the larger the weighting coefficient.
For the description of the linear distance between the point and the first illumination detection ball and the included angle between the point and the first illumination detection ball, reference may be made to the corresponding description in the first example, and details are not repeated here.
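Example two can be expressed on top of the previous sketch by filtering to the first effective points before summing. The visibility predicate is assumed to be supplied by the scene representation (preset, or derived from the positions as described above); the description does not fix a particular visibility test, so it is left as a parameter here.

```python
def first_irradiance_visible(points, irradiances, probe_pos, is_visible):
    """Example two: only first effective points (points within the visible range
    of the first illumination detection sphere) contribute to the weighted sum.

    is_visible(point, probe_pos) -> bool is an assumed, externally supplied
    visibility test. Reuses first_irradiance() from the sketch of example one.
    """
    effective_pts, effective_irr = [], []
    for p, L in zip(points, irradiances):
        if is_visible(p, probe_pos):
            effective_pts.append(p)
            effective_irr.append(L)
    return first_irradiance(effective_pts, effective_irr, probe_pos)
```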
Example three, the electronic device obtaining the first irradiance of the first illumination detection sphere from the second irradiance of each point in the point cloud at the first illumination detection sphere may include: the electronic device weights and sums the second irradiances of the second effective points at the first illumination detection sphere to obtain the first irradiance of the first illumination detection sphere. The second effective point may be a point in the point cloud that is within the visible range of the first illumination detection sphere and whose second irradiance at the first illumination detection sphere is greater than or equal to a preset threshold. For example, if the point cloud includes 9 points, the second irradiance and the weighting coefficients of the 9 points at the position of the first illumination detection sphere are shown in table 3, and the first irradiance is q1 × L1 + q2 × L2 + q3 × L3 + q4 × L4 + q5 × L5.
TABLE 3
(Table 3 is reproduced as an image in the original publication; it lists the second irradiance and the weighting coefficient of each of the 9 points at the position of the first illumination detection sphere, with the points whose second irradiance is below the preset threshold assigned a weighting coefficient of 0 or a small value.)
It should be noted that, if the second irradiance of a certain point in the point cloud in the first illumination detection sphere is smaller than the preset threshold, the weighting coefficient of the certain point relative to the first illumination detection sphere may be 0 or a small value.
The weighting coefficient of each point may be determined from the positions of the point and the first illumination detection sphere in the first simulated scene. For example, the larger the straight-line distance between a point and the first illumination detection sphere, the smaller the weighting coefficient of the point; the smaller the straight-line distance, the larger the weighting coefficient; the larger the included angle between a point and the first illumination detection sphere, the smaller the weighting coefficient; and the smaller the included angle, the larger the weighting coefficient.
For the description of the linear distance between the point and the first illumination detection ball and the included angle between the point and the first illumination detection ball, reference may be made to the corresponding description in the first example, and details are not repeated here.
And 2, the electronic equipment estimates the spherical harmonic coefficient of the first illumination detection ball according to the first irradiance of the first illumination detection ball.
Optionally, the spherical harmonic coefficients of the first illumination detection sphere include spherical harmonic coefficients of three color channels (R, G and B).
The spherical harmonic coefficient of the first illumination detection sphere in one color channel may be a 9-dimensional floating-point coefficient vector of orders 0 to 2 (also referred to as L2 spherical harmonics). Thus, the spherical harmonic coefficients of the first illumination detection sphere may have 3 × 9 dimensions.
Alternatively, the spherical harmonic coefficient of any one color channel of the first illumination detection sphere may be expressed as the 9-dimensional coefficient vector

(c_0^0, c_1^{-1}, c_1^0, c_1^1, c_2^{-2}, c_2^{-1}, c_2^0, c_2^1, c_2^2),

and the spherical harmonic coefficients of the three color channels of the first illumination detection sphere may be expressed as the 3 × 9 matrix obtained by stacking the coefficient vectors of the R, G, and B channels.
the following describes an example of the electronic device estimating the spherical harmonic coefficient of the first illumination detection sphere in any color channel of the first illumination detection sphere according to the first irradiance.
If the spherical harmonic coefficient of any one color channel of the first illumination detection sphere is the set of elements c_l^m (0 ≤ l ≤ 2, −l ≤ m ≤ l), the electronic device may calculate each element of the spherical harmonic coefficient of the first illumination detection sphere at that color channel according to the following formula:

c_l^m = (4π / N) · Σ_{j=1}^{N} L(x_j) · Y_l^m(x_j)

where x_j is any one of the selected directions on the first illumination detection sphere, L(x_j) denotes the third irradiance in that direction on the first illumination detection sphere, and Y_l^m(x_j) denotes the spherical harmonic basis function evaluated in that direction.

It should be noted that the electronic device may select N directions on the first illumination detection sphere, the N directions covering the sphere of the first illumination detection sphere, where N is a positive integer and 0 < j ≤ N. The electronic device may obtain the third irradiance L(x_j) in any direction on the first illumination detection sphere according to the first irradiance of the first illumination detection sphere. The electronic device may obtain Y_l^m(x_j) according to the following formula:

Y_l^m(θ_j, φ_j) = √( (2l + 1)(l − m)! / (4π (l + m)!) ) · P_l^m(cos θ_j) · e^{i m φ_j}

where (θ_j, φ_j) are the spherical coordinates of x_j on the sphere surface of the first illumination detection sphere, Y_l^m is the spherical harmonic function of degree l and order m (0 ≤ m ≤ l), i is the imaginary unit, and P_l^m is the associated Legendre polynomial:

P_l^m(x) = (−1)^m · (1 − x²)^{m/2} · (d^m / dx^m) P_l(x)

where P_l(x) is the Legendre polynomial of order l:

P_l(x) = (1 / (2^l · l!)) · (d^l / dx^l) (x² − 1)^l
optionally, the electronic device calculates spherical harmonic coefficients of the first illumination detection ball in the three color channels according to the above formula, that is, the spherical harmonic coefficients of the first illumination detection ball.
Optionally, the electronic device estimates the spherical harmonic coefficients of each of the illumination detection spheres according to step 1 and step 2 above.
And step 504, the electronic equipment estimates the spherical harmonic coefficient of the position of the virtual object according to the spherical harmonic coefficient of the illumination detection ball.
Wherein the virtual object may be a visual object made of electronic data that the user wants to display on the display screen of the electronic device through the AR application. For example, the virtual object may be a picture, a 3D model, or the like.
Optionally, the electronic device obtains the position of the virtual object according to the user input; alternatively, the electronic device presets the location of the virtual object. Wherein the position of the virtual object is used to indicate the spatial coordinates of the virtual object in the first simulated scene.
For example: after acquiring the image information of the first scene, the electronic device may prompt the user to input the position of the virtual object through a prompt box 901 in fig. 9. The user may input the position of the virtual object by clicking (e.g., tapping) a tea table 902 in the image information of the first scene, and the electronic device may place the rendered virtual object at the position input by the user.
In one example, the electronic device estimating spherical harmonic coefficients of the virtual object from the spherical harmonic coefficients of the illumination detection sphere may include: the electronic equipment carries out weighted summation on the spherical harmonic coefficients of the illumination detection ball to obtain the spherical harmonic coefficients of the virtual object.
For example, if the first simulated scene includes 6 illumination detection spheres, and the spherical harmonic coefficients and weighting coefficients of the 6 illumination detection spheres are shown in table 4, the spherical harmonic coefficients of the virtual object are 0 × y1 + 0.3 × y2 + 0.1 × y3 + 0.2 × y4 + 0.4 × y5 + 0 × y6. The 6 weighting coefficients may be determined according to the distances between the 6 illumination detection spheres and the virtual object.
TABLE 4
Serial number | Spherical harmonic coefficient of each illumination detection ball | Weighting coefficient
1 | y1 | 0
2 | y2 | 0.3
3 | y3 | 0.1
4 | y4 | 0.2
5 | y5 | 0.4
6 | y6 | 0
In yet another example, the electronic device estimating the spherical harmonic coefficients of the virtual object from the spherical harmonic coefficients of the illumination detection spheres may include: the electronic device performs weighted summation on the spherical harmonic coefficients of those illumination detection spheres whose distance to the virtual object is less than or equal to a first distance, to obtain the spherical harmonic coefficients of the virtual object.
For example, if the first distance is 0.5 m, the first simulated scene includes 6 illumination detection spheres, and the distances between the 6 illumination detection spheres and the virtual object, the spherical harmonic coefficients of the 6 illumination detection spheres, and the weighting coefficients are shown in table 5, then the spherical harmonic coefficients of the virtual object are 0.3 × y2 + 0.1 × y3 + 0.2 × y4 + 0.4 × y5. The 6 weighting coefficients may be determined according to the distances between the 6 illumination detection spheres and the virtual object.
TABLE 5
Serial number | Spherical harmonic coefficient of each illumination detection ball | Weighting coefficient | Distance to virtual object
1 | y1 | 0 | 1 m
2 | y2 | 0.3 | 0.2 m
3 | y3 | 0.1 | 0.4 m
4 | y4 | 0.2 | 0.3 m
5 | y5 | 0.4 | 0.1 m
6 | y6 | 0 | 2 m
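The two interpolation examples above can be sketched together as follows. The inverse-distance weights, normalized over the spheres within the first distance, are one plausible way to realize weights "determined according to the distances" and are not mandated by the description; the 0.5 m first distance is the example value used above.

```python
import numpy as np

def virtual_object_sh(probe_sh, probe_pos, object_pos, first_distance=0.5, eps=1e-6):
    """Weighted sum of the SH coefficients of nearby illumination detection spheres.

    probe_sh: list of 3x9 coefficient arrays; probe_pos: list of sphere positions.
    Only spheres within `first_distance` of the virtual object contribute, and
    closer spheres receive larger weights.
    """
    object_pos = np.asarray(object_pos, dtype=float)
    result = np.zeros((3, 9))
    weight_sum = 0.0
    for sh, pos in zip(probe_sh, probe_pos):
        d = np.linalg.norm(np.asarray(pos, dtype=float) - object_pos)
        if d > first_distance:           # skip spheres farther than the first distance
            continue
        w = 1.0 / (d + eps)
        result += w * np.asarray(sh, dtype=float)
        weight_sum += w
    return result / max(weight_sum, eps)
```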
It should be noted that, in practical applications, the electronic device may estimate, according to step 1 and step 2 above, the spherical harmonic coefficients only of the illumination detection spheres whose distance to the virtual object is smaller than or equal to the first distance, and then estimate the spherical harmonic coefficients of the virtual object from those coefficients, without estimating the spherical harmonic coefficients of the illumination detection spheres whose distance to the virtual object is greater than the first distance.
Optionally, the electronic device may render the virtual object according to spherical harmonic coefficients of the virtual object.
After the electronic equipment determines that the scene type of the first scene is an indoor scene, image information of the first scene is obtained through a main camera and a depth sensor of the electronic equipment, a first simulated scene comprising at least two illumination detection balls and point clouds is established according to the image information of the first scene, the spherical harmonic coefficient of the illumination detection balls is estimated according to information of the point clouds included in the first simulated scene, and finally the spherical harmonic coefficient of the virtual object is estimated according to the spherical harmonic coefficient of the illumination detection balls.
Fig. 5 above is an explanation of illumination estimation of an indoor scene, but since there are scenes such as outdoors and human faces in practical applications, the method shown in fig. 10 below may be adopted when an outdoor scene is detected.
As shown in fig. 10, for a 3D illumination estimation method provided in an embodiment of the present application, the 3D illumination estimation method includes the following steps:
step 1001, the electronic device obtains image information of a first scene.
The first scene may be a current real scene, and if the scene type of the first scene is an outdoor scene, the image information of the first scene may include color information of a pixel point in the first scene.
The color information of the pixel point in the first scene may be used to indicate the color of the pixel point in the first scene, for example: the color information of the pixel point in the first scene may be an RGB color value of the pixel point in the first scene. The color information of the pixel point in the first scene may be obtained by one or more cameras of the electronic device (e.g., a wide-angle camera, a telephoto camera, etc.), for example: one or more cameras of the electronic device may capture at least one image of a first scene or a video of the first scene, and the electronic device obtains color information of a pixel point in the first scene according to the at least one image of the first scene or the video of the first scene. If one or more cameras of the electronic equipment shoot two or more images, the electronic equipment can acquire color information of pixel points in each image, and the average value of the color information of the pixel points in the two or more images is used as the color information of the pixel points in the first scene. If one or more cameras of the electronic device take 10 frames of video of the first scene, the electronic device may obtain color information of the pixel points in each frame of video, and use an average value of the color information of the pixel points in the 10 frames of video as the color information of the pixel points in the first scene.
For example, the electronic device may automatically capture at least one image of the first scene or a video of the first scene through a camera of the electronic device when it is determined that the scene type of the first scene is the outdoor scene, and then acquire image information of the first scene according to the at least one image of the first scene or the video of the first scene.
For example, when it is determined that the scene type of the first scene is an outdoor scene, at least one image of the first scene or a video of the first scene may be manually captured by the user, and the electronic device then acquires the image information of the first scene according to the at least one image of the first scene or the video of the first scene.
Step 1002, the electronic device obtains a sky plot corresponding to the first scene according to the image information of the first scene.
The sky map corresponding to the first scene is used to indicate the illumination distribution of the first scene. For example, in fig. 11, the first row shows sky maps corresponding to different illumination distributions, and the second row shows pictures of a virtual object rendered with the sky maps in the first row.
Optionally, the electronic device obtains the sky plot corresponding to the first scene by inputting the image information of the first scene into a first Convolutional Neural Network (CNN).
For example, the image information of the first scene may be passed through multi-layer functions in the first CNN, such as convolutional layers, pooling layers, rectified linear unit (ReLU) layers, batch normalization layers, and a loss function layer, to obtain the sky map corresponding to the image information of the first scene.
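A minimal sketch of the kind of network the first CNN could be is given below in PyTorch. The layer sizes, the 32 × 128 equirectangular sky-map resolution, and the class name are illustrative assumptions, not details from the description.

```python
import torch
import torch.nn as nn

class SkyMapCNN(nn.Module):
    """Illustrative 'first CNN': maps an RGB image of the first scene to a
    low-resolution equirectangular sky map describing the illumination."""
    def __init__(self, sky_h=32, sky_w=128):
        super().__init__()
        self.features = nn.Sequential(    # convolution / batch-norm / ReLU / pooling layers
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 3 * sky_h * sky_w)
        self.sky_h, self.sky_w = sky_h, sky_w

    def forward(self, image):             # image: Bx3xHxW
        z = self.features(image).flatten(1)
        sky = self.head(z).view(-1, 3, self.sky_h, self.sky_w)
        return torch.relu(sky)            # non-negative radiance values
```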
And step 1003, the electronic equipment estimates the spherical harmonic coefficient of the virtual object according to the information of the sky plot.
The information of the sky map may include the spherical harmonic coefficients of the sky map.
Wherein the virtual object may be a visual object made of electronic data that the user wants to display on the display screen of the electronic device through the AR application. For example, the virtual object may be a picture, a 3D model, or the like.
Optionally, the electronic device obtains the position of the virtual object according to the user input; alternatively, the electronic device presets the location of the virtual object. Wherein the position of the virtual object is used to indicate the spatial coordinates of the virtual object in the first simulated scene.
Optionally, the electronic device obtains the spherical harmonic coefficients of the sky plot according to the information of the sky plot, uses the spherical harmonic coefficients of the sky plot as the spherical harmonic coefficients of the virtual object, and renders the virtual object according to the spherical harmonic coefficients of the virtual object. For example: the electronic device obtains the main illumination direction, the compensated color temperature, and the illumination intensity of the virtual object according to the spherical harmonic coefficients of the first scene, and renders the virtual object according to the main illumination direction, the compensated color temperature, and the illumination intensity.
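As an illustration of how the spherical harmonic coefficients of the sky plot and a main illumination direction might be obtained, the following sketch projects an equirectangular sky map onto the 3 × 9 coefficients and reads a light direction from the order-1 coefficients. It reuses the real_sh_basis helper from the sketch under step 503; the solid-angle weighting and the direction convention are assumptions, not details from the description.

```python
import numpy as np
# real_sh_basis is the order-0..2 basis helper sketched under step 503 above.

def skymap_sh(sky):
    """Project an HxWx3 equirectangular sky map onto 3x9 SH coefficients."""
    h, w, _ = sky.shape
    coeffs = np.zeros((3, 9))
    total_weight = 0.0
    for r in range(h):
        theta = (r + 0.5) / h * np.pi             # polar angle of this row
        sin_t = np.sin(theta)                     # solid-angle weight of the row
        for c in range(w):
            phi = (c + 0.5) / w * 2.0 * np.pi     # azimuth of this column
            d = (np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta))
            coeffs += np.outer(sky[r, c], real_sh_basis(d)) * sin_t
            total_weight += sin_t
    return coeffs * (4.0 * np.pi / total_weight)  # approximate integral over the sphere

def main_light_direction(coeffs):
    """Estimate a main illumination direction from the order-1 coefficients,
    averaged over the three color channels (indices 1..3 are Y_1^-1, Y_1^0, Y_1^1)."""
    c = coeffs.mean(axis=0)
    d = np.array([c[3], c[1], c[2]])              # x, y, z components in this basis
    return d / (np.linalg.norm(d) + 1e-6)
```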
After the electronic equipment determines that the scene type of the first scene is the outdoor scene, the image information of the first scene is obtained through one or more cameras of the electronic equipment, a sky map corresponding to the first scene is obtained according to the image information of the first scene, and then the spherical harmonic coefficient of the virtual object is estimated according to the information of the sky map.
The above fig. 5 and fig. 10 respectively describe the illumination estimation of the indoor scene and the outdoor scene, but since there are scenes such as human faces in practical applications, the following method shown in fig. 12 may be adopted when a human face scene is detected.
As shown in fig. 12, for a 3D illumination estimation method provided in an embodiment of the present application, the 3D illumination estimation method includes the following steps:
step 1201, the electronic device acquires image information of a first scene.
The first scene may be a current real scene, and if the scene type of the first scene is a face scene, the image information of the first scene may include color information of a pixel point in the first scene.
It should be noted that, if the scene type of the first scene is a face scene, the specific description of the color information of the pixel point in the first scene may refer to the description in step 1001.
For example, the electronic device may automatically capture at least one image of the first scene or a video of the first scene through a camera of the electronic device when it is determined that the scene type of the first scene is a face scene, and then acquire image information of the first scene according to the at least one image of the first scene or the video of the first scene.
For example, when it is determined that the scene type of the first scene is a face scene, at least one image of the first scene or a video of the first scene may be manually captured by the user, and the electronic device then acquires the image information of the first scene according to the at least one image of the first scene or the video of the first scene.
Step 1202, the electronic device estimates spherical harmonic coefficients of the virtual object according to the image information of the first scene.
Optionally, the electronic device obtains the spherical harmonic coefficients of the first scene according to the image information of the first scene, where the spherical harmonic coefficients of the first scene are the spherical harmonic coefficients of the virtual object. The electronic device may render the virtual object according to the spherical harmonic coefficients of the virtual object. For example: the electronic device obtains the main illumination direction, the compensated color temperature, and the illumination intensity of the virtual object according to the spherical harmonic coefficients of the first scene, and renders the virtual object according to the main illumination direction, the compensated color temperature, and the illumination intensity.
Optionally, the electronic device obtains the spherical harmonic coefficient of the first scene by inputting the image information of the first scene into the second CNN.
For example, the spherical harmonic coefficients of the first scene may be obtained after the image information of the first scene passes through multi-layer functions in the second CNN, such as convolutional layers, pooling layers, rectified linear unit (ReLU) layers, batch normalization layers, and a loss function layer.
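For the face branch, the second CNN can be sketched as a direct regressor from the image to the 3 × 9 spherical harmonic coefficients. As with the sky-map sketch above, the architecture, layer sizes, and class name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FaceSHCNN(nn.Module):
    """Illustrative 'second CNN': regresses the 3x9 spherical harmonic
    coefficients of the first scene directly from a face image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 3 * 9)   # one 9-dimensional coefficient vector per channel

    def forward(self, image):              # image: Bx3xHxW
        z = self.features(image).flatten(1)
        return self.head(z).view(-1, 3, 9)
```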
Optionally, the spherical harmonic coefficient of the first scene is a spherical harmonic coefficient of a face in the first scene.
After the electronic equipment determines that the scene type of the first scene is the face scene, the image information of the first scene is obtained through one or more cameras of the electronic equipment, and the spherical harmonic coefficient of the virtual object is estimated according to the image information of the first scene.
It is understood that the electronic device includes hardware structures and/or software modules for performing the functions in order to realize the functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional modules according to the method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. It should be noted that, in the embodiment of the present application, the division of the module is schematic, and is only one logic function division, and there may be another division manner in actual implementation.
For example, fig. 13 shows a schematic structural diagram of the electronic device 130 in the case of dividing each functional module by corresponding functions. The electronic device 130 includes: an obtaining module 1301, an establishing module 1302, and an estimating module 1303.
An obtaining module 1301, configured to obtain image information of a first scene; the image information of the first scene comprises color information of pixel points in the first scene and depth information of the pixel points in the first scene.
An establishing module 1302, configured to establish a first simulated scene according to the image information of the first scene; the first simulation scene comprises at least two illumination detection balls and a point cloud.
An estimating module 1303, configured to estimate a spherical harmonic coefficient of the illumination detection ball according to information of the point cloud included in the first simulated scene; wherein the information of the point cloud comprises position information and irradiance information of points in the point cloud.
The estimating module 1303 is further configured to estimate a spherical harmonic coefficient of the position of the virtual object according to the spherical harmonic coefficient of the illumination detection sphere.
Optionally, the estimating module 1303 is specifically configured to obtain a first irradiance of the first illumination detection ball according to the information of the point cloud; the estimating module 1303 is further specifically configured to estimate a spherical harmonic coefficient of the first illumination detection sphere according to the first irradiance of the first illumination detection sphere; the first illumination detection ball is any illumination detection ball included in the first simulated scene.
Optionally, the obtaining module 1301 is specifically configured to obtain a first irradiance of the first illumination detection ball according to information of a first effective point, where the first effective point is a point in the point cloud within a visible range of the first illumination detection ball.
Optionally, the estimating module 1303 is further specifically configured to perform weighted summation on the spherical harmonic coefficients of the illumination detection sphere to obtain the spherical harmonic coefficients of the position of the virtual object.
Optionally, a distance between the illumination detection ball and the virtual object is smaller than or equal to the first distance.
Optionally, the obtaining module 1301 is further configured to obtain a position of the virtual object according to the user input; alternatively, the position of the virtual object is preset.
All relevant contents of the operations related to the method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
In the present embodiment, the electronic apparatus 130 is presented in a form in which the respective functional modules are divided in an integrated manner. A "module" herein may refer to a particular ASIC, a circuit, a processor and memory that execute one or more software or firmware programs, an integrated logic circuit, and/or other device that provides the described functionality. In a simple embodiment, one skilled in the art will recognize that the electronic device 130 may take the form shown in FIG. 1.
For example, the processor 110 in fig. 1 may cause the electronic device 130 to execute the 3D illumination estimation method in the above method embodiments by calling computer-executable instructions stored in the internal memory 121.
Illustratively, the functions/implementation procedures of the obtaining module 1301, the establishing module 1302, and the estimating module 1303 in fig. 13 may be implemented by the processor 110 in fig. 1 calling computer-executable instructions stored in the internal memory 121.
Since the electronic device 130 provided in this embodiment can execute the 3D illumination estimation method, the technical effects obtained by the electronic device can refer to the method embodiments described above, and are not described herein again.
For example, fig. 14 shows a schematic structural diagram of the electronic device 140 in the case of dividing each functional module by corresponding functions. The electronic device 140 includes: an acquisition module 1401, and an estimation module 1402.
An obtaining module 1401, configured to obtain image information of the first scene; the image information of the first scene comprises color information of pixel points in the first scene.
An obtaining module 1401, further configured to obtain a sky map corresponding to the first scene according to the image information of the first scene; wherein, the sky map corresponding to the first scene is used for indicating the illumination distribution of the first scene.
An estimating module 1402, configured to estimate spherical harmonic coefficients of the virtual object according to information of the sky plot, where the information of the sky plot includes the spherical harmonic coefficients of the sky plot.
Optionally, the estimating module 1402 is specifically configured to use the spherical harmonic coefficient of the sky map as the spherical harmonic coefficient of the virtual object.
Optionally, the obtaining module 1401 is further configured to obtain a position of the virtual object according to the user input; alternatively, the position of the virtual object is preset.
All relevant contents of the operations related to the method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
In the present embodiment, the electronic device 140 is presented in a form of dividing each functional module in an integrated manner. A "module" herein may refer to a particular ASIC, a circuit, a processor and memory that execute one or more software or firmware programs, an integrated logic circuit, and/or other device that provides the described functionality. In a simple embodiment, those skilled in the art will appreciate that the electronic device 140 may take the form shown in FIG. 1.
For example, the processor 110 in fig. 1 may cause the electronic device 140 to execute the 3D illumination estimation method in the above method embodiments by calling computer-executable instructions stored in the internal memory 121.
Illustratively, the functions/implementation procedures of the acquisition module 1401 and the estimation module 1402 in fig. 14 may be implemented by the processor 110 in fig. 1 calling computer-executable instructions stored in the internal memory 121.
Since the electronic device 140 provided in this embodiment can execute the 3D illumination estimation method, the technical effects obtained by the method can be obtained by referring to the method embodiments, and are not described herein again.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially or partially contributed to by the prior art, or all or part of the technical solutions may be embodied in the form of a software product, where the software product is stored in a storage medium and includes several instructions to enable a device (which may be a single chip, a chip, or the like) or a processor (processor) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. A method of three-dimensional 3D illumination estimation, the method comprising:
acquiring image information of a first scene; the image information of the first scene comprises color information of pixel points in the first scene and depth information of the pixel points in the first scene;
establishing a first simulated scene according to the image information of the first scene; wherein the first simulated scene comprises at least two illumination detection balls and a point cloud;
estimating the spherical harmonic coefficient of an illumination detection ball according to the information of the point cloud included in the first simulated scene; wherein the information of the point cloud comprises position information and irradiance information of points in the point cloud;
and estimating the spherical harmonic coefficient of the position of the virtual object according to the spherical harmonic coefficient of the illumination detection ball.
2. The 3D illumination estimation method according to claim 1, wherein for a first illumination detection sphere, estimating spherical harmonic coefficients of the first illumination detection sphere from information of a point cloud comprised by the first simulated scene comprises:
acquiring first irradiance of the first illumination detection ball according to the point cloud information;
estimating spherical harmonic coefficients of the first illumination detection sphere according to a first irradiance of the first illumination detection sphere;
the first illumination detection ball is any illumination detection ball included in the first simulated scene.
3. The 3D illumination estimation method according to claim 2, wherein the obtaining a first irradiance of the first illumination detection sphere from the information of the point cloud comprises: and acquiring first irradiance of the first illumination detection ball according to the information of the first effective point, wherein the first effective point is a point in the point cloud within the visible range of the first illumination detection ball.
4. The 3D illumination estimation method according to any one of claims 1 to 3, wherein the estimating spherical harmonic coefficients of the position of the virtual object according to the spherical harmonic coefficients of the illumination detection sphere comprises:
performing weighted summation on the spherical harmonic coefficients of the illumination detection spheres to obtain the spherical harmonic coefficients of the position of the virtual object.
5. The 3D illumination estimation method according to claim 4, wherein a distance between the illumination detection sphere and the virtual object is less than or equal to a first distance.
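One way to realize the weighted summation of claim 4, restricted per claim 5 to spheres within the first distance, is inverse-distance interpolation. The weighting function, the fallback to the nearest sphere, and the default threshold below are assumptions of this sketch; the claims do not fix a particular weight.

```python
import numpy as np

def interpolate_sh(probe_centers, probe_coeffs, object_pos, first_distance=3.0, eps=1e-6):
    """Weighted summation of probe SH coefficients at the virtual object position.

    probe_centers: (M, 3) sphere centers; probe_coeffs: (M, 9, 3) per-sphere SH
    coefficients. Only probes closer than `first_distance` contribute; weights
    fall off with inverse distance and are normalized to sum to one.
    """
    d = np.linalg.norm(probe_centers - object_pos, axis=1)
    near = d <= first_distance
    if not np.any(near):
        near = np.array([np.argmin(d)])               # fall back to the closest probe
    w = 1.0 / (d[near] + eps)
    w = w / w.sum()
    return np.tensordot(w, probe_coeffs[near], axes=1)  # (9, 3) SH at the object
```

The resulting nine RGB coefficients can then be supplied to a renderer as a diffuse environment-lighting term at the virtual object's position.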
6. The 3D illumination estimation method according to any one of claims 1 to 3 or claim 5, wherein the method further comprises: acquiring the position of the virtual object according to a user input; or, the position of the virtual object is preset.
7. An electronic device, wherein the electronic device comprises: an acquisition module, an establishing module, and an estimation module;
the acquisition module is configured to acquire image information of a first scene; wherein the image information of the first scene comprises color information of pixel points in the first scene and depth information of the pixel points in the first scene;
the establishing module is configured to establish a first simulated scene according to the image information of the first scene; wherein the first simulated scene comprises at least two illumination detection spheres and a point cloud;
the estimation module is configured to estimate a spherical harmonic coefficient of an illumination detection sphere according to information of the point cloud comprised in the first simulated scene; wherein the information of the point cloud comprises position information and irradiance information of points in the point cloud;
the estimation module is further configured to estimate a spherical harmonic coefficient of a position of a virtual object according to the spherical harmonic coefficient of the illumination detection sphere.
8. The electronic device according to claim 7, wherein:
the estimation module is specifically configured to acquire a first irradiance of a first illumination detection sphere according to the information of the point cloud;
the estimation module is further specifically configured to estimate a spherical harmonic coefficient of the first illumination detection sphere according to the first irradiance of the first illumination detection sphere;
wherein the first illumination detection sphere is any illumination detection sphere comprised in the first simulated scene.
9. The electronic device according to claim 8, wherein:
the acquisition module is specifically configured to acquire the first irradiance of the first illumination detection sphere according to information of a first effective point, wherein the first effective point is a point in the point cloud within a visible range of the first illumination detection sphere.
10. The electronic device according to any one of claims 7 to 9, wherein:
the estimation module is further specifically configured to perform weighted summation on the spherical harmonic coefficients of the illumination detection spheres to obtain the spherical harmonic coefficients of the position of the virtual object.
11. The electronic device according to claim 10, wherein a distance between the illumination detection sphere and the virtual object is less than or equal to a first distance.
12. The electronic device according to any one of claims 7 to 9 or claim 11, wherein:
the acquisition module is further configured to acquire the position of the virtual object according to a user input; or, the position of the virtual object is preset.
13. Circuitry applied to an electronic device, wherein the circuitry comprises:
at least one processor, wherein program instructions are executed in the at least one processor to perform the method according to any one of claims 1 to 6.
14. A computer storage medium having program instructions stored thereon, wherein the program instructions, when executed, cause the method according to any one of claims 1 to 6 to be performed.
CN201910586485.XA 2019-03-26 2019-07-01 3D illumination estimation method and electronic equipment Active CN110458902B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910234517 2019-03-26
CN201910234517X 2019-03-26

Publications (2)

Publication Number Publication Date
CN110458902A (en) 2019-11-15
CN110458902B (en) 2022-04-05

Family

ID=68481912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910586485.XA Active CN110458902B (en) 2019-03-26 2019-07-01 3D illumination estimation method and electronic equipment

Country Status (1)

Country Link
CN (1) CN110458902B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110910486B (en) * 2019-11-28 2021-11-19 浙江大学 Indoor scene illumination estimation model, method and device, storage medium and rendering method
CN115039137A (en) * 2020-01-30 2022-09-09 Oppo广东移动通信有限公司 Method for rendering virtual objects based on luminance estimation, method for training a neural network, and related product
CN113298588A (en) * 2020-06-19 2021-08-24 阿里巴巴集团控股有限公司 Method and device for providing object information and electronic equipment
CN114979457B (en) * 2021-02-26 2023-04-07 华为技术有限公司 Image processing method and related device
CN113537194A (en) * 2021-07-15 2021-10-22 Oppo广东移动通信有限公司 Illumination estimation method, illumination estimation device, storage medium, and electronic apparatus
CN114125310B (en) * 2022-01-26 2022-07-05 荣耀终端有限公司 Photographing method, terminal device and cloud server
CN115375827B (en) * 2022-07-21 2023-09-15 荣耀终端有限公司 Illumination estimation method and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268923A (en) * 2014-09-04 2015-01-07 无锡梵天信息技术股份有限公司 Illumination method based on picture level images
CN106133796A (en) * 2014-03-25 2016-11-16 Metaio有限公司 For representing the method and system of virtual objects in the view of true environment
CN108235053A (en) * 2016-12-19 2018-06-29 中国电信股份有限公司 Interactive rendering intent, equipment and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201016016A (en) * 2008-10-07 2010-04-16 Euclid Discoveries Llc Feature-based video compression

Similar Documents

Publication Publication Date Title
CN110458902B (en) 3D illumination estimation method and electronic equipment
CN113810601B (en) Terminal image processing method and device and terminal equipment
CN113038362A (en) Ultra-wideband positioning method and system
WO2021169515A1 (en) Method for data exchange between devices, and related device
CN112087649B (en) Equipment searching method and electronic equipment
CN111147667A (en) Screen-off control method and electronic equipment
CN112700377A (en) Image floodlight processing method and device and storage medium
CN112085647A (en) Face correction method and electronic equipment
CN114257920B (en) Audio playing method and system and electronic equipment
CN114880251A (en) Access method and access device of storage unit and terminal equipment
CN113518189B (en) Shooting method, shooting system, electronic equipment and storage medium
CN114610193A (en) Content sharing method, electronic device, and storage medium
CN113496477A (en) Screen detection method and electronic equipment
CN115701032A (en) Device control method, electronic device, and storage medium
CN115393676A (en) Gesture control optimization method and device, terminal and storage medium
CN113467735A (en) Image adjusting method, electronic device and storage medium
CN114332331A (en) Image processing method and device
CN114661258A (en) Adaptive display method, electronic device, and storage medium
CN113391735A (en) Display form adjusting method and device, electronic equipment and storage medium
CN114466238A (en) Frame demultiplexing method, electronic device and storage medium
CN113364067B (en) Charging precision calibration method and electronic equipment
WO2023030067A1 (en) Remote control method, remote control device and controlled device
WO2023005882A1 (en) Photographing method, photographing parameter training method, electronic device, and storage medium
CN113132532B (en) Ambient light intensity calibration method and device and electronic equipment
CN114363820A (en) Electronic equipment searching method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant