CN115499576A - Light source estimation method, device and system - Google Patents

Light source estimation method, device and system

Info

Publication number
CN115499576A
Authority
CN
China
Prior art keywords
light source
information
brightness
light
intensity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110977283.5A
Other languages
Chinese (zh)
Inventor
潘纲
陈泽昊
蒋磊
程捷
郑乾
唐华锦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of CN115499576A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The application provides a light source estimation method, device, and system, belonging to the technical field of machine vision. To address the low accuracy of light source information estimated from a single image, light intensity change information reflected by a target object in a static state is acquired through a dynamic vision sensor or another high-time-resolution vision sensor, and the information of the light source is estimated from this light intensity change information. Compared with estimating the light source information from a single image, the accuracy of the obtained light source information can be greatly improved.

Description

Light source estimation method, device and system
The present application claims priority to Chinese patent application No. 202110679232.4, entitled "A light source estimation method", filed with the Chinese Patent Office on 18 June 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of machine vision technologies, and in particular, to a method, an apparatus, and a system for estimating a light source.
Background
In application scenarios such as inserting a virtual object into an image obtained by shooting a real environment, the light source information in the real environment needs to be determined, and the virtual object rendered according to the determined light source information, so that the virtual object better fits the real environment.
At present, light source information is usually estimated by analyzing a single image, so the accuracy of the obtained light source information is often low, and a virtual object inserted into the image lacks a sense of realism.
Disclosure of Invention
The embodiment of the application provides a light source estimation method, a light source estimation device and a light source estimation system, which are used for improving the accuracy of estimated light source information.
In a first aspect, an embodiment of the present application provides a light source estimation method, in which a vision sensor is used to obtain light intensity change information reflected by a target object, and information of a light source is estimated according to the light intensity change information. The target object is in a static state, and the information of the light source may include position information of the light source or intensity information of the light source, or the information of the light source may include position information of the light source and intensity information of the light source. The vision sensor may comprise a dynamic vision sensor or other high time resolution vision sensor.
In the light source estimation method provided by the embodiment of the application, even though the target object is in a static state, the light intensity change information it reflects can represent how the brightness of its surface changes while the brightness of the light source changes. For example, at the instant the light source appears, the brightness of the target object's surface changes continuously as the brightness of the light source rises. The brightness change of the target object's surface can be captured by a high-time-resolution vision sensor, yielding the light intensity change information reflected by the target object. Estimating the information of the light source from this light intensity change information can greatly improve the accuracy of the obtained light source information, compared with estimating it from a single image acquired after the light source appears.
In an alternative embodiment, after the information of the light source is estimated from the light intensity change information, a virtual object may be added, according to the information of the light source, to a scene image captured of the scene to which the target object belongs. The scene image may be captured after the brightness of the light source is no longer changing, and it may or may not include the target object. The vision sensor that captures the scene image and the vision sensor that collects the light intensity change information of the target object may or may not be the same sensor. If they are not the same sensor, the shooting angles of the two vision sensors are the same or similar.
According to the embodiment of the application, the virtual object is added to the scene image according to the more accurate light source information obtained above, so that the virtual object better fits the real scene and the resulting visual effect is more realistic.
In an alternative embodiment, after the light intensity change information reflected by the target object is obtained by the vision sensor, the information of the light source may be estimated according to the light intensity change information and the brightness change curve of the light source, where the brightness change curve is preset according to the type of the light source. In practice, due to factors such as the material and response time of the light source, the light source takes a period of time to reach full brightness after being turned on; this period can be described as the brightness rise time. The brightness change curve of the light source during the brightness rise time depends on the type of the light source: different types of light sources have different materials, response times, and so on, and therefore different brightness change curves. Brightness change curves can be pre-established for different types of light sources, for example, separate curves for incandescent lamps, fluorescent lamps, light-emitting diodes, and so on. By referring to a pre-established brightness change curve of the light source, the light source information can be estimated quickly and accurately.
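The patent does not specify the functional form of the preset brightness change curves; as an illustration only, the following Python sketch assumes a simple exponential rise whose time constant differs per light source type (both the functional form and the constants are assumptions).

```python
import numpy as np

# Hypothetical brightness rise time constants, one per light source type.
# The exponential form and the values below are illustrative assumptions;
# the patent only states that a curve is preset for each type of light source.
RISE_TIME_CONSTANTS_S = {
    "incandescent": 0.10,   # filament heats up relatively slowly
    "fluorescent": 0.50,    # tube needs time to reach full output
    "led": 0.001,           # near-instant turn-on
}

def brightness_curve(light_type: str, t: np.ndarray) -> np.ndarray:
    """Normalized brightness (0..1) of the light source at times t (seconds)
    after turn-on, modeled here as a simple exponential rise."""
    tau = RISE_TIME_CONSTANTS_S[light_type]
    return 1.0 - np.exp(-t / tau)

t = np.linspace(0.0, 1.0, 1000)          # 1 ms sampling, matching a high-time-resolution sensor
curve_led = brightness_curve("led", t)   # preset curve looked up for an LED light source
```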
In an alternative embodiment, the position information of the light source may comprise direction information and distance information of the light source. When estimating the information of the light source, the direction information of the light source can be estimated according to the light intensity change information, and the distance information and/or the intensity information of the light source can be estimated according to the light intensity change information and the brightness change curve of the light source. The direction information of the light source may or may not be estimated based on the brightness change curve of the light source.
In an alternative embodiment, the position information of the light source may comprise direction information and distance information of the light source. When estimating the information of the light source, the direction information of the light source can be estimated according to the light intensity change information and the brightness change curve of the light source, and the light intensity change information can be processed by a neural network model to obtain the distance information and/or the intensity information of the light source. For example, in one embodiment, the light intensity change information may be processed by a neural network model to obtain the distance information of the light source, where the neural network model is trained with sample luminance data as input and the light source distance corresponding to the sample luminance data as output. In another embodiment, the light intensity change information may be processed by a neural network model to obtain the intensity information of the light source, where the neural network model is trained with sample luminance data as input and the light source intensity corresponding to the sample luminance data as output, the light source intensity being the light intensity parameter in the brightness change curve. In yet another embodiment, the light intensity change information may be processed by a neural network model to obtain both the distance information and the intensity information of the light source, where the neural network model is trained with sample luminance data as input and the corresponding light source distance and light source intensity as output.
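The patent does not disclose the network architecture. The following is a minimal sketch (in PyTorch) assuming a small fully connected network that maps a fixed-length vector of brightness samples to the light source distance and intensity; the layer sizes, loss, optimizer, and input length are illustrative assumptions, not the patented model.

```python
import torch
from torch import nn

# Minimal sketch (architecture is an assumption): the input is a fixed-length
# vector of brightness samples reflected by the target object during the
# light source's brightness rise; the output is [distance, intensity].
N_SAMPLES = 256

model = nn.Sequential(
    nn.Linear(N_SAMPLES, 128),
    nn.ReLU(),
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 2),          # two outputs: light source distance and intensity
)

def train_step(model, brightness_batch, target_batch, optimizer, loss_fn=nn.MSELoss()):
    """One training step: sample brightness data as input, the known light
    source distance and intensity for those samples as the regression target."""
    optimizer.zero_grad()
    pred = model(brightness_batch)
    loss = loss_fn(pred, target_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Example with random placeholder data (real training would use sample
# brightness data labelled with the corresponding light source parameters):
x = torch.randn(32, N_SAMPLES)
y = torch.randn(32, 2)
train_step(model, x, y, optimizer)
```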
In a specific embodiment, the direction information of the light source can be estimated according to the light intensity change information and the brightness change curve of the light source, and then the light intensity change information is processed through the neural network model to obtain the distance information and/or the intensity information of the light source; or processing the light intensity change information through a neural network model to obtain distance information and/or intensity information of the light source, and estimating direction information of the light source according to the light intensity change information and a brightness change curve of the light source.
In an alternative embodiment, the position information of the light source may include direction information and distance information of the light source. When estimating the information of the light source, the direction information of the light source can be estimated according to the light intensity change information and the brightness change curve of the light source; and then estimating distance information or intensity information of the light source according to the direction information, the light intensity change information and the brightness change curve of the light source, or estimating the distance information and the intensity information of the light source according to the direction information, the light intensity change information and the brightness change curve of the light source.
By any of the above methods, the information of the light source can be estimated more accurately.
Alternatively, when the distance information and the intensity information of the light source are estimated according to the direction information, the light intensity change information, and the brightness change curve of the light source, an initial light source distance and an initial light source intensity input by the user may be received. A brightness change model to be matched is determined according to the direction information of the light source, the initial light source distance, and the initial light source intensity. For example, a brightness change curve for the initial light source intensity may be obtained from the preset brightness change curves, and the brightness change model to be matched determined according to the direction information of the light source, that brightness change curve, and the initial light source distance. According to the light intensity change information reflected by the target object and the brightness change model to be matched, the difference between the brightness change amplitude of the light intensity change information and that of the brightness change model is determined. According to the obtained difference, the light source distance and light source intensity corresponding to the brightness change model are updated by gradient descent to obtain a new brightness change model to be matched, and the difference between the brightness change amplitude of the light intensity change information and that of the new brightness change model is determined in the same way. This iterative process is repeated so that the difference obtained in each iteration gradually decreases, until the difference is smaller than a set expected value or the number of iterations reaches a preset value. The distance information and the intensity information of the light source are then estimated from the light source distance and light source intensity corresponding to the brightness change model to be matched in the last iteration.
Estimating the distance information and the intensity information of the light source through this iterative process can greatly reduce the amount of computation, save computing resources, and obtain the distance and intensity information of the light source quickly.
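A minimal numerical sketch of the iteration described above. The brightness change model used here is an assumed simplification (intensity divided by squared distance, scaled by the preset rise curve); in the patent the model to be matched is also built from the direction information and a full light propagation model, which this toy form does not capture, so here only the ratio intensity/distance² is truly identifiable. The learning rate, stopping criteria, and example values are assumptions.

```python
import numpy as np

def model_brightness(distance, intensity, rise_curve):
    """Brightness change model to be matched: an assumed simplified form in
    which the observed brightness follows the preset rise curve of the light
    source, scaled by its intensity and attenuated with the squared distance."""
    return intensity * rise_curve / distance ** 2

def fit_distance_intensity(observed, rise_curve, d0, i0,
                           lr=1e-3, max_iters=2000, tol=1e-6):
    """Update (distance, intensity) by gradient descent until the difference
    between the observed brightness change and the model's brightness change
    is below tol, or the iteration count reaches max_iters."""
    d, i = float(d0), float(i0)
    for _ in range(max_iters):
        residual = model_brightness(d, i, rise_curve) - observed
        diff = np.mean(residual ** 2)   # difference between the two brightness change amplitudes
        if diff < tol:
            break
        # Analytic gradients of the mean squared difference w.r.t. i and d.
        grad_i = np.mean(2.0 * residual * rise_curve / d ** 2)
        grad_d = np.mean(2.0 * residual * (-2.0) * i * rise_curve / d ** 3)
        i -= lr * grad_i
        d -= lr * grad_d
    return d, i

# Usage sketch: d0 and i0 are the initial distance and intensity input by the user.
t = np.linspace(0.0, 0.5, 500)
rise = 1.0 - np.exp(-t / 0.1)                    # preset brightness rise curve (assumed form)
observed = model_brightness(1.5, 30.0, rise)     # synthetic "observed" data for illustration
d_est, i_est = fit_distance_intensity(observed, rise, d0=1.0, i0=20.0)
```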
In a second aspect, an embodiment of the present application further provides a light source estimation apparatus. The apparatus includes corresponding functional modules for implementing the steps of the foregoing method; for details, reference may be made to the detailed description in the method examples, which is not repeated here. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions. For example, the light source estimation apparatus includes an information acquisition unit and a light source estimation unit. The information acquisition unit is configured to acquire, through a vision sensor, light intensity change information reflected by a target object, where the target object is in a static state; the light source estimation unit is configured to estimate the information of the light source according to the light intensity change information, where the information of the light source includes position information of the light source and/or intensity information of the light source.
In a third aspect, an embodiment of the present application provides a light source estimation system, including a vision sensor and a processor; the vision sensor is used for acquiring light intensity change information reflected by a target object, wherein the target object is in a static state; a processor connected to the vision sensor and configured to perform the first aspect or any one of the methods of the first aspect. Specifically, the processor obtains light intensity variation information reflected by the target object and collected by the vision sensor, and performs the first aspect or any one of the methods of the first aspect on the light intensity variation information.
In a fourth aspect, the present application provides a computer-readable storage medium, in which a computer program or instructions are stored, which, when executed by an electronic device, cause the electronic device to perform the method of the first aspect or any possible implementation manner of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising a computer program or instructions for implementing the method of the first aspect or any possible implementation manner of the first aspect when the computer program or instructions are executed by an electronic device.
For technical effects that can be achieved by any one of the second aspect to the fifth aspect, reference may be made to the description of the advantageous effects in the first aspect, and details are not repeated here.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of an electronic device in an embodiment of the present application;
fig. 2 is a flowchart of an example of a light source estimation method provided in an embodiment of the present application;
fig. 3 is a flowchart of another example of a light source estimation method provided in an embodiment of the present application;
fig. 4 is a schematic diagram of an example of a scene image shot for a real scene according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an example of a neural network model provided by an embodiment of the present application;
fig. 6 is a flowchart of another example of a light source estimation method provided in the embodiment of the present application;
FIG. 7 is a diagram illustrating an example of an image resulting from adding virtual objects to an image of a scene;
FIG. 8 is a schematic diagram of another example of an image resulting from adding virtual objects to an image of a scene;
FIG. 9 is a diagram illustrating an example of adding a virtual object to an image of a scene according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an example of a light source estimation device provided in an embodiment of the present application;
fig. 11 is a schematic structural diagram of another example of a light source estimation device provided in an embodiment of the present application;
fig. 12 is a schematic structural diagram of an example of a light source estimation system provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the embodiments of the present application will be described in detail below with reference to the accompanying drawings. The terminology used in the description of the embodiments section of the present application is for the purpose of describing particular embodiments of the present application only and is not intended to be limiting of the present application. It should be apparent that the described embodiments are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Before describing the embodiments provided in the present application, some terms in the present application are generally explained so as to facilitate the understanding of those skilled in the art, and the terms in the present application are not limited thereto.
(1) High-time-resolution vision sensor: a vision sensor that acquires data more than a set number of times per unit time (for example, the set number may be 1000 times per second), and may include a high-speed camera, a high-speed video camera, an ultra-high-speed video camera, a dynamic vision sensor, and the like.
The working principle of high-speed cameras, high-speed video cameras, and ultra-high-speed video cameras is the same as that of a traditional camera: optical signals are converted into electrical signals. When shooting an object with a camera, the light reflected by the object is collected by the camera lens and focused on the light-receiving surface of the image pickup device; the image pickup device converts the optical signal into an electrical signal, which is amplified, filtered, and adjusted before being recorded as an image on a recording medium such as a video recorder. High-speed cameras and high-speed video cameras can acquire images at 1000-10000 frames per second, i.e., they can acquire data more than 1000 times per second. An ultra-high-speed video camera can acquire data one million times per second or more.
(2) Dynamic vision sensor (DVS): also referred to as an event camera, event-driven camera, or event camera sensor, a new type of camera that has emerged in recent years. Unlike a traditional camera, which captures complete images, an event camera collects event data: the brightness change of a single pixel in the real scene is treated as an event. The event camera captures brightness changes in the scene in an event-driven manner; when the brightness of an object surface in the real scene changes, the event camera generates a spatio-temporal data stream consisting of a series of events.
Compared with a traditional camera, an event camera has high time resolution, a large dynamic range, and low latency, and can acquire data more than ten thousand times per second.
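Each event carries a timestamp, a pixel address, and a polarity, as described above. A minimal sketch of how such an event stream might be represented in Python (the field names are illustrative, not a real DVS driver API):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    """One DVS event: the brightness of a single pixel in the scene changed.
    Field names are illustrative."""
    timestamp_us: int   # time the event was generated, in microseconds
    x: int              # column of the pixel in the sensor array
    y: int              # row of the pixel in the sensor array
    polarity: int       # +1 for a dark-to-light change, -1 for light-to-dark

# The spatio-temporal data stream produced by the sensor is simply a
# time-ordered sequence of such events.
EventStream = List[Event]
```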
(3) Bidirectional Reflectance Distribution Function (BRDF) model: a light propagation model used to represent how the brightness of the surface of a set target object changes during the brightness rise of reference light sources with different directions, distances, and intensities. Reference light sources in different directions have different directions relative to the target object; reference light sources at different distances are at different distances from the target object. A light propagation model can be established for each reference light source with a set direction, set distance, and set intensity. The light propagation model can be generated from the brightness change curve corresponding to a reference light source of set intensity, the set distance, the set direction, the reflection coefficient of the target object's surface, and the normal vector of the plane at each position point on the target object's surface.
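As an illustration of how such a light propagation model might be evaluated, the following sketch assumes a purely diffuse (Lambertian) surface, which is a simplification of a full BRDF; the function and parameter names are illustrative.

```python
import numpy as np

def predicted_surface_brightness(light_dir, light_distance, light_intensity,
                                 rise_curve, albedo, normals):
    """Predicted brightness of each surface point of the target object over
    time, under a single reference light source.
    Assumed simplification: a purely diffuse (Lambertian) surface, so the
    reflected brightness is albedo * max(0, n.l) * intensity / distance^2,
    scaled by the light source's preset brightness rise curve.
    light_dir:   unit vector from the surface towards the light, shape (3,)
    rise_curve:  normalized light source brightness over time, shape (T,)
    albedo:      per-point reflection coefficient, shape (P,)
    normals:     per-point unit normal vectors, shape (P, 3)
    returns:     brightness, shape (P, T)
    """
    cos_term = np.clip(normals @ light_dir, 0.0, None)           # (P,)
    gain = albedo * cos_term * light_intensity / light_distance ** 2
    return np.outer(gain, rise_curve)                            # (P, T)
```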
(4) Diffuse reflection: the phenomenon in which light projected onto a rough surface is reflected in many directions. When a parallel beam of incident light strikes the surface of a rough object, the surface reflects the light in all directions: although the incident rays are parallel to each other, the normal directions at different points on the rough surface are not consistent, so the reflected rays scatter in different directions. This reflection process is called diffuse reflection. Diffuse reflection is the counterpart of specular reflection: when a parallel beam of incident light strikes a smooth object surface, the rays are reflected regularly and the reflected rays remain parallel to each other; this regular reflection is called specular reflection.
In the embodiment of the application, in order to collect information about the reflected light emitted in different directions and determine the position information and intensity information of the light source more accurately, an object with diffuse reflection characteristics, i.e., an object with an uneven surface, can be selected as the target object. Many everyday objects, such as walls and clothes, appear smooth, but under a magnifying glass their surfaces are also uneven, and they diffusely reflect essentially parallel incident light in different directions. The vision sensor collects the light signals of the reflected light emitted in different directions and converts the collected light signals into brightness data.
The "plurality" in the embodiments of the present application means two or more, and in view of this, the "plurality" in the embodiments of the present application may also be understood as "at least two". "at least one" is to be understood as meaning one or more, for example one, two or more. For example, including at least one means including one, two, or more, and does not limit which ones are included, for example, including at least one of a, B, and C, then including may be a, B, C, a and B, a and C, B and C, or a and B and C. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" generally indicates that the preceding and succeeding related objects are in an "or" relationship, unless otherwise specified.
Unless stated to the contrary, the embodiments of the present application refer to the ordinal numbers "first", "second", etc., for distinguishing between a plurality of objects, and do not limit the sequence, timing, priority, or importance of the plurality of objects.
When inserting a virtual object into a scene image obtained by shooting a real environment, in order to make the virtual object better fit the real environment, it is usually necessary to determine the light source information in the real environment and render the virtual object according to the determined light source information. Similarly, when switching the background behind a portrait, for example replacing the background behind the portrait in a first photo with the background in a second photo, it is necessary to determine the light source information in the real environment corresponding to the second photo and then relight the portrait according to the determined light source information.
Currently, light source information is typically determined by analyzing a single image. The light intensity decays with distance according to the law that the light intensity of incident light is inversely proportional to the square of the distance traveled by the light. For the same light source, the farther the light source is from the target object, the weaker the light intensity of the incident light irradiated to the surface of the target object, and the closer the light source is from the target object, the stronger the light intensity of the incident light irradiated to the surface of the target object. Therefore, the light intensity of the incident light on the surface of the target object may be the same for the short-distance weak light source and the long-distance strong light source, wherein the short-distance weak light source refers to a light source with a short distance to the target object but a low intensity, and the long-distance strong light source refers to a light source with a long distance to the target object but a high intensity.
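A short numeric illustration of this ambiguity under the inverse-square law E = I / d² (the numbers are purely illustrative):

```python
# Incident light intensity on the target object surface, E = I / d^2 (illustrative numbers).
near_weak  = 100 / 2 ** 2   # weak light source (intensity 100) at a short distance (2)  -> 25.0
far_strong = 400 / 4 ** 2   # strong light source (intensity 400) at a long distance (4) -> 25.0
assert near_weak == far_strong   # a single image cannot tell these two cases apart
```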
Because the method of determining light source information by analyzing a single image often cannot distinguish a short-distance weak light source from a long-distance strong light source, the light source information cannot be accurately determined; as a result, a virtual object inserted into the image looks unrealistic, or a portrait does not fit its new background after background switching.
In order to more accurately determine light source information in a scene image, embodiments of the present application provide a light source estimation method. The light source estimation method can be applied to application scenes such as inserting virtual objects into scene images or switching different backgrounds for human images. For example, when a game background or a game picture is made for a game, a virtual-real combination mode may be adopted, that is, a real scene is photographed to obtain a scene image, and then a virtual object required in the game is inserted into the scene image to obtain a final required game background or game picture. The game background or the game picture can be displayed to the game player or the user through a display screen of an electronic device such as a mobile phone or a computer, and can also be displayed to the game player or the user through an AR (Augmented Reality) technology. When inserting the virtual object into the scene image, the light source estimation method provided by the embodiment of the present application may be adopted first.
During the brightness change of the light source, the brightness change of the target object's surface is captured by the vision sensor, yielding the light intensity change information reflected by the target object. From this light intensity change information, the light source information in the real scene corresponding to the scene image to which the target object belongs is determined, and the virtual object is then inserted into the real scene according to the light source information, so that the virtual object better fits the real environment and the resulting visual effect is more realistic. Especially in AR game scenes, this can give game players a more realistic, immersive gaming experience.
The light source estimation method provided by the embodiment of the application can be executed by an electronic device equipped with a vision sensor or an electronic device connected to a vision sensor. The electronic device may be a device providing video and/or data connectivity for a user, a handheld device with wireless connectivity, or another processing device connected to a wireless modem, such as: a mobile phone (or "cellular" phone), a smartphone (which may be a portable, pocket-sized, hand-held, or wearable device), a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a vehicle-mounted computer, a drone, an aerial device, a computer, a server, and so on. The computer can be connected to the vision sensor in a wired or wireless manner, receive the light intensity change information reflected by the target object and collected by the vision sensor, and process the light intensity change information.
Fig. 1 schematically illustrates an alternative hardware structure of an electronic device 100 to which the embodiment of the present application is applicable.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, a communication module 150, a sensor module 160, a dynamic vision sensor 170, a camera 180, and a display screen 190. The camera 180 may be a camera of a conventional camera, and the dynamic vision sensor 170 and the camera 180 are both vision sensors. In some embodiments, the dynamic vision sensor 170 may be implemented as a sensor in the sensor module 160, and in other embodiments, the dynamic vision sensor 170 may be independent of the sensor module 160. Similarly, in some embodiments, the camera 180 may act as a sensor in the sensor module 160, and in other embodiments, the camera 180 may be independent of the sensor module 160.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. Wherein, the different processing units may be independent devices or may be integrated in one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus including a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, the processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor, the charger, the flash, the camera 180, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the camera 180 through an I2C interface, so that the processor 110 and the camera 180 communicate through an I2C bus interface, thereby implementing a shooting function of the electronic device 100.
The MIPI interface may be used to connect the processor 110 with peripheral devices such as the display screen 190, the camera 180, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 180 communicate over a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 190 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 180, the display screen 190, the communication module 150, the sensor module 160, and the like. The GPIO interface may also be configured as an I2C interface, MIPI interface, or the like.
The USB interface is an interface which accords with the USB standard specification, and specifically can be a Mini USB interface, a Micro USB interface, a USB Type C interface and the like. The USB interface may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. The method can also be used for connecting earphones and other electronic devices such as AR devices.
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an illustration, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive a charging input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives an input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display 190, the camera 180, the communication module 150, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may also be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by an antenna, the communication module 150, a modem processor, a baseband processor, and the like. The communication module 150 may provide a solution for mobile communication or wireless communication applied to the electronic device 100. In some embodiments, at least some of the functional modules of the communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional blocks of the communication module 150 may be provided in the same device as at least some of the blocks of the processor 110.
The sensor module 160 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like. Illustratively, a temperature sensor is used to detect temperature. In some embodiments, the electronic device 100 implements a temperature processing strategy using the temperature detected by the temperature sensor. For example, when the temperature reported by the temperature sensor exceeds the threshold, the electronic device 100 performs a reduction in performance of a processor located near the temperature sensor, so as to reduce power consumption and implement thermal protection. In other embodiments, electronic device 100 heats the battery when the temperature is below another threshold to avoid an abnormal shutdown of electronic device 100 due to low temperatures. In other embodiments, electronic device 100 performs a boost on the output voltage of the battery when the temperature is below a further threshold to avoid abnormal shutdown due to low temperature.
Touch sensors, also known as "touch devices". The touch sensor may be disposed on the display screen 190, and the touch sensor and the display screen 190 form a touch screen, which is also called a "touch screen". The touch sensor is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 190. In other embodiments, the touch sensor may be disposed on a surface of the electronic device 100 at a different location than the display screen 190.
The dynamic vision sensor 170 may capture brightness changes in the scene, and when the brightness of the surface of an object in the real scene changes, the dynamic vision sensor 170 generates a spatiotemporal data stream of a series of events. The dynamic vision sensor 170 may also capture dynamic changes in the scene, generating a spatiotemporal data stream. The electronic device 100 processes the spatiotemporal data stream output by the dynamic vision sensor 170 through the processor 110, so as to realize tracking of a moving object, gesture recognition, and the like. For example, when the user unlocks the electronic device through the setup gesture, the dynamic vision sensor 170 may be used to capture a gesture change of the user, output event data, and the processor 110 may be used to determine the gesture of the user based on the event data output by the dynamic vision sensor 170, and display a desktop of the electronic device if the gesture used by the user coincides with the setup gesture for unlocking the electronic device.
The dynamic vision sensor 170 is a neuromorphic device that mimics the mechanisms of the human retina. Each pixel in the dynamic vision sensor 170 monitors the relative change in light intensity of a particular region. If the change exceeds a predefined threshold, the pixel emits a signal, i.e., an event is generated. Unlike conventional vision sensors that record absolute light intensity values, the dynamic vision sensor 170 only records changes in light intensity: a light intensity change ΔL(X_k, t_k) smaller than the threshold C does not produce a signal. Each event has a timestamp t_k (the event generation time), an address X_k (the location of the corresponding pixel in the sensor), and a polarity p_k (the change in light perceived by the pixel, typically from dark to light or from light to dark, which can be represented by 1 and -1, respectively). The event generation principle can be described as:
ΔL(X_k, t_k) = L(X_k, t_k) − L(X_k, t_k − Δt_k)
|ΔL(X_k, t_k)| ≥ C  ⟹  an event is generated, with ΔL(X_k, t_k) = p_k · C
p_k ∈ {+1, −1}
where Δt_k is the time interval between two adjacent data acquisitions of the dynamic vision sensor 170; taking t_k as the current time, t_k − Δt_k is the previous time. L(X_k, t_k) is the light intensity value of the pixel at position X_k at the current time, and L(X_k, t_k − Δt_k) is the light intensity value of that pixel at the previous time. ΔL(X_k, t_k) is the change in the light intensity value of the pixel at position X_k at the current time. If the pixel's light intensity changes from dark to light and the change exceeds the threshold C, then ΔL(X_k, t_k) = C; if the pixel's light intensity changes from light to dark and the change exceeds the threshold C, then ΔL(X_k, t_k) = −C.
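A minimal per-pixel sketch of the event rule above (the threshold value and names are illustrative):

```python
def maybe_generate_event(l_prev, l_curr, x_k, t_k, c=0.2):
    """Apply the DVS event rule to one pixel: compare the light intensity at
    the current time with the intensity at the previous acquisition and emit
    (t_k, X_k, p_k) only if the change reaches the threshold C.
    c is an illustrative threshold value."""
    delta_l = l_curr - l_prev                # ΔL(X_k, t_k)
    if abs(delta_l) < c:
        return None                          # change below threshold: no signal
    p_k = 1 if delta_l > 0 else -1           # dark-to-light -> +1, light-to-dark -> -1
    return (t_k, x_k, p_k)
```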
Illustratively, the dynamic vision sensor 170 may include a plurality of light sensors and an event generator coupled to the light sensors for sensing dynamic changes in brightness in the scene. The plurality of light sensors are arranged in a matrix of rows and columns, and each light sensor is associated with a row value and a column value. Taking one of the photo sensors as an example, the photo sensor includes a photodiode connected in series with a resistor between a source voltage and a ground voltage. The voltage across the photodiode is proportional to the intensity of light (i.e., brightness) incident on the photosensor.
The light sensor includes a first capacitor in parallel with a photodiode. Thus, the voltage on the first capacitor is the same as the voltage on the photodiode, proportional to the intensity of the light detected by the light sensor. The light sensor also includes a switch coupled between the first capacitor and the second capacitor. The second capacitor is coupled between the switch and a ground voltage. Thus, when the switch is closed, the voltage on the second capacitor is the same as the voltage on the first capacitor, proportional to the intensity of the light detected by the light sensor. When the switch is open, the voltage across the second capacitor is fixed at the voltage across the second capacitor when the switch was last closed.
The voltage on the first capacitor and the voltage on the second capacitor are fed to a comparator. When the difference between the two voltages is smaller than a threshold amount, the comparator's output voltage does not change. When the voltage on the first capacitor exceeds the voltage on the second capacitor by at least the threshold amount, the comparator outputs a rising voltage; when the voltage on the first capacitor is lower than the voltage on the second capacitor by at least the threshold amount, the comparator outputs a falling voltage. When the comparator's output voltage does not change, the event generator does nothing, since the brightness of the scene region observed by that light sensor has not changed. When the comparator outputs a rising or falling voltage, the event generator receives the comparator's output signal and generates a corresponding event by combining the current time with the row value and column value associated with the light sensor.
In this embodiment, the electronic device 100 may collect light intensity change information reflected by the target object during the brightness change process of the light source through the dynamic vision sensor 170, and process the light intensity change information reflected by the target object through the processor or the image processor to estimate the information of the light source.
The electronic device 100 implements a display function through a Graphics Processing Unit (GPU), a display screen 190, and an application processor. The GPU is a microprocessor for image processing, and is connected to a display screen 190 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 190 is used to display images, videos, and the like. The display screen 190 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N1 display screens 190, where N1 is a positive integer greater than 1.
The electronic apparatus 100 may implement a photographing function through an Image Signal Processing unit (ISP), a camera 180, a GPU, a display screen 190, and an application processor.
The ISP is used to process the data fed back by the camera 180. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 180.
The camera 180 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the processor 110 may trigger the activation of the camera 180 according to a program or an instruction in the internal memory 121, so that the camera 180 acquires at least one image, and performs corresponding processing on the at least one image according to the program or the instruction, such as rotation blurring removal of the image, translation blurring removal of the image, demosaicing, denoising or enhancing processing, image post-processing, and the like. After processing, the processed image may be displayed by the display screen 190.
In some embodiments, the electronic device 100 may include 1 or N2 cameras 180, with N2 being a positive integer greater than 1. For example, the electronic device 100 may include at least one front-facing camera and at least one rear-facing camera. For example, the electronic device 100 may also include a side camera. In one possible implementation, the electronic device may include 2 rear cameras, e.g., a main camera and a tele camera; alternatively, the electronic device may include 3 rear cameras, e.g., a main camera, a wide camera, and a tele camera; alternatively, the electronic device may include 4 rear cameras, e.g., a main camera, a wide camera, a tele camera, and a mid camera.
In this embodiment of the application, after the process of changing the brightness of the light source is finished, the electronic device 100 may capture a scene image of a scene to which the target object belongs through the camera 180, and add a virtual object to the captured scene image through the processor 110 according to information of the light source.
In some embodiments, the electronic device 100 may not include the dynamic vision sensor 170. The camera 180 of the electronic device 100 may employ a high-time-resolution vision sensor such as the above-mentioned high-speed camera, high-speed video camera, ultra-high-speed video camera, or the like. In the process of changing the brightness of the light source, the electronic device 100 may collect light intensity change information reflected by the target object through the camera 180, and process the light intensity change information reflected by the target object through the processor or the image processor to estimate information of the light source. After the process of changing the brightness of the light source is finished, the camera 180 may capture a scene image of a scene to which the target object belongs, and the processor 110 may add a virtual object to the captured scene image according to the information of the light source.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. Wherein the storage program area may store an operating system, an application program (such as a camera application) required for at least one function, and the like. The storage data area may store data created during the use of the electronic device 100 (such as images of a scene captured by a camera), and the like. In addition, the internal memory 121 may include a high speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a Universal Flash Storage (UFS), and the like. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor. The internal memory 121 may further store corresponding data of the neural network model provided in the embodiment of the present application. The internal memory 121 may further store a code for performing corresponding processing on the light intensity variation information output from the dynamic vision sensor 170. When the code stored in the internal memory 121 for performing the corresponding processing on the light intensity variation information is executed by the processor 110, the information of the light source may be estimated based on the light intensity variation information. Of course, the corresponding data of the neural network model provided in the embodiment of the present application, and the code for performing corresponding processing on the light intensity variation information may also be stored in the external memory. In this case, the processor 110 may execute the code stored in the external memory for performing the corresponding processing on the light intensity variation information through the external memory interface 120 to realize the function of estimating the information of the light source according to the light intensity variation information.
The electronic device 100 may also include keys, including, for example, a power on key, a volume key, and the like. The keys may be mechanical keys. Or may be touch keys. The electronic apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 100.
In the brightness change process of the light source, the light intensity change information reflected by the target object is collected through the vision sensor, and the information of the light source is estimated according to the light intensity change information reflected by the target object. The information of the light source may include position information of the light source or intensity information of the light source.
The light intensity change information reflected by the target object includes multiple pieces of brightness data. Because the brightness rise of the light source from dark to bright lasts a very short time, a vision sensor with low time resolution cannot acquire multiple pieces of brightness data at the instant the light source appears. Therefore, the embodiment of the application adopts a high-time-resolution vision sensor, which can capture the brightness change process of the target object's surface at the instant the light source appears and acquire data multiple times during this process, obtaining multiple pieces of brightness data. These data represent how the brightness of the target object's surface changes while the brightness of the light source rises. Estimating the light source information from them can greatly improve the accuracy of the obtained light source information, compared with determining the light source information from a single image collected after the light source has appeared.
The light source estimation method provided by the embodiments of the present application is described in detail below with reference to the accompanying drawings and specific embodiments. Fig. 2 shows a flowchart of a light source estimation method according to an embodiment of the present application. As shown in fig. 2, the method may include the following steps:
S201, obtaining light intensity change information reflected by the target object through a vision sensor.
Wherein the target object is in a stationary state. In the process of changing the brightness of the light source, the light intensity change information generated by reflecting light emitted by the light source by the target object is obtained through the vision sensor.
In one embodiment, a dynamic vision sensor may be used to obtain information about the intensity of light reflected from the target object. For example, during the brightness change of the light source, scene brightness data of the set scene is collected through the dynamic vision sensor, and the scene brightness data is used for representing the brightness change condition of the surfaces of various objects in the set scene, which is influenced by the light source.
The target object may be any one of the respective objects included in the setting scene. The position of the target object in the set scene is fixed, and the coordinate position of the target object can be marked in advance. Each scene brightness data collected by the dynamic vision sensor comprises the coordinate position of the pixel point, and the brightness data of the set coordinate position is extracted from the obtained scene brightness data. The set coordinate position is a coordinate position or area corresponding to the target object, and the brightness data of the set coordinate position is used for representing the brightness change condition of the surface of the target object in the set scene. And obtaining the light intensity change information reflected by the target object according to the brightness data of the set coordinate position.
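By way of a non-limiting illustration, the sketch below filters the scene brightness data down to the pre-marked coordinate area of the target object. It assumes the dynamic vision sensor output is available as a structured array of events with x, y, timestamp and polarity fields; the field names and the rectangular region format are assumptions made for illustration only.

```python
import numpy as np

def extract_target_brightness(events, region):
    """Keep only the brightness data whose pixel coordinates fall inside the
    pre-marked coordinate area of the target object.

    events: structured NumPy array with fields 'x', 'y', 't', 'p'
            (pixel column, pixel row, timestamp, polarity).
    region: (x_min, x_max, y_min, y_max) of the target object, marked in
            advance for the fixed scene.
    """
    x_min, x_max, y_min, y_max = region
    mask = ((events['x'] >= x_min) & (events['x'] <= x_max) &
            (events['y'] >= y_min) & (events['y'] <= y_max))
    return events[mask]  # light intensity change information reflected by the target object
```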
In other embodiments, a high-speed camera or another high-time-resolution vision sensor may be used to obtain the light intensity variation information reflected by the target object. For example, during the brightness change of the light source, a plurality of images of the set scene are acquired through a high-speed camera. Under the influence of the light source, the brightness of the surface of each object in the set scene differs from image to image, which is reflected in the different brightness of the pixel points corresponding to the same object across the images.
The target object may be any one of the respective objects included in the setting scene. The position of the target object in the set scene is fixed, and the coordinate position of the target object in the picture shot by the high-speed camera can be marked in advance. The shooting angle of the high-speed camera is unchanged, a plurality of images of a set scene are collected in the process of changing the brightness of a light source, a target image area of a set coordinate position is extracted from each image, and the set coordinate position is the coordinate position corresponding to a target object. And comparing the pixel values of corresponding pixel points of the target image areas in every two adjacent images to obtain the light intensity change information reflected by the target object.
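A minimal sketch of this high-speed-camera variant is given below, under the assumption that the captured frames are available as grayscale arrays taken from a fixed viewpoint; the region format and data types are illustrative only.

```python
import numpy as np

def intensity_changes_from_frames(frames, region):
    """Compare corresponding pixels of the target image area in every two
    adjacent frames to obtain the light intensity change information.

    frames: sequence of grayscale images (H, W) taken from the same angle
            during the brightness change of the light source.
    region: (x_min, x_max, y_min, y_max) of the target object in the frame.
    """
    x_min, x_max, y_min, y_max = region
    crops = [np.asarray(f, dtype=np.float32)[y_min:y_max, x_min:x_max] for f in frames]
    # per-pixel brightness difference between each pair of adjacent frames
    return [curr - prev for prev, curr in zip(crops[:-1], crops[1:])]
```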
And S202, estimating information of the light source according to the light intensity change information.
The information of the light source may include position information of the light source or intensity information of the light source; alternatively, the information of the light source may include position information of the light source and intensity information of the light source. The information of the light source may be estimated based on the light intensity variation information and a luminance variation curve of the light source, which is preset according to the type of the light source.
In reality, due to factors such as the material of the light source and its response time, turning on the light source lasts for a period of time, which can be described as the brightness rise time. The brightness change curve of the light source within the brightness rise time is related to the type of the light source, and the brightness change curves of different types of light sources differ because of different light source materials, response times, and the like. Brightness change curves may be established in advance for different types of light sources; for example, corresponding brightness change curves are respectively established for incandescent lamps, fluorescent lamps, light emitting diodes (LEDs), and other types of light sources.
In some embodiments, the position information of the light source may include direction information of the light source. When estimating the information of the light source, the direction information of the light source may be estimated according to the light intensity change information, and then the light intensity change information may be processed through the neural network model to obtain the intensity information of the light source, wherein the neural network model is obtained by training with sample brightness data as input and the light source intensity corresponding to the sample brightness data as output, and the light source intensity is a light intensity parameter in the brightness change curve. The estimation of the direction information of the light source may or may not be based on the brightness change curve of the light source.
In some other embodiments, the position information of the light source may include direction information of the light source. When the information of the light source is estimated, the light intensity change information can be processed through the neural network model to obtain the intensity information of the light source, and then the direction information of the light source is estimated according to the light intensity change information. The neural network model is obtained by training with sample brightness data as input and light source intensity corresponding to the sample brightness data as output, wherein the light source intensity is a light intensity parameter in a brightness change curve.
In some other embodiments, the position information of the light source may include direction information of the light source. When estimating the information of the light source, the direction information of the light source may be estimated according to the light intensity variation information, and then the intensity information of the light source may be estimated according to the direction information of the light source, the light intensity variation information, and the luminance variation curve of the light source.
In some other embodiments, the position information of the light source may include distance information of the light source. When the information of the light source is estimated, the light intensity change information can be processed through the neural network model, and the distance information and the intensity information of the light source are obtained. The neural network model is obtained by taking sample brightness data as input and taking the light source distance and the light source intensity corresponding to the sample brightness data as output for training.
In some other embodiments, the position information of the light source may include direction information and distance information of the light source. When the information of the light source is estimated, the direction information of the light source can be estimated according to the light intensity change information, and then the light intensity change information is processed through the neural network model to obtain the distance information of the light source, wherein the neural network model is obtained by taking the sample brightness data as input and taking the light source distance corresponding to the sample brightness data as output for training.
In some other embodiments, the position information of the light source may include direction information and distance information of the light source. When the information of the light source is estimated, the light intensity change information may be processed through a neural network model to obtain distance information of the light source, and then the direction information of the light source is estimated according to the light intensity change information, wherein the neural network model is obtained by training with sample brightness data as input and light source distance corresponding to the sample brightness data as output.
In some other embodiments, the position information of the light source may include direction information and distance information of the light source. When estimating the information of the light source, the direction information of the light source may be estimated according to the light intensity variation information, and then the distance information of the light source may be estimated according to the direction information of the light source, the light intensity variation information, and the luminance variation curve of the light source.
In some other embodiments, the position information of the light source may include direction information and distance information of the light source. When the information of the light source is estimated, the direction information of the light source can be estimated according to the light intensity change information, and then the light intensity change information is processed through the neural network model to obtain the distance information of the light source and the intensity information of the light source, wherein the neural network model is obtained by taking the sample brightness data as input and training by taking the light source distance and the light source intensity corresponding to the sample brightness data as output, and the light source intensity is a light intensity parameter in a brightness change curve.
In some other embodiments, the position information of the light source may include direction information and distance information of the light source. When the information of the light source is estimated, the light intensity change information may be processed through a neural network model to obtain distance information of the light source and intensity information of the light source, and then the direction information of the light source is estimated according to the light intensity change information, wherein the neural network model is obtained by taking sample luminance data as input and taking the light source distance and the light source intensity corresponding to the sample luminance data as output for training.
In some other embodiments, the position information of the light source may include direction information and distance information of the light source. When estimating the information of the light source, the direction information of the light source may be estimated according to the light intensity variation information, and then the distance information and the intensity information of the light source may be estimated according to the direction information of the light source, the light intensity variation information, and the luminance variation curve of the light source.
Alternatively, after estimating the information of the light source from the light intensity change information, a virtual object may be added to a scene image collected for a scene to which the target object belongs, from the information of the light source. The scene image may be acquired after the brightness of the light source is no longer changed, and the scene image may include the target object or may not include the target object.
The visual sensor for collecting the scene image and the visual sensor for collecting the light intensity change information of the target object can be the same visual sensor, for example, in the brightness change process of the light source, the light intensity change information of the target object can be collected by adopting a high-speed camera, and after the brightness change process of the light source is finished, the scene image is collected by adopting the high-speed camera.
The visual sensor for collecting the scene image and the visual sensor for collecting the light intensity change information of the target object may not be the same visual sensor. For example, in the process of changing the brightness of the light source, a dynamic vision sensor may be used to collect the light intensity change information of the target object, and after the process of changing the brightness of the light source is finished, a traditional camera is used to collect the scene image. At this time, the shooting angles of the dynamic vision sensor and the traditional camera are the same or similar, for example, the dynamic vision sensor and the traditional camera are adjacently installed on the same electronic device.
According to the embodiment of the application, the information of the light source is estimated according to the collected light intensity change information reflected by the target object in the brightness change process of the light source, so that the accuracy of the obtained light source information can be greatly improved. According to the information of the light source with higher accuracy, the virtual object is added into the scene image, so that the virtual object can be more attached to a real scene, and the generated visual effect is more real.
In order to facilitate understanding of the light source estimation method provided in the embodiment of the present application, the following takes collecting luminance change information by a dynamic vision sensor as an example, and the light source estimation method is described in detail by two specific embodiments.
In one embodiment, as shown in fig. 3, the illuminant estimation method can include the following steps:
S301, in the process of increasing the brightness of the target light source of the set scene, acquiring scene brightness data of the set scene through a dynamic vision sensor.
For example, the setting scene may be a real scene in a room, and the target light source may be a fluorescent lamp in the room. The real scene includes a desktop, and a sphere and a bottle placed on the desktop, as shown in fig. 4. The electronic device provided with the dynamic vision sensor and the traditional camera is placed at a designated position, and the designated position can be any indoor position, which is not limited in the embodiment of the application.
The electronic equipment acquires scene brightness data of the set scene through the dynamic vision sensor. Because the dynamic vision sensor only captures brightness changes of the surfaces of the objects in the set scene, when the fluorescent lamp is in the off state the brightness of the surfaces of the objects in the set scene is unchanged, and the dynamic vision sensor does not output data. At the moment the fluorescent lamp is turned on, the brightness of the fluorescent lamp rises from its lowest value to its highest value, and the brightness of the surfaces of the objects in the set scene undergoes a series of changes. During this process, the dynamic vision sensor acquires a series of brightness change signals and generates a plurality of scene brightness data representing the brightness change of the surface of each object in the set scene. After the brightness of the fluorescent lamp has risen to its maximum value, the brightness of the surface of each object in the set scene no longer changes, and the dynamic vision sensor no longer acquires scene brightness data.
Therefore, the dynamic vision sensor can acquire scene brightness data of the set scene in the process of brightness rise of the target light source, and the scene brightness data are used for representing the brightness change condition of the surface of each object in the set scene.
S302, extracting brightness data of a set coordinate position corresponding to the target object from the scene brightness data, and obtaining light intensity change information reflected by the target object according to the extracted brightness data.
The target object may be any one of the objects included in the set scene. An object with a known normal vector direction is selected from the objects in the set scene as the target object; the position of the target object in the set scene is fixed, and its coordinate position can be marked in advance. Each piece of scene brightness data acquired by the dynamic vision sensor includes the coordinate position of a pixel point, and a plurality of brightness data at the set coordinate position are extracted from the acquired scene brightness data, the set coordinate position being the coordinate position corresponding to the target object. The plurality of brightness data at the set coordinate position represent the brightness change of the surface of the target object in the set scene, and include the brightness data of each pixel point corresponding to the surface of the target object. The light intensity change information reflected by the target object is obtained according to the plurality of brightness data at the set coordinate position.
S303, determining a target brightness area formed by the target object reflecting the light emitted by the target light source according to the light intensity change information.
In this embodiment, a spherical body having a diffuse reflection characteristic is selected as an example of the target object. Specifically, the target luminance area formed by the sphere reflecting light emitted from the target light source may be determined as follows:
I_1 = ∫_{t_0}^{t_n} I_e(x, t) dt

B(I_1) = 1 if I_1 > I_0; B(I_1) = 0 otherwise

wherein t_0 indicates the starting time of the brightness rise process of the target light source and t_n indicates its end time. I_e(x, t) represents the brightness change condition of the sphere surface, which can be represented by the brightness change amplitude of each pixel point corresponding to the sphere surface. I_1 is the integral of the brightness change amplitude of each pixel point over the period from t_0 to t_n, and represents the brightness change value of each pixel point during the brightness rise of the target light source; each pixel point corresponds to a position point on the sphere surface. I_0 is a set threshold. B(I_1) represents the target brightness region formed by the sphere reflecting light emitted by the target light source: for any position point on the sphere surface, if its brightness change value I_1 during the brightness rise of the target light source is greater than the set threshold I_0, the position point is illuminated by the target light source and is assigned the value 1; otherwise, the position point is not illuminated by the target light source and is assigned the value 0. The area finally assigned 1 is the target brightness region.
The brightness change condition I_e(x, t) of the sphere surface during the brightness rise of the target light source can be expressed as:

I_e(x, t) = exp(c · e(x, t)), t ∈ [t_0, t_n]

wherein e(x, t) is the brightness data of the sphere surface extracted from the scene brightness data output by the dynamic vision sensor, namely the light intensity change information reflected by the target object; exp() is an exponential function with the natural constant e as the base; c is a set value; and t ∈ [t_0, t_n] indicates the brightness rise process of the target light source.
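The computation of the target brightness region B(I_1) can be sketched as follows, assuming the per-pixel brightness change amplitudes have already been accumulated into a (T, H, W) array sampled over the brightness rise period; the array layout and threshold handling are assumptions made for illustration.

```python
import numpy as np

def target_brightness_region(change_amplitudes, threshold):
    """Integrate the per-pixel brightness change over [t0, tn] and threshold it:
    1 means the position point is illuminated by the target light source, 0 means it is not.

    change_amplitudes: array (T, H, W) of brightness change amplitudes of the
                       target object's pixels at T sampling instants.
    threshold:         the set threshold I_0.
    """
    I1 = change_amplitudes.sum(axis=0)        # integral from t0 to tn
    return (I1 > threshold).astype(np.uint8)  # B(I_1)
```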
S304, acquiring N brightness area models in different directions, and respectively determining reference brightness areas formed by the target object reflecting light emitted by the corresponding reference light source according to the brightness area models in all directions.
N brightness region models, respectively established for different directions, are acquired. Among the N brightness region models, different brightness region models correspond to reference light sources in different directions, and N is a positive integer. The first brightness region model is used for representing the brightness change of the surface of the target object during the brightness rise of the reference light source in the first direction; in other words, the first brightness region model is used to indicate the reference brightness region formed by the surface of the target object reflecting the light emitted by the reference light source during the brightness rise of the reference light source in the first direction. The first direction is the direction of the reference light source relative to the target object, and the first brightness region model is any one of the N brightness region models.
Alternatively, for the first luminance region model, the determined reference luminance region may be expressed as:
I_2 = ∫_{t_0}^{t_n} I_2(x, t) dt

B(I_2) = 1 if I_2 > I_0; B(I_2) = 0 otherwise

wherein B(I_2) indicates the reference brightness region formed by the sphere reflecting light emitted by the reference light source in the first direction. I_2 is the first brightness region model, which integrates the reference brightness data of each pixel point over the period from t_0 to t_n and represents the brightness change value of each pixel point during the brightness rise of the reference light source; each pixel point corresponds to a position point on the sphere surface. The brightness change value of each pixel point in the first brightness region model I_2 is determined according to the light propagation model I_2(x, t), where I_2(x, t) is the light propagation model corresponding to the reference light source in the first direction.
By way of example, given that the normal vector of the plane at each position point of the surface of the sphere with the diffuse reflection characteristic is known, a corresponding light propagation model can be established for any reference light source with a set direction, a set distance, and a set intensity. The light propagation model I(x, t) corresponding to such a reference light source can then be represented as:

I(x, t) = ρ · max(<n_l, n_x>, 0) · L(t)
wherein I(x, t) represents the brightness change condition of the reference brightness region corresponding to the sphere during the brightness rise of the reference light source; in other words, it represents the brightness data of each pixel point corresponding to the sphere surface during the brightness rise of the reference light source, that is, the brightness change information of each pixel point in the reference brightness region formed by the sphere surface reflecting the light emitted by the reference light source. Each pixel point corresponds to a position point on the sphere surface, x represents the coordinate of the pixel point whose brightness changes, and t represents the time at which the brightness changes. ρ is the reflection coefficient of a spherical surface with the diffuse reflection characteristic, and ρ is a known constant.

n_l represents the direction of the reference light source relative to the sphere, which is a set direction; n_x represents the normal vector direction of the plane at any position point of the sphere surface; <n_l, n_x> represents the included angle between the normal vector direction corresponding to any position point on the sphere surface and the direction of the reference light source. For any position point on the sphere surface, <n_l, n_x> can be understood as the included angle between the direction of the incident light emitted by the reference light source towards that position point and the normal vector direction corresponding to that position point. If the value of the included angle is positive, the light emitted by the reference light source can illuminate the position point, as is the case for a position point of the sphere facing the reference light source; if the value of the included angle is negative, the light emitted by the reference light source cannot illuminate the position point, as is the case for a position point of the sphere facing away from the reference light source. max(<n_l, n_x>, 0) means that, for a position point of the sphere surface that can be illuminated by the reference light source, the brightness change information at each moment is determined according to the included angle between the direction of the incident light at the position point and the corresponding normal vector direction, together with the brightness change curve corresponding to the reference light source; for a position point of the sphere surface that cannot be illuminated by the reference light source, the brightness change information takes the value 0 at every moment.

L(t) represents the intensity information of the reference light source when it is incident on the sphere surface, and is determined according to the brightness change curve corresponding to the reference light source with set intensity and the distance between the reference light source and the sphere; d represents the distance between the reference light source and the sphere, which is a set distance; Φ(t) is the brightness change curve corresponding to the reference light source.
Ideally, the brightness variation curve of the light source at the instant when the light source is turned on can be described by a step function. However, in reality, the process of turning on the reference light source is usually continued for a while due to the influence of the light source material of the reference light source, the response time, and the like, and can be described by the brightness rising time or the brightness rising process. During the brightness rising process, the brightness change of the reference light source can be represented by a brightness change curve. The brightness variation curves of the reference light sources with different intensities are different, and the brightness variation curve of the reference light source can be expressed as follows:
Φ(t)=a*log(t+c)+b
wherein a represents the intensity of the reference light source and is the set intensity; b is the offset of the light source brightness, used to compensate for the deviation between theoretical data and actual data; t represents each moment during the brightness rise of the reference light source; and c is a time offset, used to compensate for the deviation between the assumed and actual starting moments of the brightness rise process of the reference light source.
It can be seen from the luminance variation curve of the reference light source that the intensity of the reference light source is different at different times during the luminance rise of the reference light source. Therefore, at different times during the brightness increase process of the reference light source, the intensity information of the reference light source incident on the sphere surface is also different, and the brightness change information of each pixel point in the reference brightness region formed by the reflection of the light emitted by the reference light source on the sphere surface is also changed along with the time.
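A non-limiting sketch of the light propagation model for the diffuse sphere is given below, combining the clamped cosine term, the brightness change curve Φ(t) = a·log(t + c) + b, and an assumed inverse-square falloff with distance (the exact form of L(t) is not spelled out above, so the 1/d² term is an assumption); all parameter names are illustrative.

```python
import numpy as np

def brightness_curve(t, a, b, c):
    """Phi(t) = a * log(t + c) + b, the brightness rise curve of the reference light source."""
    return a * np.log(t + c) + b

def light_propagation_model(normals, light_dir, rho, d, t, a, b, c):
    """I(x, t) for a diffuse sphere at a single time t.

    normals:   (N, 3) unit normal vectors n_x of the sphere surface points.
    light_dir: (3,) unit vector n_l, the set direction of the reference light source.
    rho:       diffuse reflection coefficient of the sphere surface.
    d:         set distance between the reference light source and the sphere.
    """
    lit = np.maximum(normals @ light_dir, 0.0)   # max(<n_l, n_x>, 0): 0 for points the light cannot reach
    L_t = brightness_curve(t, a, b, c) / d ** 2  # incident intensity, assuming 1/d^2 falloff
    return rho * lit * L_t                       # brightness change of each surface point at time t
```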
I_2(x, t) is any one of the light propagation models corresponding to the reference light source in the first direction. Because the first brightness region model I_2 is obtained by integrating the reference brightness data of each pixel point of the light propagation model for the first direction over the period from t_0 to t_n, the integration process can eliminate the influence of reference light sources with different distances and different intensities on the obtained reference brightness region. Therefore, selecting any one of the light propagation models with different distances and different intensities corresponding to the reference light source in the first direction yields approximately the same reference brightness region. That is, the light propagation models corresponding to reference light sources with different distances and different intensities have essentially no effect on determining the reference brightness region formed by the sphere reflecting the light emitted by the reference light source, and the reference brightness region B(I_2) formed by the sphere reflecting light emitted by the reference light source in the first direction can be determined according to any one of the light propagation models corresponding to the reference light source in the first direction.
The first direction is any one of N different directions, and through the above process, the reference luminance regions corresponding to the luminance region models in the N different directions can be determined respectively.
S305, comparing the target luminance region with the reference luminance regions corresponding to the N luminance region models in different directions, and taking the luminance region model whose reference luminance region has the largest overlapping region with the target luminance region as the target luminance region model.
And S306, taking the direction of the reference light source of the target brightness region model as the light source direction of the target light source relative to the target object.
And comparing the target brightness region with the reference brightness regions corresponding to the N brightness region models in different directions respectively to determine the overlapping region of the target brightness region and each reference brightness region. And taking the direction of the reference light source of the brightness area model with the maximum overlapping area as the light source direction of the target light source relative to the target object. The above process may also be understood as comparing the target luminance region with the reference luminance regions corresponding to the N luminance region models in different directions, respectively, to determine the difference between the target luminance region and each of the reference luminance regions. Taking the direction of the reference light source of the luminance region model with the minimum difference as the light source direction of the target light source relative to the target object, the process may be expressed as:
n_l = argmin_{n_l} | B(I_1) − B(I_2) |

further, the formula can also be expressed as:

n_l = argmin_{n_l} Σ_{x ∈ S} | B(I_1)(x) − B(I_2)(x) |

wherein S is the union of the target luminance region and the reference luminance region; the direction n_l of the reference light source of the luminance region model for which the difference between the target luminance region and the reference luminance region over S is minimal is taken as the light source direction of the target light source relative to the target object.
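A sketch of S305/S306 is given below, under the assumption that the target brightness region and the N reference brightness regions are available as binary maps; the mismatch is scored over the union of the two regions, following the formula above, and the dictionary-based bookkeeping is purely illustrative.

```python
import numpy as np

def estimate_light_direction(target_region, reference_regions):
    """Pick the reference direction whose brightness region differs least from
    the target brightness region (i.e. overlaps it most).

    target_region:     (H, W) binary map B(I_1).
    reference_regions: dict mapping a direction (e.g. a tuple n_l) to its
                       (H, W) binary map B(I_2).
    """
    best_dir, best_diff = None, np.inf
    for direction, ref_region in reference_regions.items():
        union = target_region.astype(bool) | ref_region.astype(bool)
        diff = np.abs(target_region.astype(np.int32) - ref_region.astype(np.int32))[union].sum()
        if diff < best_diff:
            best_dir, best_diff = direction, diff
    return best_dir  # light source direction of the target light source relative to the target object
```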
S307, the light intensity change information is processed through the neural network model, and the light source distance of the target light source relative to the target object and the light source intensity of the target light source are obtained.
Alternatively, as shown in fig. 5, the light intensity variation information includes a plurality of luminance data, which may be preprocessed before being input into the neural network model; the preprocessing may include discretizing the plurality of luminance data in the time dimension to obtain discretized data. For example, the plurality of luminance data may be converted into 128 × 12 discretized data, which may also be referred to as a tensor. The discretized data is input into the neural network model, and the light source distance of the target light source relative to the target object and the light source intensity of the target light source output by the neural network model are obtained.
Illustratively, the neural network model may include a feature extraction network and fully connected layers connected to the feature extraction network. The feature extraction network may employ the network structure of ResNet 50. ResNet 50 is a deep convolutional neural network used to extract features of the input data. The neural network model shown in fig. 5 is provided with four parallel fully connected layers, and each fully connected layer is connected to the feature extraction network. The first fully connected layer outputs the light source intensity a in the brightness change curve, namely the light source intensity of the target light source; the second fully connected layer outputs the offset b in the brightness change curve; the third fully connected layer outputs the time offset c in the brightness change curve; and the fourth fully connected layer outputs the light source distance d of the target light source relative to the target object.
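A sketch of such a network is shown below, written in PyTorch with the torchvision ResNet-50 as an assumed stand-in for the feature extraction network (the text does not name a framework); the input channel count, the 2048-dimensional feature size and the single-value heads are assumptions made for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

class LightSourceNet(nn.Module):
    """ResNet-50 feature extractor followed by four parallel fully connected heads:
    intensity a, offset b, time offset c of the brightness change curve, and
    light source distance d."""

    def __init__(self, in_channels=12):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # adapt the first convolution to the discretized brightness-data tensor
        backbone.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7,
                                   stride=2, padding=3, bias=False)
        backbone.fc = nn.Identity()       # keep the 2048-d feature vector
        self.features = backbone
        self.head_a = nn.Linear(2048, 1)  # light source intensity a
        self.head_b = nn.Linear(2048, 1)  # brightness offset b
        self.head_c = nn.Linear(2048, 1)  # time offset c
        self.head_d = nn.Linear(2048, 1)  # light source distance d

    def forward(self, x):                 # x: (batch, in_channels, H, W)
        f = self.features(x)
        return self.head_a(f), self.head_b(f), self.head_c(f), self.head_d(f)
```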
The neural network model is obtained by taking sample brightness data as input and taking the light source distance and the light source intensity corresponding to the sample brightness data as output for training. The sample brightness data is brightness data of the surface of the target object obtained in the process of brightness rise of the reference light source.
When training a neural network model, a training data set is obtained first, the training data set includes sample luminance data corresponding to a plurality of different scenes, and a light source of each scene is a light source with known light source intensity, that is, each parameter in a luminance change curve of the light source of each scene is known. And the sample brightness data corresponding to each scene is marked with a corresponding distance label, an offset label, a time offset label and an intensity label. The distance label is used for marking the distance between the target object corresponding to the sample brightness data and the reference light source. The offset label, the time offset label and the intensity label are respectively an offset, a time offset and an intensity in a brightness change curve of the reference light source corresponding to the sample brightness data.
Randomly extracting sample brightness data corresponding to a scene from a training data set, discretizing the extracted sample brightness data on a time dimension, inputting the discretized data into a neural network model to be trained, and obtaining the prediction strength, the prediction offset, the prediction time offset and the prediction distance output by the neural network model. Determining a first loss value according to the predicted intensity output by the neural network model and the intensity label of the sample brightness data; determining a second loss value according to the predicted offset output by the neural network model and the offset label of the sample brightness data; determining a third loss value according to the predicted time offset output by the neural network model and the time offset label of the sample brightness data; and determining a fourth loss value according to the predicted distance output by the neural network model and the distance label of the sample brightness data. A total loss value is determined based on the first loss value, the second loss value, the third loss value, and the fourth loss value. And judging whether the neural network model converges according to the total loss value. And if the total loss value of the neural network model converges to the preset expected value, or the variation amplitude of the total loss value converges to the preset expected value, the neural network model is considered to be converged. Otherwise, adjusting the network parameters of the neural network model according to the total loss value until the neural network model converges, and taking the current network parameters as the network parameters of the neural network model to obtain the trained neural network model.
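The four loss terms and their combination into the total loss value can be sketched as follows; the choice of L1 loss and of equal weighting is an assumption, since the text does not specify the loss function.

```python
import torch.nn.functional as F

def total_loss(predictions, labels):
    """Combine the intensity, offset, time-offset and distance losses.

    predictions: (a_pred, b_pred, c_pred, d_pred) output by the model.
    labels:      (a_true, b_true, c_true, d_true) from the intensity, offset,
                 time offset and distance labels of the sample brightness data.
    """
    first, second, third, fourth = (F.l1_loss(p, t) for p, t in zip(predictions, labels))
    return first + second + third + fourth  # total loss value used to judge convergence
```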
The discretization data of a plurality of brightness data corresponding to the target light source are input into the trained neural network model, so that the light source intensity, the offset, the time offset and the light source distance output by the neural network model can be obtained, and the light source distance of the target light source relative to the target object and the light source intensity of the target light source can be determined.
In some embodiments, the above step S307 may also be performed before step S304.
And S308, acquiring a scene image for the set scene through a traditional camera after the brightness rising process of the target light source of the set scene is finished.
And S309, adding a virtual object to the acquired scene image according to the determined light source direction, light source distance and light source intensity.
Because the traditional camera and the dynamic vision sensor are installed on the same electronic equipment, the angle of the dynamic vision sensor for acquiring scene brightness data is basically consistent with the angle of the traditional camera for acquiring images. Therefore, according to the light source direction, the light source distance and the light source intensity determined in the process, the virtual object is added into the acquired scene image, so that the virtual object can be more attached to a real environment, and the generated visual effect is more real.
In another embodiment, as shown in fig. 6, the illuminant estimation method may include the steps of:
S601, in the process of increasing the brightness of the target light source of the set scene, acquiring scene brightness data of the set scene through a dynamic vision sensor.
S602, extracting brightness data of a set coordinate position corresponding to the target object from the scene brightness data, and obtaining light intensity change information reflected by the target object according to the extracted brightness data.
As in the above embodiment, the setting scene may be an indoor real scene, and the target light source may be an indoor fluorescent lamp. The real scene comprises a desktop, a sphere and a bottle, wherein the sphere and the bottle are placed on the desktop, the sphere is selected as a target object, and a plurality of brightness data of a set coordinate position corresponding to the sphere are obtained. And obtaining the light intensity change information reflected by the target object according to the plurality of brightness data of the set coordinate positions. The specific process of acquiring the plurality of luminance data may be performed with reference to the above embodiment, and will not be described herein again.
S603, determine a target luminance region model from the N luminance region models.
In the N brightness area models, different brightness area models correspond to reference light sources in different directions. The first brightness area model is used for indicating a reference brightness area formed by reflecting light emitted by a reference light source on the surface of a target object in the process of increasing the brightness of the reference light source in the first direction; the direction of the reference light source relative to the target object is a first direction, the first luminance region model is any one of the N luminance region models, and the first direction is any one of N different directions.
From the N luminance region models, a target luminance region model is determined, which is a luminance region model in which a difference between the reference luminance region and the target luminance region is smallest.
Optionally, an initial direction input by the user may be received, the initial direction being one of the N directions. And taking the initial direction as a direction to be matched, and acquiring a brightness region model corresponding to the reference light source in the direction to be matched. And determining the difference between the target brightness region and the reference brightness region according to the brightness data acquired aiming at the target light source and the brightness region model corresponding to the reference light source in the direction to be matched. The target brightness area is determined according to the light intensity change information reflected by the target object and is a brightness area formed by the target object reflecting light emitted by the target light source; the reference brightness region is determined according to the brightness region model, and is a brightness region formed by the target object reflecting light emitted by the reference light source in the direction to be matched. And updating the direction to be matched according to the obtained difference by a gradient descent method to obtain a new direction to be matched. And acquiring a brightness region model corresponding to the new reference light source in the direction to be matched, and determining the difference between the target brightness region and a reference brightness region corresponding to the brightness region model in the new direction to be matched according to the steps. And repeatedly executing the iteration processes to gradually reduce the difference obtained in each iteration process until the change of the difference determined for adjacent L1 times is smaller than the set expected value, wherein L1 is a positive integer. A luminance region model is arbitrarily selected from the luminance region models of the L1 times, or a luminance region model of the last iteration is taken as a target luminance region model. The N luminance region models are formed by the respective luminance region models used in the iterative process.
And S604, taking the direction of the reference light source of the target brightness area model as the light source direction of the target light source relative to the target object.
S605, a target luminance change model is determined from the M luminance change models for the light source direction of the target light source.
In the M brightness change models, different brightness change models correspond to reference light sources with different distances and different intensities. The first brightness change model is used for indicating a reference brightness region formed by the surface of the target object reflecting light emitted by the reference light source during the brightness rise of a reference light source at a first distance and with a first intensity; the distance between the reference light source and the target object is the first distance, and the intensity of the reference light source is the first intensity; the first brightness change model is any one of the M brightness change models.
From the M luminance change models, a target luminance change model is determined. The target luminance change model is a luminance change model in which a difference between luminance change magnitudes of the reference luminance region and the target luminance region is the smallest among the M luminance change models.
Optionally, an initial distance and an initial intensity input by a user may be received. And acquiring a brightness change model corresponding to a reference light source with the direction as the light source direction of the target light source and the distance as the initial distance and the intensity as the initial intensity, and taking the brightness change model as a brightness change model to be matched. The brightness change model to be matched is one of the M brightness change models. And determining the difference between the brightness change amplitude of the target brightness region and the brightness change amplitude of the reference brightness region according to the plurality of brightness data acquired aiming at the target light source and the brightness change model to be matched. Wherein, the brightness change amplitude of the target brightness area is determined according to the light intensity change information reflected by the target object; the brightness variation amplitude of the reference brightness area is determined according to the brightness variation model to be matched. And updating the distance and the intensity of the reference light source corresponding to the brightness change model to be matched by a gradient descent method according to the obtained difference to obtain a new brightness change model to be matched. And according to the steps, determining the difference between the brightness change amplitude of the target brightness area and the brightness change amplitude of the reference brightness area corresponding to the new brightness change model to be matched. And repeatedly executing the iteration process to gradually reduce the difference obtained in each iteration process until the change of the difference determined for the adjacent L2 times is smaller than the set expected value, wherein L2 is a positive integer. A luminance change model is arbitrarily selected from the luminance change models of the L2 times, or the luminance change model of the last iteration process is used as the target luminance change model. The luminance region models used in the iterative process constitute the M luminance change models.
The above iterative process may also be understood as determining a target brightness change model by a gradient descent method, which may be expressed by the following formula:
(d, a) = argmin_{d, a} Σ_x ∫_{t_0}^{t_n} ( I_e(x, t) − I(x, t) )² dt

wherein I_e(x, t) represents the brightness change of the sphere surface during the brightness rise of the target light source, which can be represented by the brightness change amplitude of each pixel point corresponding to the sphere surface; I(x, t) represents the brightness change of the reference brightness region corresponding to the sphere during the brightness rise of the corresponding reference light source, which can likewise be represented by the brightness change amplitude of each pixel point corresponding to the sphere surface. The determination of I_e(x, t) and I(x, t) can be performed with reference to the previous embodiment and will not be described again here.
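A sketch of the gradient-descent fit of the light source distance and intensity (together with the curve offsets b and c) is given below, written with PyTorch autograd for illustration; the squared-error objective, the optimizer, the learning rate, the initial values and the 1/d² falloff are all assumptions.

```python
import torch

def fit_distance_and_intensity(observed, normals, light_dir, rho, times,
                               steps=500, lr=1e-2):
    """Adjust the reference light source's distance d and intensity a (plus the
    curve offsets b and c) until the modelled brightness change I(x, t) matches
    the observed brightness change I_e(x, t) of the target brightness region.

    observed:  tensor (T, N), observed brightness change amplitudes of the N
               surface points at the T sampled times.
    normals:   tensor (N, 3) of unit normals; light_dir: tensor (3,) giving the
               light source direction estimated in the previous step.
    times:     tensor (T,) of sample times within the brightness rise period.
    """
    d = torch.tensor(1.0, requires_grad=True)   # initial distance (user-supplied in the embodiment)
    a = torch.tensor(1.0, requires_grad=True)   # initial intensity
    b = torch.zeros((), requires_grad=True)
    c = torch.ones((), requires_grad=True)
    lit = torch.clamp(normals @ light_dir, min=0.0)           # max(<n_l, n_x>, 0)
    optimizer = torch.optim.Adam([d, a, b, c], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        phi = a * torch.log(times + c) + b                    # brightness change curve
        model = rho * phi[:, None] / d ** 2 * lit[None, :]    # modelled I(x, t), assumed 1/d^2 falloff
        loss = ((model - observed) ** 2).mean()               # difference between amplitudes
        loss.backward()
        optimizer.step()
    return d.item(), a.item()   # light source distance and light source intensity
```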
S606, the distance of the reference light source of the target luminance change model is used as the light source distance of the target light source relative to the target object, and the intensity of the reference light source of the target luminance change model is used as the light source intensity of the target light source.
And S607, acquiring a scene image for the set scene through the traditional camera after the brightness rise process of the target light source of the set scene is finished.
And S608, adding a virtual object to the acquired scene image according to the determined light source direction, light source distance and light source intensity.
According to the embodiment of the application, the direction information of the light source is estimated from the light intensity change information reflected by the target object and collected during the brightness rise of the light source. Since different light sources have different brightness change curves when they appear, the intensity information of the light source and the distance information of the light source can also be estimated, so that a nearby weak light source and a distant strong light source can be distinguished, which improves the accuracy of the obtained light source information. By adding the virtual object to the scene image according to light source information with higher accuracy, the virtual object fits the real scene better and the generated visual effect is more realistic.
Fig. 7 is an effect diagram of inserting a virtual object into a scene image when a target light source in a set scene is not considered, fig. 8 is an effect diagram of inserting a virtual object into a scene image according to determined light source information after light source information of the target light source is determined by a conventional method, and fig. 9 is an effect diagram of inserting a virtual object into a scene image according to determined light source information after light source information of the target light source is determined by a method provided by an embodiment of the present application.
As can be seen from fig. 7, the real objects in the scene image are illuminated by the target light source and all have shadows, while the virtual objects inserted in the scene image do not have shadows, so that the user can obviously feel that the virtual objects are not the objects in the real scene, but the inserted objects are edited at a later stage. As can be seen from fig. 8, although the virtual object inserted into the scene image has a shadow, the shadow tilt angle is clearly different from the real object in the scene image, and also gives the user a very unreal feeling. The angle and size of the shadow of the virtual object inserted in fig. 9 both conform to the actual situation of the shadow generated when the target light source irradiates the virtual object, so that the visual perception of the virtual object inserted in the scene image to the user is more real, just like the real object in the set scene.
Based on the same inventive concept as the method described above, as shown in fig. 10, the embodiment of the present application further provides a light source estimation device 1000. The light source estimation device is applied to an electronic device with computing capability, such as the electronic device 100 shown in fig. 1, and the electronic device may include a vision sensor, which may be a high-speed camera, or a dynamic vision sensor and a conventional camera. The light source estimation device can be used for realizing the functions of the method embodiment, so that the beneficial effects of the method embodiment can be achieved. The light source estimation device may include an information acquisition unit 1001 and a light source estimation unit 1002. The light source estimation device 1000 is used to implement the functions in the method embodiment shown in fig. 2 described above. When the light source estimation device 1000 is used to implement the functionality of the method embodiment shown in fig. 2: the information acquisition unit 1001 may be configured to perform S201, and the light source estimation unit 1002 may be configured to perform S202.
Such as: an information acquisition unit 1001 configured to acquire light intensity change information reflected by a target object through a vision sensor, where the target object is in a stationary state;
a light source estimating unit 1002, configured to estimate information of the light source according to the light intensity variation information, where the information of the light source includes position information of the light source and/or intensity information of the light source.
In a possible implementation, as shown in fig. 11, the illuminant estimation device 1000 may further include an image processing unit 1003. An image processing unit 1003, configured to add a virtual object to a scene image captured for a scene to which the target object belongs according to the information of the light source.
In one possible implementation, the vision sensor may comprise a dynamic vision sensor.
In a possible implementation manner, the information obtaining unit 1001 may specifically be configured to: acquiring scene brightness data acquired by a dynamic vision sensor in a light source brightness change process; and extracting the brightness data of the coordinate position calibrated in advance from the scene brightness data, and obtaining the light intensity change information reflected by the target object according to the obtained brightness data.
In a possible implementation, the light source estimation unit 1002 may be specifically configured to:
and estimating the information of the light source according to the light intensity change information and the brightness change curve of the light source, wherein the brightness change curve of the light source is preset according to the type of the light source.
In one possible embodiment, the position information of the light source includes direction information or distance information of the light source; the light source estimating unit 1002 may specifically be configured to: and estimating the distance information and/or the intensity information of the light source according to the light intensity change information and the brightness change curve of the light source.
In a possible implementation, the light source estimation unit 1002 may be specifically configured to: and estimating the direction information of the light source according to the light intensity change information and the brightness change curve of the light source.
In a possible implementation, the light source estimation unit 1002 may be specifically configured to: processing the light intensity change information through a neural network model to obtain distance information and/or intensity information of the light source, wherein the neural network model is obtained by taking sample brightness data as input and taking the light source distance and/or light source intensity corresponding to the sample brightness data as output for training; the light source intensity is a light intensity parameter in the brightness variation curve.
In a possible implementation, the light source estimation unit 1002 may be specifically configured to: and estimating the distance information and/or the intensity information of the light source according to the direction information and the light intensity change information of the light source and the brightness change curve of the light source.
In the brightness change process of the light source, the light intensity change information reflected by the target object is acquired through the vision sensor, and the information of the light source is estimated according to the light intensity change information. Compared with estimating the information of the light source based on a single image acquired after the light source appears, the accuracy of the obtained light source information can be greatly improved.
Based on the same inventive concept as the method described above, a light source estimation system is further provided in the embodiment of the present application, and as shown in fig. 12, a processor 1201 and a visual sensor 1202 are included in the light source estimation system 1200. The processor 1201 and the vision sensor 1202 may be provided in the same electronic apparatus or may be provided in different electronic apparatuses. And the vision sensor 1202 is used for acquiring light intensity change information reflected by the target object, wherein the target object is in a static state. The vision sensor 1202 may be a dynamic vision sensor, or a high-time-resolution vision sensor such as a high-speed camera, a high-speed video camera, or an ultra-high-speed video camera. For a more detailed description of the dynamic vision sensor, reference is made to the above description of the dynamic vision sensor 170 shown in fig. 1, and a detailed description thereof is omitted here. The processor 1201 is coupled to the vision sensor 1202 and is configured to perform the method illustrated in fig. 2.
In some embodiments, the light source estimation system 1200 may further include a memory for storing instructions or programs to be executed by the processor 1201, or for storing input data required by the processor 1201 to execute the instructions or programs, or for storing data generated after the processor 1201 executes the instructions or programs. The processor 1201 is configured to execute the instructions or programs stored in the memory to perform the functions of the method embodiment shown in fig. 2. For example, when the light source estimation system 1200 is used to implement the method shown in fig. 2, the processor 1201 is configured to perform the functions of the information acquisition unit 1001 and the light source estimation unit 1002. For example, the information acquisition unit 1001 may be implemented by the processor 1201 calling a program or instructions stored in the memory to acquire the light intensity variation information reflected by the target object collected by the vision sensor 1202, where the target object is in a stationary state. The light source estimation unit 1002 may be implemented by the processor 1201 calling a program or instructions stored in the memory to process the light intensity variation information reflected by the target object so as to estimate the information of the light source.
It should be noted that, in some embodiments, the processor 1201 and the vision sensor 1202 may be provided in different electronic devices. In this case, the electronic device in which the processor 1201 is located may connect to the vision sensor 1202 through a vision sensor interface when the vision sensor 1202 needs to be used. In other embodiments, the electronic device may alternatively obtain, through a network or other means, the to-be-processed light intensity change information reflected by the target object, where the light intensity change information is collected by the vision sensor 1202 and stored in a server or another storage medium on the network.
It can be understood that the processor 1201 in the embodiments of the present application may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The general-purpose processor may be a microprocessor or any conventional processor.
The method steps in the embodiments of the present application may be implemented by hardware, or may be implemented by a processor executing software instructions. The software instructions may consist of corresponding software modules, and the software modules may be stored in a random access memory (RAM), a flash memory, a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. In addition, the ASIC may reside in a terminal device. Of course, the processor and the storage medium may also reside as discrete components in a terminal device.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented by software, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are performed in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, user equipment, or another programmable device. The computer program or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer program or instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless manner. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or a data center integrating one or more usable media. The usable medium may be a magnetic medium such as a floppy disk, a hard disk, or a magnetic tape; an optical medium such as a digital video disc (DVD); or a semiconductor medium such as a solid state drive (SSD).
In the embodiments of the present application, unless otherwise specified or in case of a logical conflict, the terms and descriptions in different embodiments are consistent and may be mutually referenced, and technical features in different embodiments may be combined into a new embodiment according to their inherent logical relationship. Furthermore, the terms "comprises" and "comprising," as well as any variations thereof, are intended to cover a non-exclusive inclusion: a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations may be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely illustrative of the concepts defined by the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to encompass such modifications and variations.

Claims (13)

1. A light source estimation method, the method comprising:
acquiring light intensity change information reflected by a target object through a vision sensor, wherein the target object is in a static state;
and estimating information of a light source according to the light intensity change information, wherein the information of the light source comprises position information of the light source and/or intensity information of the light source.
2. The method of claim 1, wherein after the estimating information of the light source according to the light intensity change information, the method further comprises:
adding a virtual object, according to the information of the light source, to a scene image acquired for a scene to which the target object belongs.
3. The method according to claim 1 or 2, wherein the estimating information of the light source according to the light intensity change information comprises:
estimating the information of the light source according to the light intensity change information and a brightness change curve of the light source, wherein the brightness change curve of the light source is preset according to a type of the light source.
4. The method of claim 3, wherein the position information of the light source comprises direction information and distance information of the light source; the estimating the information of the light source according to the light intensity change information and the brightness change curve of the light source includes:
estimating the direction information of the light source according to the light intensity change information;
and estimating the distance information and/or the intensity information of the light source according to the light intensity change information and the brightness change curve of the light source.
5. The method of claim 3, wherein the position information of the light source comprises direction information and distance information of the light source; the estimating the information of the light source according to the light intensity change information and the brightness change curve of the light source includes:
estimating the direction information of the light source according to the light intensity change information and the brightness change curve of the light source;
and processing the light intensity change information through a neural network model to obtain the distance information and/or the intensity information of the light source, wherein the neural network model is trained with sample brightness data as input and with a light source distance and/or light source intensity corresponding to the sample brightness data as output.
6. The method of claim 3, wherein the position information of the light source comprises direction information and distance information of the light source; the estimating the information of the light source according to the light intensity change information and the brightness change curve of the light source comprises:
estimating direction information of the light source according to the light intensity change information and a brightness change curve of the light source;
and estimating the distance information and/or the intensity information of the light source according to the direction information of the light source, the light intensity change information and the brightness change curve of the light source.
7. The method of any one of claims 1-6, wherein the vision sensor comprises a dynamic vision sensor.
8. A light source estimation apparatus, wherein the apparatus comprises:
an information acquisition unit, configured to acquire light intensity change information reflected by a target object through a vision sensor, wherein the target object is in a static state; and
a light source estimation unit, configured to estimate information of a light source according to the light intensity change information, wherein the information of the light source comprises position information and/or intensity information of the light source.
9. The apparatus of claim 8, further comprising:
an image processing unit, configured to add a virtual object, according to the information of the light source, to a scene image acquired for a scene to which the target object belongs.
10. The apparatus according to claim 8 or 9, wherein the light source estimation unit is specifically configured to:
estimate the information of the light source according to the light intensity change information and a brightness change curve of the light source, wherein the brightness change curve of the light source is preset according to a type of the light source.
11. The apparatus of any one of claims 8-10, wherein the vision sensor comprises a dynamic vision sensor.
12. A light source estimation system, comprising:
a vision sensor, configured to acquire light intensity change information reflected by a target object, wherein the target object is in a static state; and
a processor connected to the vision sensor and configured to perform the method of any one of claims 1-7.
13. A computer-readable storage medium having a computer program stored therein, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 7.
CN202110977283.5A 2021-06-18 2021-08-24 Light source estimation method, device and system Pending CN115499576A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2021106792324 2021-06-18
CN202110679232 2021-06-18

Publications (1)

Publication Number Publication Date
CN115499576A true CN115499576A (en) 2022-12-20

Family

ID=84464396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110977283.5A Pending CN115499576A (en) 2021-06-18 2021-08-24 Light source estimation method, device and system

Country Status (1)

Country Link
CN (1) CN115499576A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination