CN113837990B - Noise monitoring method, electronic equipment, chip system and storage medium

Info

Publication number
CN113837990B
CN113837990B
Authority: CN (China)
Prior art keywords: image, target image, noise, pixel, processor
Legal status: Active
Application number: CN202110657922.XA
Other languages: Chinese (zh)
Other versions: CN113837990A (en)
Inventors: 黄邦邦 (Huang Bangbang), 张文礼 (Zhang Wenli)
Current Assignee: Honor Device Co Ltd
Original Assignee: Honor Device Co Ltd

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 5/90
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30121 CRT, LCD or plasma display

Abstract

The embodiment of the application provides a noise monitoring method, electronic equipment, a chip system and a storage medium, and relates to the technical field of ambient light sensors. The method comprises the following steps: after the brightness of the display screen of the electronic equipment changes, when the image is refreshed for the first time, a target image of a specific area is acquired from the first refreshed image, and image noise is calculated from the target image corresponding to the first refreshed image and the changed brightness value of the display screen; on subsequent image refreshes, when the target image corresponding to the currently refreshed image is the same as the target image corresponding to the previously refreshed image, the image noise of the target image corresponding to the currently refreshed image is not calculated; when the target image corresponding to the currently refreshed image differs from the target image corresponding to the previously refreshed image, the image noise of the target image corresponding to the currently refreshed image is calculated. In this way, the number of times the image noise is calculated is reduced, thereby reducing power consumption.

Description

Noise monitoring method, electronic equipment, chip system and storage medium
Technical Field
The embodiment of the application relates to the field of ambient light sensors, in particular to a noise monitoring method, electronic equipment, a chip system and a storage medium.
Background
With the development of electronic devices, the screen-to-body ratio of the display screen of the electronic device has become higher and higher. In pursuit of a high screen-to-body ratio, the ambient light sensor of the electronic device may be disposed below the Organic Light-Emitting Diode (OLED) screen of the electronic device. Because the OLED screen itself emits light, the ambient light collected by an ambient light sensor disposed below the OLED screen includes the light emitted by the OLED screen itself, so the ambient light collected by the ambient light sensor is inaccurate.
In order to measure the ambient light accurately, the ambient light collected by the ambient light sensor and the noise corresponding to the light emitted by the OLED screen itself can be obtained, and the true ambient light is then obtained from the collected ambient light and the noise. At present, the brightness of the OLED screen is generally taken directly as the noise corresponding to the light emitted by the OLED screen itself; however, the noise obtained in this way is not accurate.
Disclosure of Invention
The embodiment of the application provides a noise monitoring method, electronic equipment, a chip system and a storage medium, and solves the problem that the currently monitored noise is inaccurate.
In order to achieve the above purpose, the following technical solutions are adopted:
in a first aspect, an embodiment of the present application provides a noise monitoring method, including:
in response to monitoring that the brightness of the display screen of the electronic equipment changes, acquiring the brightness value of the display screen;
receiving a first image;
acquiring a first target image from the first image, wherein the first target image is an image in a first area;
calculating to obtain a first image noise based on the brightness value and the first target image;
receiving a second image;
judging whether a second target image on the second image is the same as the first target image, wherein the second target image is an image in the first area;
and if the second target image is different from the first target image, calculating to obtain second image noise based on the brightness value and the second target image.
In the embodiment of the present application, the noise corresponding to the light emitted by the display screen actually includes not only the interference of the brightness of the display screen with the collected ambient light, but also the interference of the image displayed by the display screen with the collected ambient light. Therefore, the embodiment of the application calculates the image noise from the brightness of the display screen and the image displayed in the partial area of the display screen above the ambient light sensor. When the brightness value of the display screen is unchanged and the target image is unchanged, the noise corresponding to the light emitted by the display screen is also unchanged; therefore, only if the target images acquired in two successive refreshes differ is the second image noise calculated, based on the brightness value and the later acquired target image. In this way, the noise affecting the ambient light collected by the ambient light sensor can be obtained accurately.
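As an illustrative aid only (not part of the claimed method), the decision of when to recompute the image noise can be sketched in Python; the type aliases and the injected compute_noise callable are assumptions, since the patent leaves the concrete noise formula to the noise algorithm library.

```python
from typing import Callable, List, Optional, Tuple

Pixel = Tuple[int, int, int]          # (r, g, b) values of one pixel point
TargetImage = List[List[Pixel]]       # pixel rows of the image in the first area

def noise_for_refresh(brightness: float,
                      current: TargetImage,
                      previous: Optional[TargetImage],
                      previous_noise: Optional[float],
                      compute_noise: Callable[[float, TargetImage], float]) -> Optional[float]:
    """Recompute the image noise only when the target image has changed
    since the previous refresh; otherwise reuse the previously computed value."""
    if previous is not None and current == previous:
        # Same target image as last time: the noise is unchanged, skip the calculation.
        return previous_noise
    # First refresh after a brightness change, or a changed target image:
    # compute the image noise from the brightness value and the target image.
    return compute_noise(brightness, current)
```

A caller would invoke this once per refresh, passing the target image cut out of the newly refreshed frame together with the target image and noise value from the previous call.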
As an implementation manner of the first aspect, after determining whether the second target image on the second image is the same as the first target image, the method further includes:
and if the second target image is the same as the first target image, stopping calculating and obtaining the second image noise based on the brightness value and the second target image.
In the embodiment of the present application, when the target images obtained in two successive refreshes are the same, the noise corresponding to the light emitted by the display screen is unchanged; therefore, the calculation of the second image noise based on the brightness value and the second target image can be stopped. In this way, the power consumption of obtaining the noise can be reduced.
As an implementation manner of the first aspect, the determining whether the second target image on the second image is the same as the first target image includes:
storing a third image on the second image that includes the second target image in a write-back memory;
and judging whether the second target image and the first target image on the third image stored in the write-back memory are the same or not.
As an implementation of the first aspect, the first target image is stored in a first storage space;
the determining whether the second target image and the first target image on the third image stored in the write-back memory are the same includes:
acquiring pixel data of a first pixel set from the third image stored in the write-back memory, wherein the first pixel set is a set formed by at least one pixel point in a first area on the third image;
storing pixel data of the first set of pixels in a second storage space;
acquiring pixel data of a second pixel set from the pixel data stored in the first storage space, wherein the storage position of the pixel data of the second pixel set in the first storage space is the same as the storage position of the pixel data of the first pixel set in the second storage space;
judging whether the pixel data of the first pixel set and the pixel data of the second pixel set are the same or not;
and if the pixel data of the first pixel set is different from the pixel data of the second pixel set, determining that the second target image is different from the first target image.
As an implementation manner of the first aspect, if the pixel data of the first pixel set is different from the pixel data of the second pixel set, the method further includes:
acquiring pixel data of a third pixel set from the third image stored in the write-back memory, wherein the third pixel set is a set formed by at least one pixel point in the first region on the third image, and the pixel points in the third pixel set and the pixel points in the first pixel set are two adjacent rows of pixel points in the first region;
storing pixel data of the third set of pixels in the second storage space.
As an implementation manner of the first aspect, after storing the pixel data of the third set of pixels in the second storage space, the method further includes:
and storing the pixel data of each pixel point in the first area on the third image in the second storage space.
As an implementation manner of the first aspect, after determining whether the pixel data of the first pixel set and the pixel data of the second pixel set are the same, the method further includes:
if the pixel data of the first pixel set is the same as the pixel data of the second pixel set, acquiring the pixel data of a third pixel set from the third image stored in the write-back memory, wherein the third pixel set is a set formed by at least one pixel point in a first area on the third image, and the pixel points in the third pixel set and the pixel points in the first pixel set are two adjacent rows of pixel points in the first area;
storing pixel data of the third set of pixels in the second storage space;
acquiring pixel data of a fourth pixel set from the pixel data stored in the first storage space, wherein the storage position of the pixel data of the fourth pixel set in the first storage space is the same as the storage position of the pixel data of the third pixel set in the second storage space;
judging whether the pixel data of the third pixel set and the pixel data of the fourth pixel set are the same or not;
and if the pixel data of the third pixel set is the same as the pixel data of the fourth pixel set and the pixel data of the third pixel set is the pixel data corresponding to the pixel point in the last row of the first area, determining that the second target image is the same as the first target image.
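The row-by-row comparison described above can be sketched as follows; representing the two storage spaces as Python lists of row buffers and assuming both target images contain the same number of rows are simplifications made purely for illustration.

```python
from typing import List

Row = bytes  # pixel data of one row of pixel points in the first area

def compare_and_store(new_rows: List[Row], first_storage: List[Row],
                      second_storage: List[Row]) -> bool:
    """Compare the target image read from the write-back memory (new_rows)
    with the rows of the previous target image kept in the first storage space.
    Every row read is also stored in the second storage space so that it can
    serve as the reference for the next refresh. Returns True if all rows match."""
    same = True
    for i, row in enumerate(new_rows):
        second_storage.append(row)            # store the row just read
        if same and row != first_storage[i]:
            same = False                      # a differing row: the target images differ,
                                              # but keep storing the remaining rows
    return same
```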
As an implementation of the first aspect, after receiving the second image, the method further comprises:
receiving a fourth image;
judging whether a third target image on the fourth image is the same as the second target image, wherein the third target image is an image in the first area;
if the third target image is different from the second target image, calculating to obtain a third image noise based on the brightness value and the third target image;
and if the third target image is the same as the second target image, stopping calculating and obtaining the third image noise based on the brightness value and the third target image.
As an implementation manner of the first aspect, the determining whether the third target image and the second target image on the fourth image are the same includes:
storing a fifth image comprising a third target image on the fourth image in the write-back memory;
acquiring pixel data of a fifth pixel set from the fifth image, wherein the fifth pixel set is a set formed by at least one pixel point in a first region on the fifth image;
storing pixel data of the fifth set of pixels in the first storage space;
acquiring pixel data of a sixth pixel set from the pixel data stored in the second storage space, wherein the storage position of the pixel data of the sixth pixel set in the second storage space is the same as the storage position of the pixel data of the fifth pixel set in the first storage space;
judging whether the pixel data of the fifth pixel set is the same as the pixel data of the sixth pixel set;
and if the pixel data of the fifth pixel set is different from the pixel data of the sixth pixel set, determining that the third target image is different from the second target image.
As an implementation manner of the first aspect, the determining whether the pixel data of the first pixel set and the pixel data of the second pixel set are the same includes:
calculating a first cyclic redundancy check value of pixel data of the first set of pixels and a second cyclic redundancy check value of pixel data of the second set of pixels;
judging whether the first cyclic redundancy check value and the second cyclic redundancy check value are equal or not;
if the first cyclic redundancy check value and the second cyclic redundancy check value are equal, the pixel data of the first pixel set is the same as the pixel data of the second pixel set;
and if the first cyclic redundancy check value is not equal to the second cyclic redundancy check value, the pixel data of the first pixel set is different from the pixel data of the second pixel set.
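A minimal sketch of this check, using the CRC-32 routine from the Python standard library (the patent does not fix a particular cyclic redundancy check; zlib.crc32 is an assumption for illustration):

```python
import zlib

def pixel_sets_equal(pixel_set_a: bytes, pixel_set_b: bytes) -> bool:
    """Treat two pixel sets as equal when their cyclic redundancy check values match."""
    return zlib.crc32(pixel_set_a) == zlib.crc32(pixel_set_b)
```

Comparing check values rather than raw pixel data keeps the comparison cheap; a CRC collision could in principle report two different rows as equal, but for this purpose the probability is negligible.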
As an implementation manner of the first aspect, the first area is an area on a display screen of the electronic device, which is located above an ambient light sensor of the electronic device.
As an implementation manner of the first aspect, the electronic device includes: a first processor, the method comprising:
in response to monitoring that the brightness of the display screen of the electronic device changes, the first processor acquires the brightness value of the display screen through the HWC module of the electronic device;
the first processor receives a first image through the HWC module;
the first processor acquires a first target image from the first image through the HWC module, wherein the first target image is an image in a first area;
the first processor sends the first target image to a noise algorithm library of the electronic device through the HWC module;
the first processor obtains first image noise through calculation of the noise algorithm library based on the brightness value and the first target image;
the first processor receives a second image through the HWC module;
the first processor determines, through the HWC module, whether a second target image on the second image is the same as the first target image, the second target image being an image within the first area;
if the second target image is the same as the first target image, the first processor stops sending the second target image to the noise algorithm library through the HWC module;
if the second target image is different from the first target image, the first processor sends the second target image to the noise algorithm library through the HWC module;
the first processor obtains second image noise through calculation of the noise algorithm library based on the brightness value and the second target image.
As an implementation of the first aspect, after the first processor receives the second image through the HWC module, the method further includes:
the first processor sending the second image to a display subsystem of the electronic device through the HWC module;
the first processor stores a third image comprising a second target image on the second image in a write-back memory of the electronic equipment through the display subsystem;
accordingly, the step in which the first processor determines, through the HWC module, whether the second target image on the second image is the same as the first target image comprises:
the first processor determines, through the HWC module, whether the second target image on the third image stored in the write-back memory is the same as the first target image.
In a second aspect, an electronic device is provided, comprising:
the brightness value obtaining unit is used for responding to the monitored change of the brightness of the display screen of the electronic equipment and obtaining the brightness value of the display screen;
an image obtaining unit for receiving a first image;
the matting unit is used for acquiring a first target image from the first image, wherein the first target image is an image in a first area;
a noise calculation unit for calculating and obtaining a first image noise based on the brightness value and the first target image;
an image obtaining unit, further configured to receive a second image;
a judging unit, configured to judge whether a second target image on the second image is the same as the first target image, where the second target image is an image in the first region;
and the noise calculation unit is used for calculating and obtaining second image noise based on the brightness value and the second target image if the second target image is different from the first target image.
In a third aspect, an electronic device is provided, which includes a first processor configured to execute a computer program stored in a memory, and implement the method of any one of the first aspect of the present application.
In a fourth aspect, a chip system is provided, which includes a processor coupled to a memory, and the processor executes a computer program stored in the memory to implement the method of any one of the first aspect of the present application.
In a fifth aspect, there is provided a computer readable storage medium storing a computer program which, when executed by one or more processors, performs the method of any one of the first aspects of the present application.
In a sixth aspect, embodiments of the present application provide a computer program product, which, when run on an apparatus, causes the apparatus to perform the method of any one of the first aspects of the present application.
It is understood that the beneficial effects of the second to sixth aspects can be seen from the description of the first aspect, and are not described herein again.
Drawings
Fig. 1 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a diagram illustrating a positional relationship between an ambient light sensor and a display screen in an electronic device according to an embodiment of the present disclosure;
FIG. 3 is a diagram illustrating another positional relationship between an ambient light sensor and a display screen in an electronic device according to an embodiment of the present application;
fig. 4 is a diagram illustrating another positional relationship between an ambient light sensor and a display screen in an electronic device according to an embodiment of the present disclosure;
FIG. 5 is a diagram illustrating a positional relationship of a target area on a display screen according to an embodiment of the present disclosure;
FIG. 6 is a diagram illustrating a positional relationship between an ambient light sensor and a target area on a display screen according to an embodiment of the present disclosure;
FIG. 7 is a diagram of a technical architecture on which the method for detecting ambient light provided by embodiments of the present application relies;
fig. 8 is a schematic diagram of an acquisition cycle of the ambient light sensor for acquiring ambient light according to an embodiment of the present application;
FIG. 9 is a schematic diagram of time nodes for image refresh and backlight adjustment during an acquisition cycle in the embodiment shown in FIG. 8;
FIG. 10 is a timing flow diagram of a method for detecting ambient light based on the technical architecture shown in FIG. 7;
fig. 11 is a timing flow chart of various modules in the AP processor provided by the embodiment of the present application in the embodiment shown in fig. 10;
FIG. 12 is a diagram of another technical architecture upon which the method for detecting ambient light provided by embodiments of the present application relies;
FIG. 13 is another timing flow diagram of a method for detecting ambient light based on the technical architecture shown in FIG. 12;
FIG. 14 is a schematic diagram of calculating integral noise based on image noise and backlight noise at each time node provided by the embodiment shown in FIG. 9;
fig. 15 is a schematic diagram of each time node for performing image refreshing and backlight adjustment in an upward direction of a time axis in an acquisition period according to the embodiment of the present application;
FIG. 16 is a schematic diagram illustrating the calculation of integral noise based on the image noise and backlight noise at each time node provided by the embodiment shown in FIG. 15;
fig. 17 is a diagram of a positional relationship between a target image and an area image in an electronic device according to an embodiment of the present application;
FIG. 18 is a diagram of a display interface of an electronic device in an application scenario provided by an embodiment of the present application;
FIG. 19 is a diagram of another display interface of an electronic device in an application scenario provided by an embodiment of the present application;
fig. 20 is a schematic diagram of a process of storing a target image in a buffer according to an embodiment of the present application;
fig. 21 is a schematic diagram illustrating a process of acquiring data of a sub-pixel from a CWB memory according to an embodiment of the present disclosure;
FIG. 22 is a schematic diagram illustrating a position of a target image in an entire frame of image according to an embodiment of the present application;
fig. 23 is a schematic process diagram for storing data of each row of sub-pixels in a target image according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that in the embodiments of the present application, "one or more" means one, two, or more than two; "and/or" describes the association relationship of associated objects, indicating that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, and B alone, where A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," "fourth," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The noise monitoring method provided by the embodiment of the application can be suitable for electronic equipment with an OLED screen. The electronic device may be a tablet computer, a wearable device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), or other electronic devices. The embodiment of the present application does not limit the specific type of the electronic device.
Fig. 1 shows a schematic structural diagram of an electronic device. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a touch sensor 180K, an ambient light sensor 180L, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. Wherein, the different processing units may be independent devices or may be integrated in one or more processors. For example, the processor 110 is configured to execute the noise monitoring method in the embodiment of the present application.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area can store an operating system and an application program required by at least one function. The storage data area may store data created during use of the electronic device 100.
In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc.
In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave.
In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160 so that electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, among others. GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio signals into analog audio signals for output and also to convert analog audio inputs into digital audio signals. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "mic," is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal into the microphone 170C by speaking close to it. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on. For example, the microphone 170C may be used to collect voice information related to embodiments of the present application.
The headphone interface 170D is used to connect a wired headphone. The headset interface 170D may be the USB interface 130, or may be a 3.5mm open mobile electronic device platform (OMTP) standard interface, a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic apparatus 100 may also calculate the touched position from the detection signal of the pressure sensor 180A.
In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 100, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
The magnetic sensor 180D includes a hall sensor. The electronic device 100 may detect the opening and closing of the flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor 180D. And then according to the opening and closing state of the leather sheath or the opening and closing state of the flip cover, the automatic unlocking of the flip cover is set.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. The method can also be used for recognizing the posture of the electronic equipment, and is applied to horizontal and vertical screen switching, pedometers and other applications.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, taking a picture of a scene, electronic device 100 may utilize range sensor 180F to range for fast focus.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light to the outside through the light emitting diode. The electronic device 100 detects infrared light reflected from nearby objects using the photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen and save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The electronic apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may employ an organic light-emitting diode (OLED). In some embodiments, the electronic device 100 may include 1 or N display screens 194, with N being a positive integer greater than 1.
The electronic device 100 may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, and the application processor, etc.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into and out of contact with the electronic apparatus 100 by being inserted into the SIM card interface 195 or being pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. The same SIM card interface 195 can be inserted with multiple cards at the same time. The types of the plurality of cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as communication and data communication. In some embodiments, the electronic device 100 employs esims, namely: an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The embodiment of the present application does not particularly limit the specific structure of the main executing body of the noise monitoring method, as long as the program recorded with the noise monitoring method of the embodiment of the present application can be run to perform the processing according to the noise monitoring method provided by the embodiment of the present application. For example, an execution subject of the noise monitoring method provided by the embodiment of the present application may be a functional module capable of calling a program and executing the program in the electronic device, or a communication device, such as a chip, applied to the electronic device.
Fig. 2 is a front position relationship diagram of a display screen and an ambient light sensor in an electronic device according to an embodiment of the present application.
As shown in fig. 2, the projection of the ambient light sensor on the display screen of the electronic device is located at the upper half of the display screen of the electronic device. When a user holds the electronic equipment, the ambient light sensor positioned on the upper half part of the electronic equipment can detect the light intensity and the color temperature of the environment on the front side (the orientation of the display screen in the electronic equipment) of the electronic equipment, and the light intensity and the color temperature are used for adjusting the brightness and the color temperature of the display screen of the electronic equipment, so that a better visual effect can be achieved. For example, the display screen may not be too bright in dark environments to cause glare, and may not be too dark in bright environments to cause poor visibility.
Fig. 3 is a side view of the display screen and the ambient light sensor in the electronic device. The display screen of the electronic device comprises, from top to bottom: a glass cover plate (light-transmitting), a display module, and a protective film; "top" and "bottom" here describe the positional relationship when the display screen of the electronic device is placed facing upward. Because the ambient light sensor needs to collect the ambient light above the display screen of the electronic device, a part of the display module in the display screen can be cut out and the ambient light sensor placed in that part; this is equivalent to placing the ambient light sensor below the glass cover plate of the display screen, on the same layer as the display module. Note that the detection direction of the ambient light sensor coincides with the orientation of the display screen in the electronic device (in fig. 3, the display screen faces upward). Obviously, this arrangement of the ambient light sensor sacrifices a portion of the display area. When a high screen-to-body ratio is pursued, this arrangement of the ambient light sensor is not applicable.
Fig. 4 shows another arrangement of the ambient light sensor provided in the embodiments of the present application, in which the ambient light sensor is moved from below the glass cover plate to below the display module. For example, the ambient light sensor is located below the Active Area (AA) of the OLED display module, the AA area being the area of the display module in which image content can be displayed. This arrangement of the ambient light sensor does not sacrifice any display area. However, the OLED screen is a self-luminous display screen: when the OLED screen displays an image, a user can see the image from above the display screen, and likewise the ambient light sensor located below the OLED screen also collects light corresponding to the image displayed on the OLED screen. Therefore, the ambient light collected by the ambient light sensor includes both the light emitted by the display screen and the actual ambient light from the outside. To obtain the external real ambient light accurately, the light emitted by the display screen needs to be obtained in addition to the ambient light collected by the ambient light sensor.
As can be understood from fig. 4, since the ambient light sensor is located below the AA area, the AA area in the display module is not sacrificed by the arrangement of the ambient light sensor. Therefore, the projection of the ambient light sensor on the display screen can be located in any area of the front of the display screen, and is not limited to the following arrangement: the projection of the ambient light sensor on the display screen is located at the top of the front of the display screen.
No matter which region of the AA area the ambient light sensor is located below, the projected area of the ambient light sensor on the display screen is much smaller than the area of the display screen itself. Therefore, it is not the entire display screen that emits light interfering with the ambient light collected by the ambient light sensor; rather, only the light emitted from the display area directly above the ambient light sensor, and from the display area within a certain range around the ambient light sensor, interferes with the ambient light collected by the ambient light sensor.
As an example, the photosensitive area of the ambient light sensor has a photosensitive angle, and the ambient light sensor may receive light within the photosensitive angle but not light outside the photosensitive angle. In fig. 5, light emitted from point a above the ambient light sensor (within the sensing angle) and light emitted from point B above a certain range around the ambient light sensor (within the sensing angle) both interfere with the ambient light collected by the ambient light sensor. While the light emitted from point C, which is farther away from the ambient light sensor in fig. 5 (outside the light sensing angle) does not interfere with the ambient light collected by the ambient light sensor. For convenience of description, a display area of the display screen that interferes with the ambient light collected by the ambient light sensor may be referred to as a target area. The location of the target area in the display screen is determined by the specific location of the ambient light sensor under the AA area. As an example, the target area may be a square area with a side length of a certain length (e.g., 80 microns, 90 microns, 100 microns) centered at a center point of the ambient light sensor. Of course, the target area may also be an area of other shape obtained by measurement that interferes with the light collected by the ambient light sensor.
As another example, fig. 6 is a schematic front view of an OLED screen of an electronic device provided in an embodiment of the present application, and as shown in fig. 6, the electronic device includes a housing, an OLED screen of the electronic device displays an interface, a corresponding area of the display interface in the display screen is an AA area, and an ambient light sensor is located behind the AA area. The center point of the target area coincides with the center point of the ambient light sensor.
It should be noted that the ambient light sensor is a discrete device; its manufacturer may differ and so may its external shape. The center point of the ambient light sensor in the embodiment of the present application is the center point of the photosensitive area in which the ambient light sensor collects ambient light. In addition, the target area shown in fig. 6 is larger than the projection area of the ambient light sensor on the OLED screen. In practical applications, the target area may also be smaller than or equal to the projection area of the ambient light sensor on the OLED screen; however, the target area is typically larger than the photosensitive area of the ambient light sensor. As mentioned above, the real ambient light from the outside is equal to the ambient light collected by the ambient light sensor minus the light emitted by the display screen, and the light emitted by the display screen is the light emitted from the target area. The light emitted from the target area is generated by the display content of the target area, and the interference of the display content with the ambient light collected by the ambient light sensor comes from two parts: the RGB pixel information of the displayed image and the brightness of the displayed image. It can therefore be understood that the interference with the ambient light collected by the ambient light sensor is determined by the RGB pixel information of the image displayed in the target area and the brightness information of the target area. As an example, if the pixel value of a pixel is (r, g, b) and the brightness is L, the normalized luminance of the pixel is: L×(r/255)^2.2, L×(g/255)^2.2, L×(b/255)^2.2.
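As an illustration of this per-pixel normalization, a minimal sketch in Python is shown below; the function name, the default gamma handling, and the example values are ours and are for illustration only, not part of the claimed method.

```python
def normalized_pixel_luminance(r, g, b, L, gamma=2.2):
    """Weight each RGB channel of a pixel by the display brightness L.

    Implements the relation above: a pixel (r, g, b) displayed at brightness L
    contributes (L*(r/255)^2.2, L*(g/255)^2.2, L*(b/255)^2.2).
    """
    return (L * (r / 255) ** gamma,
            L * (g / 255) ** gamma,
            L * (b / 255) ** gamma)

# Example: a mid-grey pixel (128, 128, 128) shown at brightness 200.
print(normalized_pixel_luminance(128, 128, 128, 200))
```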
For convenience of description, an image corresponding to the target area may be referred to as a target image, and interference of RGB pixel information of the target image and luminance information on ambient light collected by the ambient light sensor may be referred to as fusion noise. The ambient light collected by the ambient light sensor can be recorded as initial ambient light, and the external real ambient light can be recorded as target ambient light.
From the above description it can be derived: the target ambient light is equal to the initial ambient light minus the fusion noise at each instant in the time period in which the initial ambient light was collected. In the embodiment of the present application, a process of calculating the fusion noise together according to the RGB pixel information and the luminance information is referred to as a noise fusion process.
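Expressed in our own notation (which the patent does not use), the relation described above, combined with the noise fusion process, can be written as:

$$E_{\text{target}} = E_{\text{initial}} - \int_{t_{\text{start}}}^{t_{\text{end}}} N_{\text{fusion}}(t)\,\mathrm{d}t$$

where [t_start, t_end] is the time period in which the initial ambient light is collected and N_fusion(t) is the fusion noise in force at instant t; the integral term corresponds to what the later description calls the integral noise.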
When the display screen is in a display state, the RGB pixel information of the image displayed in the target area may change, and the brightness information of the displayed image may also change. The fusion noise may be changed whether the RGB pixel information of the image displayed in the target area is changed or the luminance information of the displayed image is changed. Therefore, it is necessary to calculate the fusion noise thereafter from the changed information (RGB pixel information or luminance information). If the image of the target area is unchanged for a long time, the fusion noise is calculated only when the brightness of the display screen is changed. Therefore, in order to reduce the frequency of calculating the fusion noise, the target region may be a region in which the frequency of change of the image displayed on the display screen is low. For example, a status bar area at the top of the front of the electronic device. The projection of the ambient light sensor on the display screen is located to the right in the status bar area of the display screen. Of course, the position of the ambient light sensor may also be a position to the left in the status bar area, or a position in the middle in the status bar area, and the embodiment of the present application is not limited to a specific position of the ambient light sensor.
A technical architecture corresponding to the method for obtaining the target ambient light through the initial ambient light and the content displayed on the display screen according to the embodiment of the present application will be described below with reference to fig. 7.
As shown in fig. 7, the processor in the electronic device is a multi-core processor, which at least includes: an AP (application processor) processor and an SCP (sensor co-processor) processor. The AP processor is an application processor in the electronic device, and an operating system, a user interface and an application program are all run on the AP processor. The SCP processor is a co-processor that may assist the AP processor in performing events related to images, sensors (e.g., ambient light sensors), and the like.
Only the AP processor and SCP processor are shown in fig. 7. In practical applications, the multi-core processor may also include other processors. For example, when the electronic device is a mobile phone, the multi-core processor may further include a Baseband (BP) processor that runs mobile phone radio frequency communication control software and is responsible for sending and receiving data.
The AP processor in fig. 7 only shows the content related to the embodiment of the present application, and the implementation of the embodiment of the present application needs to rely on: an Application Layer (Application), a Java Framework Layer (Framework Java), a native Framework Layer (Framework native), a Hardware Abstraction Layer (Hardware Abstraction Layer, HAL), a kernel Layer (kernel), and a Hardware Layer (Hardware).
The SCP processor in fig. 7 may be understood as a sensor control center (sensor hub) which can control the sensors and process data related to the sensors. The implementation of the embodiment of the present application needs to rely on: a cooperative application layer (Hub APK), a cooperative framework layer (Hub FWK), a cooperative driving layer (Hub DRV) and a cooperative hardware layer (Hub hardware).
Various applications exist in the application layer of the AP processor, and application a and application B are shown in fig. 7. Taking application a as an example, after the user starts application a, the display screen will display the interface of application a. Specifically, the application a sends the display parameters (for example, the memory address, the color, and the like of the interface to be displayed) of the interface to be displayed by the application a to the display engine service.
And the display engine service in the AP processor sends the received display parameters of the interface to be displayed to a surfaceFlinger of a Framework layer (Framework native) of the AP processor.
The SurfaceFlinger in the native framework layer (Framework native) of the AP processor is responsible for controlling the fusion of interfaces (surfaces). As an example, the overlap region of at least two overlapping interfaces is calculated. The interfaces here may be the interfaces presented by the status bar, the system bar, the application itself (the interface to be displayed by application A), the wallpaper, the background, etc. Therefore, the SurfaceFlinger obtains not only the display parameters of the interface to be displayed by application A, but also the display parameters of the other interfaces.
The hardware abstraction layer of the AP processor is provided with the HWC (hardware composer HAL). The HWC is the module in the system that composes interfaces for display and provides hardware support for the SurfaceFlinger service. In step A1, the SurfaceFlinger sends the display parameters (e.g., memory address, color, etc.) of each interface to the HWC through interfaces (e.g., setLayerBuffer, setLayerColor, etc.) for interface fusion.
Generally, in image synthesis (e.g., when an electronic device displays an image, the status bar, the system bar, the application itself, and the wallpaper background need to be synthesized), the HWC obtains a synthesized image according to display parameters of each interface through hardware (e.g., a hardware synthesizer) underlying the HWC. The HWC in the hardware abstraction layer of the AP processor sends the underlying hardware-synthesized image to the OLED driver, see step a 2.
The OLED drive of the kernel layer of the AP processor gives the synthesized image to the display subsystem (DSS) of the hardware layer of the AP processor, see step A3. The display subsystem (DSS) in the hardware layer of the AP processor may perform secondary processing (e.g., HDR10 processing for enhancing image quality) on the combined image, and may display the secondary processed image after the secondary processing. In practical application, the secondary treatment may not be performed. Taking the example of not performing the secondary processing, the display subsystem of the AP processor hardware layer sends the synthesized image to the OLED screen for display.
If the starting of the application a is taken as an example, the synthesized image displayed on the OLED screen is an interface synthesized by the interface to be displayed by the application a and the interface corresponding to the status bar.
In this manner, the OLED screen completes one image refresh and display.
In the embodiment of the present application, before the image after the secondary processing (or the synthesized image) is sent for display, the display subsystem (DSS) may be controlled to store the whole frame image (or an image within the whole frame image that is larger than the target area, or the image corresponding to the target area in the whole frame image) in a memory in the kernel layer of the AP processor. Since this process is a concurrent write-back of image frame data, the memory may be recorded as the Concurrent Write-Back (CWB) memory, see step A4.
In the embodiment of the present application, for example, the display subsystem stores the whole frame image in the CWB memory of the AP processor, and after the display subsystem successfully stores the whole frame image in the CWB memory, the display subsystem may send a signal indicating that the storage is successful to the HWC. The whole frame image corresponding to the image stored in the CWB memory by the display subsystem may be recorded as an image to be refreshed (the image to be refreshed may also be understood as an image after the current refresh).
The AP processor may also be configured to allow the HWC to access the CWB memory. The HWC may obtain the target image from the CWB memory after receiving a store successful signal sent by the display subsystem, see step a 5.
It should be noted that, regardless of whether the image of the whole frame image or the image of the partial region in the whole frame image is stored in the CWB memory, the HWC can obtain the target image from the CWB memory. The process of the HWC obtaining the target image from the CWB memory may be denoted as HWC matting from the CWB memory.
For convenience of description, the image stored by the display subsystem in the CWB memory may also be referred to as the region image. As described above, the region image may be the entire frame image, may be the target image, or may be an image whose range lies between the range of the target image and the range of the whole frame image. The range of an image here can be understood as the size delimited by its length and width; this applies to the target image as well as to the whole frame image.
As an example, the size of the whole frame image is X1 (pixels) × Y1 (pixels), the size of the target image is X2 (pixels) × Y2 (pixels), and the size of the region image is X3 (pixels) × Y3 (pixels). X3 satisfies X1 ≥ X3 ≥ X2, and Y3 satisfies Y1 ≥ Y3 ≥ Y2.
Of course, when X3 = X1 and Y3 = Y1, the region image is the whole frame image; when X3 = X2 and Y3 = Y2, the region image is the target image.
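As a minimal sketch of these size relationships, the following hypothetical helper cuts an X2-by-Y2 target image out of a larger region image; the offset arguments and the function name are assumptions made only for illustration.

```python
def crop_target(region_image, x_off, y_off, x2, y2):
    """Cut an X2-by-Y2 target image out of a region image.

    region_image is a list of pixel rows of size X3 x Y3, where
    X1 >= X3 >= X2 and Y1 >= Y3 >= Y2 as stated above; (x_off, y_off) is the
    assumed position of the target area inside the region image.
    """
    return [row[x_off:x_off + x2] for row in region_image[y_off:y_off + y2]]

# Example: a 4x4 region image from which a 2x2 target image is taken.
region = [[(i, j, 0) for i in range(4)] for j in range(4)]
print(crop_target(region, 1, 1, 2, 2))
```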
Continuing to take application a as an example, when application a has a brightness adjustment requirement due to switching of the interface, application a sends the brightness to be adjusted to the display engine service.
And the display engine service in the AP processor sends the brightness to be adjusted to the kernel node in the kernel layer of the AP processor so as to enable related hardware to adjust the brightness of the OLED screen according to the brightness to be adjusted stored in the kernel node.
In this manner, the OLED screen completes one brightness adjustment.
In the embodiment of the present application, the HWC may be further configured to obtain brightness to be adjusted from the kernel node, and the brightness to be adjusted may also be recorded as brightness after this adjustment, which is specifically referred to in step a 5'.
In a specific implementation, the HWC may monitor whether data stored in the kernel node changes based on a uevent mechanism, and obtain currently stored data, that is, a brightness value to be adjusted (the brightness value to be adjusted is used to adjust the brightness of the display screen, and therefore, may also be recorded as the brightness value of the display screen) from the kernel node after monitoring that the data in the kernel node changes. After obtaining the target image or the brightness information to be adjusted, the HWC may send the target image or the brightness information to be adjusted to a noise algorithm library of a hardware abstraction layer of the AP processor. See step a 6. The noise algorithm library can calculate and obtain the fusion noise at the refreshing time of the target image after the target image is obtained every time. After each brightness is obtained, the fusion noise at the brightness adjusting moment is calculated and obtained. And the noise algorithm library stores the fusion noise obtained by calculation in a noise memory of the noise algorithm library.
In practical applications, after the HWC obtains the target image, the HWC may store the target image, and the HWC may send the storage address of the target image to the noise algorithm library, and the noise algorithm library may buffer the target image of a frame at the latest time in an address-recording manner. After the HWC obtains the brightness to be adjusted, the HWC may send the brightness to be adjusted to a noise algorithm library, which may buffer a brightness at the latest moment. For convenience of description, the subsequent embodiments of the present application are described in terms of the HWC sending the target image to the noise algorithm library, and in practical applications, the HWC may obtain the target image and then store the target image, and send the storage address of the target image to the noise algorithm library.
As an example, after receiving the storage address of the first frame target image, the noise algorithm library buffers the storage address of the first frame target image. And each time a new storage address of the target image is received, the new storage address of the target image is used as the latest storage address of the cached target image. Correspondingly, the noise algorithm library buffers the first brightness after receiving the first brightness, and the new brightness is taken as the latest brightness buffered every time a new brightness is received. In the embodiment of the application, the noise algorithm library caches the acquired target image and brightness value in the data storage library. The target image and the luminance value stored in the data store may be recorded as screen data, i.e. the screen data stored in the data store includes: a target image and a luminance value.
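A minimal sketch of the screen data store described above, assuming it simply overwrites the most recent target-image address and brightness value on every update; the class and attribute names are ours.

```python
class ScreenDataStore:
    """Caches the latest screen data used for noise fusion."""

    def __init__(self):
        self.latest_image_addr = None   # storage address of the newest target image
        self.latest_brightness = None   # newest display brightness value

    def update_image_addr(self, image_addr):
        self.latest_image_addr = image_addr   # replaces the previously cached address

    def update_brightness(self, brightness):
        self.latest_brightness = brightness   # replaces the previously cached brightness
```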
In addition, in order to describe the transfer relationship of parameters such as the target image and the brightness to be adjusted, the embodiment of the present application is described as the HWC sending parameters such as the target image and the brightness to be adjusted to the noise algorithm library. In practice, the relationship between the HWC and the noise algorithm library is that the HWC calls the noise algorithm library: when the HWC calls the noise algorithm library, the HWC passes parameters such as the target image (the storage address of the target image) and the brightness to be adjusted as arguments of a calculation model in the noise algorithm library. Other parameters are not enumerated here.
Because brightness adjustment and image refreshing are two completely independent processes, an image may be refreshed at a certain time, and brightness remains unchanged, when fusion noise at the time is calculated, a target image corresponding to the refreshed image and current brightness (a brightness value stored latest before the time indicated by the timestamp of the target image in brightness values stored in the noise algorithm library) are adopted. For convenience of description, the fusion noise at the image refresh timing calculated due to the image refresh may be written as the image noise at the image refresh timing. Similarly, if the image is not refreshed at a certain time and the brightness is adjusted, the adjusted brightness and the current target image (the target image stored in the noise algorithm library and newly stored before the time indicated by the timestamp of the brightness value) are used for calculating the fusion noise at the certain time. For convenience of description, the fusion noise at the luminance adjustment timing calculated due to the luminance adjustment may be regarded as the backlight noise at the luminance adjustment timing.
The target image and the brightness sent by the HWC to the noise algorithm library are both time-stamped, and correspondingly, the image noise and the backlight noise obtained by the computation of the noise algorithm library are also both time-stamped. The timestamp of the image noise is the same as the timestamp of the target image, and the timestamp of the backlight noise is the same as the timestamp of the brightness to be adjusted. The timestamp of the image noise should be the image refresh moment in the strict sense. In practical applications, another time node close to the image refresh time may be used as the image refresh time, for example, the start time (or the end time, or any time between the start time and the end time) of the HWC performing matting to obtain the target image from the CWB memory may be used as the image refresh time. The time stamp of the backlight noise should be strictly speaking the backlight adjustment instant. In practical applications, another time node close to the backlight adjustment time may also be used as the backlight adjustment time, for example, the start time (or the end time, or any time between the start time and the end time) when the HWC executes to obtain the brightness to be adjusted from the kernel node is used as the brightness adjustment time. The timestamp of the image noise and the timestamp of the backlight noise facilitate denoising of the initial ambient light collected by the subsequent ambient light sensor and the ambient light sensor over a time span to obtain the target ambient light. The noise algorithm library stores image noise and backlight noise in a noise memory, stores a timestamp of the image noise when the noise algorithm library stores the image noise, and stores a timestamp of the backlight noise when the noise algorithm library stores the backlight noise.
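The event handling described above could look roughly as follows: an image refresh pairs the new target image with the most recently cached brightness (image noise), a brightness change pairs the new brightness with the most recently cached target image (backlight noise), and each result is stored with the timestamp of the triggering event. The `fuse` placeholder stands in for the first algorithm model; all names and the summation inside it are assumptions for illustration.

```python
def fuse(target_image, brightness, gamma=2.2):
    """Illustrative stand-in for the first algorithm model: fusion noise
    from the target image's RGB pixels and the display brightness."""
    if target_image is None or brightness is None:
        return 0.0
    return sum(brightness * (c / 255) ** gamma
               for pixel in target_image for c in pixel)

latest = {"image": None, "brightness": None}   # most recently cached screen data
noise_memory = []                              # (timestamp, fusion_noise) entries

def on_image_refresh(target_image, timestamp):
    """Image noise: the new target image fused with the cached brightness."""
    latest["image"] = target_image
    noise_memory.append((timestamp, fuse(target_image, latest["brightness"])))

def on_brightness_change(brightness, timestamp):
    """Backlight noise: the new brightness fused with the cached target image."""
    latest["brightness"] = brightness
    noise_memory.append((timestamp, fuse(latest["image"], brightness)))
```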
An Ambient Light Sensor (ALS) in the co-hardware layer of the SCP processor collects initial ambient light at a certain collection period after start-up (typically, after the electronic device is powered on, the ambient light sensor is started up). The ambient light sensor of the SCP processor transmits the initial ambient light information to the ambient light sensor driver (ALS DRV) of the co-driver layer (Hub DRV) layer of the SCP processor, see step E2.
The initial ambient light information transmitted by the SCP processor to the AP processor includes a first value, a first time and a second time, where the first value can be understood as a raw value of the initial ambient light, the first time is an integration start time at which the ambient light sensor acquires the first value, and the second time is an integration end time at which the ambient light sensor acquires the first value.
And in a cooperative driving (Hub DRV) layer of an SCP processor, an ambient light sensor driving (ALS DRV) carries out preprocessing on initial ambient light information to obtain raw values on four channels of the RGBC. The co-driver layer of the SCP processor transmits raw values on the RGBC four channels to the ambient light sensor application of the co-application layer of the SCP processor, see step E3.
The ambient light sensor of the co-application layer of the SCP processor sends raw values on the RGBC four channels and other relevant data (e.g. start time and end time of each time the ambient light sensor collects initial ambient light) to the HWC of the AP processor via a first inter-core communication (communication between the ambient light sensor application of the SCP processor and the HWC of the AP processor), see step E4.
After the HWC in the AP processor obtains the initial ambient light data reported by the SCP processor, the HWC in the AP processor may send the initial ambient light data to the noise algorithm library. See step a 6.
As described above, the noise algorithm library calculates the image noise at each image refresh time and the backlight noise at each brightness adjustment time, and stores the calculated image noise and backlight noise in the noise memory of the noise algorithm library. After the acquisition start time and the acquisition end time of the initial ambient light are obtained, the integral noise between the acquisition start time and the acquisition end time can be obtained from the image noise and the backlight noise stored in the noise memory. The noise algorithm library then deducts the integral noise between the acquisition start time and the acquisition end time from the initial ambient light to obtain the target ambient light.
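A simplified sketch of this denoising step, under the assumption that the fusion noise is piecewise constant between noise events: the entry in force at the integration start is the newest entry at or before it, each later entry inside the window starts a new segment, and the integral noise is the time-weighted sum of those segments. The piecewise-constant model and all names are ours, not necessarily the patent's exact computation.

```python
def integral_noise(noise_memory, t_start, t_end):
    """Integrate the fusion noise over one integration window [t_start, t_end].

    noise_memory is a list of (timestamp, fusion_noise) entries sorted by time.
    """
    current = 0.0
    for ts, noise in noise_memory:          # noise level in force at t_start
        if ts <= t_start:
            current = noise
    total, prev_t = 0.0, t_start
    for ts, noise in noise_memory:          # segments started inside the window
        if t_start < ts < t_end:
            total += current * (ts - prev_t)
            current, prev_t = noise, ts
    total += current * (t_end - prev_t)     # last segment up to t_end
    return total

def target_ambient_light(initial_raw, noise_memory, t_start, t_end):
    """Target ambient light = initial ambient light minus the integral noise."""
    return initial_raw - integral_noise(noise_memory, t_start, t_end)
```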
As can be understood from the above description of the noise algorithm library, the noise calculation library includes a plurality of calculation models, for example, a first algorithm model, for obtaining the fusion noise according to the target image and the luminance calculation. And the second algorithm model is used for obtaining integral noise between the acquisition starting time and the acquisition ending time of the initial environment light according to the fusion noise at each moment. And the third algorithm model is used for obtaining the target ambient light according to the initial ambient light and the integral noise. In practical applications, the noise algorithm library may further include other calculation models, for example, in a process of obtaining the target ambient light based on the target image, the brightness, and the initial ambient light, if the raw values on the four channels of the initial ambient light are filtered, there is a model for filtering the raw values on the four channels of the initial ambient light, which is not illustrated in the embodiment of the present application.
The inputs to the library of noise algorithms include: the target image and brightness acquired by the HWC at various times, and the initial ambient light correlation data acquired by the HWC from the SCP processor. The output of the noise algorithm library is: and the raw value of the target ambient light can be recorded as a second value. In the embodiment of the present application, the process of sending the target image, the brightness, and the initial ambient light from the HWC to the noise algorithm library is denoted as step a 6.
The noise algorithm library also needs to return the target ambient light data to the HWC after obtaining the target ambient light; this process is denoted as step A7. In practical applications, the output of the noise algorithm library is the raw values on the four channels of the target ambient light.
The HWC in the AP processor sends the raw values on the four channels of the target ambient light returned by the noise algorithm library to the ambient light sensor application in the cooperative application layer of the SCP processor via first inter-core communication, see step A8.
After the ambient light sensor application of the co-driver layer of the SCP processor obtains the raw values on the target ambient light four channels, the raw values on the target ambient light four channels are stored in the ambient light memory of the co-driver layer. See step E5.
The co-driver layer of the SCP processor is provided with a calculation module that obtains from memory the raw values on the target ambient light four channels, see step E6. When the integration of each time is finished, the ambient light sensor generates an integration interrupt signal, the ambient light sensor sends the integration interrupt signal to the ambient light sensor driver, the ambient light sensor driver calls the calculation module, and the calculation module is triggered to obtain raw values on four channels of the target ambient light from the storage.
The ambient light sensor drive triggers the calculation module to acquire the raw value of the target ambient light after the integration is finished, so that the raw value of the target ambient light in the previous integration period is acquired at the moment.
Taking the embodiment shown in fig. 8 as an example, after the integration ends at time t1, the ambient light sensor obtains the initial ambient light from time t0 to time t1. The SCP processor sends the initial ambient light from time t0 to time t1 to the AP processor, and the AP processor calculates the raw value of the target ambient light from time t0 to time t1. The AP processor sends the raw value of the target ambient light from time t0 to time t1 to the SCP processor, and the SCP processor stores the raw value of the target ambient light from time t0 to time t1 in the memory of the SCP processor.
After the integration ends at time t3, the ambient light sensor obtains the initial ambient light from time t2 to time t3, and the SCP processor sends the initial ambient light from time t2 to time t3 to the AP processor. Each time the integration of the ambient light sensor ends, an integration interrupt signal is generated; the ambient light sensor sends the integration interrupt signal to the ambient light sensor driver, the ambient light sensor driver calls the calculation module, and the calculation module is triggered to obtain the currently stored raw value of the target ambient light from time t0 to time t1 from the memory. Since this happens after time t3, the calculation module calculates, after time t3, the lux value of the target ambient light from the obtained raw value of the target ambient light from time t0 to time t1. That is, the SCP processor takes the lux value of the target ambient light calculated during the period T2 as the lux value of the real ambient light of the period T1.
As previously described, the ambient light sensor in the SCP processor generates an integration interrupt signal after the integration ends (time t3) and gives it to the ambient light sensor driver, and after time t3 the initial ambient light of the period T2 is sent to the AP processor; after the AP processor calculates the target ambient light, it sends the target ambient light to the SCP processor, and the SCP processor stores the target ambient light of the period T2 in the memory. If the SCP processor were to calculate the lux value using the raw value of the target ambient light of the period T2, it would have to wait, from the moment the ambient light sensor driver receives the integration interrupt signal, until the AP processor had written the target ambient light into the memory of the SCP processor, before the ambient light sensor driver could invoke the calculation module to retrieve the raw value of the target ambient light of the period T2 from the memory. The waiting time includes at least the time for the SCP processor to transmit the initial ambient light to the AP processor, the time for the AP processor to calculate the target ambient light based on the initial ambient light and the other related data, and the time for the AP processor to transmit the target ambient light to the memory in the SCP processor; this time is relatively long and is not fixed. Therefore, the ambient light sensor driver in the SCP processor may be set so that, after receiving the integration interrupt signal of the next acquisition period, it calls the calculation module to take the raw value of the target ambient light of the previous period from the memory, and the lux value is thus calculated from the raw value of the target ambient light of the previous period. The lux value of the target ambient light may be recorded as a third value; the third value and the second value are the lux value and the raw value of the same target ambient light.
Taking the acquisition periods shown in fig. 8 as an example, the raw value of the initial ambient light acquired in the acquisition period T1 is the first value. The raw value of the target ambient light corresponding to the acquisition period T1, obtained from the raw value of the initial ambient light acquired in the acquisition period T1, is the second value. The lux value of the target ambient light corresponding to the acquisition period T1, obtained from the raw value of the target ambient light corresponding to the acquisition period T1, is the third value. The raw value of the initial ambient light acquired in the acquisition period T2 may be recorded as the fourth value; the fourth value is the initial ambient light acquired in the acquisition period following the acquisition period corresponding to the first value (or the acquisition period corresponding to the second value, or the acquisition period corresponding to the third value).
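A sketch of this one-period offset as it might look on the SCP side: when the integration interrupt for one acquisition period arrives, the driver converts the raw value already stored for the previous period into a lux value. The period indexing, the `raw_to_lux` conversion factor, and the function names are assumptions for illustration only.

```python
stored_raw = {}  # acquisition period index -> raw value of the target ambient light

def on_integration_interrupt(period_index, raw_to_lux=lambda raw: raw * 0.25):
    """Called when the integration of acquisition period `period_index` ends.

    Because the AP processor is still computing the target ambient light for the
    period that just ended, the lux value is derived from the raw value of the
    previous period that is already stored in memory.
    """
    previous = period_index - 1
    if previous in stored_raw:
        return raw_to_lux(stored_raw[previous])
    return None  # nothing stored yet for the very first period

# Hypothetical usage: the raw value for period T1 arrives, then the interrupt
# ending period T2 triggers the lux computation for T1.
stored_raw[1] = 400.0
print(on_integration_interrupt(2))
```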
And a calculation module in a co-drive layer of the SCP processor obtains the lux value of the target ambient light according to the raw values on the four channels of the target ambient light. The calculation module in the SCP processor sends the lux value of the target ambient light obtained by calculation to the ambient light sensor application of the co-application layer through the interface of the co-framework layer, see steps E7 and E8.
The ambient light sensor application in the SCP processor co-application layer transmits the lux value of the target ambient light to the light service (light service) in the native framework layer in the AP processor through the second inter-core communication (communication of the SCP processor to the light service of the AP processor), see step E9.
A light service (light service) may send the lux value of the target ambient light to the display engine service. The display engine service may send the lux value of the target ambient light to the upper layer to facilitate an application in the application layer to determine whether to adjust the brightness. The display engine service can also send the lux value of the target ambient light to the kernel node so as to enable related hardware to adjust the brightness of the display screen according to the lux value of the target ambient light stored by the kernel node.
After describing the technical architecture on which the method of obtaining the target ambient light depends, the process of obtaining the target ambient light from the target image, the brightness, and the initial ambient light collected by the ambient light sensor will be described from the perspective of the collection period of the ambient light sensor.
As can be understood from the above examples, the target image and the brightness to be adjusted are both obtained by the HWC, and therefore, there is a sequential order in the processes of obtaining the target image and obtaining the brightness to be adjusted by the HWC. After the HWC acquires the target image or the brightness to be adjusted, the target image or the brightness to be adjusted is sent to the noise algorithm library, and the process that the HWC sends the target image or the brightness to be adjusted to the noise algorithm library also has a sequence. Correspondingly, the time when the noise algorithm library receives the target image and the brightness to be adjusted also has a sequence. However, even if there is a chronological order in the time when the noise algorithm library receives the target image and the brightness to be adjusted, the timestamps of the target image and the brightness to be adjusted may be the same since the HWC may be within the same time metric level when acquiring the target image and the brightness to be adjusted. As an example, within the same millisecond (5 th millisecond), the HWC performs the acquisition of the brightness to be adjusted first and then performs the acquisition of the target image. Although there is a precedence in the execution of the HWC, the time stamps of the target image and the brightness to be adjusted are both 5 th msec.
Referring to fig. 8, the ambient light sensor collects ambient light periodically: from t0 to t2 (acquisition period T1), from t2 to t4 (acquisition period T2), and from t4 to t6 (acquisition period T3) are each one acquisition period. During the acquisition period T1, the time in which the ambient light sensor actually performs acquisition is t0 to t1, and from t1 to t2 the ambient light sensor may be in a sleep state. The embodiment of the present application is described taking as an example that the acquisition period of the ambient light is fixed (i.e., the values of T1, T2, and T3 are the same) and the duration of the integration period is fixed.
As an example, 350 ms (t2 − t0) may be taken as one acquisition period. If the actual acquisition time of the ambient light sensor in one acquisition period is 50 ms (t1 − t0), then within one acquisition period the ambient light sensor is in a sleep state for 300 ms (t2 − t1). The above values of 350 ms, 50 ms, and 300 ms are only examples and are not intended to be limiting.
For ease of description, the time period in which the ambient light sensor actually collects (e.g., t0 to t1) may be recorded as the integration period, and the time period in which the ambient light sensor does not collect (e.g., t1 to t2) may be recorded as the non-integration period.
The image displayed on the display screen of the electronic device is refreshed at a certain frequency. Taking 60 Hz as an example, the display screen of the electronic device is refreshed 60 times per second, i.e., the image is refreshed every 16.7 ms. Image refreshes therefore occur during the acquisition period of the ambient light sensor while the display screen of the electronic device displays images. When the image displayed on the display screen is refreshed, the AP processor performs steps A1 to A6 (sending the target image) in the technical architecture shown in fig. 7. The HWC in the AP processor controls the CWB write-back continuously from time t0, i.e., the above steps are repeated whenever there is an image refresh.
Note that the present embodiment takes a refresh rate of 60 Hz as an example; in practice the refresh rate may be 120 Hz or another value. In the embodiment of the present application, steps A1 to A6 (sending the target image) need to be executed repeatedly for every refreshed frame; in practical applications, steps A1 to A6 (sending the target image) may also be executed every other frame (or every two frames, etc.).
The brightness adjustment does not have a fixed periodicity, so the brightness adjustment may also occur during the acquisition period of the ambient light sensor. When the brightness is adjusted, the HWC also performs steps a 5' to a6 (sending the brightness to be adjusted) in the technical architecture shown in fig. 7.
After each integration of the ambient light sensor ends (i.e., after t1, after t3, after t5, ...), the SCP processor reports the data of the initial ambient light collected in the current integration (for example, the raw values on the four channels of the initial ambient light, and the integration start time and integration end time of the current integration). The HWC of the AP processor then sends the data related to the initial ambient light to the noise algorithm library, and the target ambient light is obtained through calculation by the noise algorithm library.
Referring to fig. 9 and taking one acquisition period as an example, time t01 (the same as time t0), time t03, time t04, and time t11 are image refresh times, and time t02 and time t12 are brightness adjustment times. Thus, the AP processor can calculate in real time the image noise at time t01, the backlight noise at time t02, the image noise at time t03, the image noise at time t04, the image noise at time t11, and the backlight noise at time t12. At the end of the current integration (time t1), the noise memory of the AP processor stores: the image noise at time t01, the backlight noise at time t02, the image noise at time t03, and the image noise at time t04.
At the end of the current integration (time t1), the ambient light sensor obtains the initial ambient light of the current integration and the current integration time period. The SCP processor reports the data of the initial ambient light to the AP processor, and the noise algorithm library in the AP processor obtains, from the noise memory according to the start time and end time of the current integration period, the image noise at time t01, the backlight noise at time t02, the image noise at time t03, and the image noise at time t04. The noise algorithm library then calculates the target ambient light from the initial ambient light collected in the integration period and the image noise and backlight noise that affect this integration period.
During the non-integration period (t1 to t2), since the HWC always controls the CWB write-back, the HWC also performs matting on the image refreshed at time t11 to obtain the target image, and the noise algorithm library also calculates the image noise at time t11. The brightness changes at time t12 in the non-integration period, and the noise algorithm library also calculates the backlight noise at time t12. However, when the target ambient light is calculated, the fusion noise that is needed is the fusion noise that interferes with the initial ambient light obtained in the current integration period; therefore, the target ambient light of the current integration period can be obtained without the image noise at time t11 and the backlight noise at time t12. In practical applications, after the noise algorithm library calculates the image noise at time t11 and the backlight noise at time t12, it still needs to store the image noise at time t11 and the backlight noise at time t12 in the noise memory.
The above example describes the process of acquiring the target ambient light from the perspective of the technical architecture based on fig. 7 and from the perspective of the acquisition period of the ambient light sensor based on fig. 9, respectively. A time sequence process diagram for acquiring the target ambient light provided by the embodiment shown in fig. 10 will be described below with reference to the technical architecture shown in fig. 7 and one acquisition cycle of the ambient light sensor shown in fig. 9.
As can be understood from the above description, the process of triggering the AP processor to calculate the image noise by image refresh, the process of triggering the AP processor to calculate the backlight noise by brightness adjustment, and the process of controlling the underlying hardware ambient light sensor to collect the initial ambient light by the SCP processor are performed independently, and there is no chronological order. And the noise calculation library of the AP processor processes the target image, the brightness and the initial ambient light obtained in the three independent processes to obtain the target ambient light.
The same reference numbers for steps in the embodiment of fig. 10 and steps in the technical architecture of fig. 7 indicate that the same steps are performed. In order to avoid repetitive description, the contents detailed in the embodiment shown in fig. 7 will be briefly described in the embodiment shown in fig. 10.
In conjunction with fig. 9, the image is refreshed starting from time t0. At the same time, the ambient light sensor enters an integration period and begins to collect the initial ambient light.
Accordingly, in fig. 10, in step E1, the ambient light sensor in the co-hardware layer of the SCP processor enters an integration period and starts collecting the initial ambient light from time t0 (t01).
In step A1, the image is refreshed at time t0 (t01), and the SurfaceFlinger in the native framework layer of the AP processor sends the display parameters of each interface to the HWC in the hardware abstraction layer of the AP processor. The HWC sends the received display parameters of each layer interface sent by the SurfaceFlinger to the hardware underlying the HWC, and the hardware underlying the HWC obtains the image synthesized from the interfaces of each layer according to their display parameters. The hardware underlying the HWC returns the synthesized image to the HWC.
In step A2, the HWC in the hardware abstraction layer of the AP processor sends the resultant image to the OLED drive in the kernel layer of the AP processor. Step a3, the OLED driver in the kernel layer of the AP processor sends the synthesized image to the display subsystem of the hardware layer of the AP processor.
Step a4, the display subsystem in the hardware layer of the AP processor stores the image before display in the CWB memory in the kernel layer of the AP processor.
In the embodiment of the present application, the HWC waits for a successful store signal from the display subsystem after sending the synthesized image to the OLED driver.
The display subsystem will send a signal to the HWC that the image was successfully stored in the CWB memory before being sent for display. After receiving the signal that the display subsystem is successfully stored, the HWC performs cutout operation on the image before display stored in the CWB memory in the kernel layer to obtain a target image.
In step a5, the HWC in the hardware abstraction layer of the AP processor abstracts the target image from the pre-rendering image stored in the CWB memory in the kernel layer.
In step A6, after the HWC in the hardware abstraction layer of the AP processor obtains the target image, it sends the target image to the noise algorithm library in the same layer; after receiving the target image, the noise algorithm library calculates the image noise at time t01 according to the target image and the cached current brightness information. During the execution of steps A1 through A6, the ambient light sensor in the co-hardware layer of the SCP processor is still in the integration process of the current acquisition period.
In conjunction with fig. 9, at time t02 the ambient light sensor is still in the integration period, collecting the initial ambient light. At time t02 the brightness of the display screen changes, triggering the execution of step B1 in fig. 10.
In fig. 10, in step B1 (step A5' in the architecture shown in fig. 7), the HWC of the hardware abstraction layer of the AP processor obtains the brightness information of time t02 from a kernel node in the kernel layer of the AP processor.
In step B2 (step A6), the HWC of the hardware abstraction layer of the AP processor sends the brightness information of time t02 to the noise algorithm library, and the backlight noise at time t02 is calculated according to the brightness information of time t02 and the cached currently displayed target image.
During the execution of steps B1 through B2, the ambient light sensor in the co-hardware layer of the SCP processor is always in the integration process for one acquisition period.
After step B2, the noise memory of the noise algorithm library stores the image noise at time t01 and the backlight noise at time t02.
In conjunction with fig. 9, at time t03 the ambient light sensor is still in the integration period, collecting the initial ambient light. At time t03 the image is refreshed once.
In fig. 10, since the image is refreshed, the steps C1 to C6 are continuously performed, and the steps C1 to C6 can refer to the descriptions in a1 to a6, and are not repeated herein.
During the execution of steps C1 through C6, the ambient light sensor in the co-hardware layer of the SCP processor is still in the integration process for one acquisition period.
After step C6, the noise memory of the noise algorithm library stores the image noise at time t01, the backlight noise at time t02, and the image noise at time t03.
Referring to fig. 9, at time t04 the ambient light sensor is still in the integration period, collecting the initial ambient light. At time t04 the image is refreshed.
In fig. 10, since the image is refreshed, the steps D1 to D6 are continued, and the steps D1 to D6 refer to the descriptions in a1 to a6, which are not repeated herein.
During the execution of steps D1 through D6, the ambient light sensor in the co-hardware layer of the SCP processor is still in the integration process for one acquisition period.
After step D6, the noise memory of the noise algorithm library stores the image noise at time t01, the backlight noise at time t02, the image noise at time t03, and the image noise at time t04.
In conjunction with fig. 9, at time t1 the current integration of the ambient light sensor ends. When the integration of the ambient light sensor ends (time t1), the ambient light sensor obtains the initial ambient light, and in fig. 10 the SCP processor starts executing step E2, step E3, and step E4, transmitting the data related to the initial ambient light (the raw values on the four RGBC channels, the integration start time, and the integration end time) to the HWC of the hardware abstraction layer of the AP processor.
In conjunction with fig. 9, during the non-integration period the image may also be refreshed (e.g., the image refresh at time t11) and the brightness may also change (e.g., the brightness change at time t12). Therefore, in the non-integration period, steps F1 to F6 still exist in fig. 10 (steps F1 to F5 are omitted from fig. 10; see steps A1 to A5), so that the image noise at time t11 is stored in the noise memory of the noise algorithm library. In the non-integration period, steps G1 to G2 also still exist (step G1 is omitted; see step B1), so that the backlight noise at time t12 is stored in the noise memory of the noise algorithm library.
At step a 6', the HWC in the hardware abstraction layer of the AP processor sends the initial ambient light data to the noise algorithm library. And the noise algorithm library calculates and obtains the target ambient light according to the data of the initial ambient light and the image noise and the backlight noise which interfere with the initial ambient light.
As can be understood from fig. 10, the integration start time and the integration end time of the ambient light sensor are controlled by the corresponding clocks of the ambient light sensor; the process of calculating the image noise by the AP processor is controlled by an image refreshing clock; the process of the AP processor calculating the backlight noise is controlled by the timing of the backlight adjustment. Therefore, the execution of step a1 (or, step C1, step D1, step F1) is all triggered by an image refresh. The execution of step B1 (or step G1) is triggered by a brightness adjustment. The integration start time and the integration end time of the ambient light sensor are completely performed according to a preset acquisition period and each integration duration. Thus, the execution of step E2 is triggered by the event that the ambient light sensor integration ends.
From the triggering event perspective, these three processes are completely independent. However, the results obtained by these three processes (image noise, backlight noise, and initial ambient light) are correlated by the denoising process after the ambient light sensor integration period ends. The initial ambient light fused in the denoising process is the initial ambient light collected by the ambient light sensor in the current collection period, and the image noise and the backlight noise removed in the denoising process are image noise and backlight noise which can cause interference on the initial ambient light collected in the current collection period.
By analyzing the structure of the under-screen ambient light arrangement, the embodiment of the present application finds that the factors disturbing the ambient light collected by the ambient light sensor include the display content of the display area directly above the photosensitive area of the ambient light sensor and of the display area directly above a certain range around the photosensitive area. The display content is divided into two parts: the RGB pixel information and the brightness information of the displayed image. Therefore, the noise algorithm library in the embodiment of the present application obtains the fusion noise by fusing the RGB pixel information and the brightness information of the target image, and then obtains the integral noise of the integration period of the initial ambient light from the fusion noise. The target ambient light is obtained by removing, from the initial ambient light obtained in the integration period of the ambient light sensor, the integral noise that interferes with it. Because the interfering part is removed, an accurate target ambient light can be obtained, and the method has strong universality.
In addition, since the AP processor of the electronic device can obtain the target image and the brightness information, the AP processor accordingly obtains the image noise and the backlight noise, while the SCP processor obtains the initial ambient light. The SCP processor can therefore send the initial ambient light to the AP processor, and the AP processor processes the initial ambient light and the fusion noise to obtain the target ambient light. This avoids the problem of the AP processor frequently sending the target image (or image noise) and brightness information (or backlight noise) to the SCP processor, which would make inter-core communication too frequent and power consumption high.
Furthermore, the DSS in the AP processor may store the image before display (the image to be displayed in the current refresh of the display screen) in the CWB memory. The HWC in the AP processor extracts the target image from the pre-display image stored in the CWB memory so as to calculate the fusion noise; the fusion noise obtained in this way is accurate and the power consumption is low.
It should be noted that, when the display screen is displaying an image, the brightness of the display screen needs to be adjusted according to the target ambient light. When the display screen does not display any image, it is not necessary to adjust the brightness of the display screen according to the target ambient light. Therefore, the AP processor also needs to monitor screen-on and screen-off events of the display screen. When the screen is on, the method for detecting the target ambient light provided by the embodiment of the present application is executed. When the screen is off, the AP processor need not perform steps A4 through A6. Similarly, the SCP processor may also control the ambient light sensor to stop collecting the initial ambient light when the screen is off, and the SCP processor need not perform steps E2 through E5.
To provide a clearer understanding of the execution inside the AP processor, a timing diagram between the various modules inside the AP processor will be described, taking as an example the process of obtaining the image noise at time t01 and the backlight noise at time t02 in the embodiment shown in fig. 10.
In the embodiment shown in fig. 11, when refreshing an image, the respective modules in the AP processor perform the following steps:
step 1100, after the display engine service obtains the display parameters of the interface to be displayed from the application in the application layer, the display engine service sends the display parameters of the interface to be displayed to the surface flicker.
In step 1101, after the SurfaceFlinger obtains the display parameters of the interface to be displayed of application A from the display engine service, it sends the display parameters (e.g., memory address, color, etc.) of each interface (the interface to be displayed by application A, the status bar interface, etc.) to the HWC through interfaces (e.g., setLayerBuffer, setLayerColor).
In step 1102, after the HWC receives the display parameters of each interface, the HWC obtains the synthesized image through the hardware underlying the HWC according to the display parameters of the interfaces.
In step 1103, the HWC obtains the image synthesized by the hardware on the bottom layer, and sends the synthesized image to the OLED driver.
And step 1104, after receiving the synthesized image sent by the HWC, the OLED driver sends the synthesized image to the display subsystem.
Step 1105, after the display subsystem receives the synthesized image, it performs a secondary processing on the synthesized image to obtain the image before display.
At step 1106, the display subsystem stores the pre-display image in the CWB memory.
It should be noted that, since the OLED screen needs to refresh the image, the display subsystem needs to send the pre-display image to the display screen for display.
In the embodiment of the application, the step of sending the image before display to the display screen by the display subsystem for display and the step of storing the image before display in the CWB memory by the display subsystem are two independent steps, and the order is not strict.
In step 1107, after the display subsystem successfully stores the pre-rendered image in the CWB memory, it may send a store success signal to the HWC.
In step 1108, after receiving the signal that the storage is successful, the HWC performs matting to obtain the target image from the image before display stored in the CWB memory, and the time when the HWC starts to obtain the target image is used as the timestamp of the target image.
In step 1109, the HWC sends the target image and the timestamp to the noise algorithm library after acquiring the target image and the timestamp.
In step 1110, the noise algorithm library calculates the image noise at the refresh time corresponding to the target image (the image noise at time t01). The timestamp of the image noise is the timestamp of the target image from which the image noise is obtained. The noise algorithm library stores the image noise and the timestamp of the image noise.
During brightness adjustment, each submodule in the AP processor executes the following steps:
and 1111, after the display engine service obtains the brightness to be adjusted from the application a in the application layer, the display engine service sends the brightness to be adjusted to the kernel node.
In step 1112, the HWC acquires the brightness to be adjusted from the core node after monitoring that the data in the core node changes. The time when the HWC executes the retrieval of the brightness to be adjusted from the kernel node is a time stamp of the brightness to be adjusted.
In practical applications, the HWC always listens to the kernel node for data changes.
In step 1113, the HWC sends the brightness to be adjusted and the timestamp of the brightness to be adjusted to the noise algorithm library.
Step 1114, the noise algorithm library calculates the backlight noise at the adjustment time of the brightness to be adjusted (the backlight noise at time t02). The timestamp of the backlight noise is the timestamp of the corresponding brightness to be adjusted. The noise algorithm library stores the backlight noise and the timestamp of the backlight noise.
After the end of an integration period, the SCP processor sends the initial ambient light collected during the integration period to the HWC in the AP processor.
In step 1115, the HWC of the AP processor receives the initial ambient light sent by the SCP processor and the integration start time and the integration end time of the initial ambient light.
In step 1116, after receiving the initial ambient light and the integration start time and integration end time of the initial ambient light from the SCP processor, the HWC sends the initial ambient light and its integration start time and integration end time to the noise algorithm library.
In step 1117, the noise algorithm library calculates the integral noise according to the image noise and its timestamp, the backlight noise and its timestamp, and the integration start time and integration end time of the initial ambient light. The noise algorithm library then calculates the target ambient light according to the integral noise and the initial ambient light.
The embodiment of the application mainly describes a sequential logic diagram among modules when the AP processor obtains target ambient light.
In the above embodiments, the example used is that the AP processor calculates the fusion noise after acquiring the target image and the brightness information; the SCP processor sends the initial ambient light to the AP processor after collecting it; the AP processor processes the fusion noise to obtain the integral noise for the integration period of the initial ambient light, and then obtains the target ambient light according to the initial ambient light and the integral noise.
In practical application, the AP processor may also send the target image and the brightness information to the SCP processor after obtaining the target image and the brightness information. The SCP processor fuses the target image and the brightness information to obtain fusion noise and integral noise of an integral time period of the initial ambient light, and then obtains the target ambient light according to the fusion noise and the initial ambient light.
In practical application, after the AP processor acquires the target image and the brightness information, the AP processor calculates the fusion noise and sends the fusion noise obtained by calculation to the SCP processor. The SCP processor obtains integral noise of an integral time period according to the received fusion noise, and obtains target ambient light according to the integral noise of the integral time period and initial ambient light collected by the ambient light sensor.
Referring to fig. 12, in the embodiment of the present application, the fusion noise is calculated at the AP processor, the integral noise is calculated at the SCP processor, and the target ambient light is obtained according to the integral noise and the initial ambient light.
As mentioned above, the process of obtaining the target ambient light can be briefly described as follows:
Step 1, calculating the image noise according to the target image.
Step 2, calculating the backlight noise according to the brightness.
Step 3, calculating the target ambient light (raw values on four channels) according to the image noise, the backlight noise and the initial ambient light.
In the technical architecture shown in fig. 7, step 3 (calculating the target ambient light according to the image noise, the backlight noise and the initial ambient light) is implemented in the noise algorithm library of the AP processor. The noise algorithm library of the AP processor can itself calculate the image noise and the backlight noise, while the initial ambient light comes from the ambient light sensor driver of the SCP processor. Therefore, the noise algorithm library of the AP processor needs to obtain the initial ambient light data reported by the SCP processor (steps E3 to E4). The AP processor finally returns the calculated raw values on the four channels of the target ambient light to the SCP processor, which obtains the lux value of the target ambient light (step A8, step E5, step E6).
In the technical architecture shown in fig. 12, step 3 (calculating the target ambient light according to the image noise, the backlight noise and the initial ambient light) is implemented in the denoising module of the SCP processor. The image noise and the backlight noise are obtained by the AP processor, and the initial ambient light is obtained by the ambient light sensor driver of the SCP processor. Therefore, the denoising module of the SCP processor needs to obtain the image noise and the backlight noise transmitted by the AP processor (step A8, step E5, step E6), and also needs the initial ambient light transmitted by the ambient light sensor driver of the SCP processor (step E3).
In view of the above analysis, in the technical architecture shown in fig. 7, the calculations of step 1 to step 3 need to be implemented in the noise algorithm library of the AP processor. In the technical architecture shown in fig. 12, step 1 and step 2 need to be implemented in the noise algorithm library of the AP processor, and step 3 needs to be implemented in the denoising module of the SCP processor.
For a clearer understanding of the process of obtaining the target ambient light corresponding to the technical architecture shown in fig. 12, reference is made to the timing chart shown in fig. 13. In connection with the events at the various times in fig. 9, the image is refreshed starting from time t0. At the same time, the ambient light sensor enters an integration period and starts to collect the initial ambient light.
Accordingly, in fig. 13, in step E1, the ambient light sensor in the co-hardware layer of the SCP processor enters an integration period and collects the initial ambient light starting from time t0 (t01).
Steps A1 through A6 refer to the description of steps A1 through A6 in the example of FIG. 7.
Step A7, the noise algorithm library in the hardware abstraction layer of the AP processor sends the image noise at time t01 to the HWC of the same layer.
Step A8, after the AP processor has calculated the image noise at time t01, the image noise at time t01 is sent to the ambient light sensor application in the co-application layer of the SCP processor.
Step A9 (step E5 in the architecture shown in fig. 12), the ambient light sensor application in the co-application layer of the SCP processor sends the image noise at time t01 to the noise memory in the co-driving layer of the SCP processor.
Steps B1 through B2 refer to the description of steps B1 through B2 in the embodiment of FIG. 7.
Step B3, the noise algorithm library in the hardware abstraction layer of the AP processor sends the backlight noise at time t02 to the HWC of the same layer.
Step B4, after the AP processor has calculated the backlight noise at time t02, the backlight noise at time t02 is sent to the ambient light sensor application in the co-application layer of the SCP processor.
Step B5 (step E5 in the architecture shown in fig. 12), the ambient light sensor application in the co-application layer of the SCP processor sends the backlight noise at time t02 to the noise memory in the co-driving layer of the SCP processor.
Steps C1 to C9 and steps D1 to D9 refer to the descriptions of steps A1 to A9, and are not repeated herein.
After the ambient light sensor integration is over, the SCP processor is triggered to perform step E2, step E2 as described with reference to the embodiment shown in fig. 7.
Steps E3 to E6: the denoising module in the co-driving layer of the SCP processor takes the fusion noise out of the noise memory of the same layer, and obtains the raw values on the four channels of the initial ambient light from the ambient light sensor driver of the same layer. The target ambient light is then calculated according to the raw values on the four channels of the initial ambient light and the image noise and backlight noise that interfere with the initial ambient light.
During non-integration periods, the image may also be refreshed (e.g., the image refresh at time t11), and the brightness may also change (e.g., the brightness change at time t12). Therefore, in the non-integration period, steps F1 to F9 still exist (steps F1 to F5 are not drawn in fig. 13; for their details, refer to steps A1 to A5 in fig. 13), so that the image noise at time t11 is stored in the noise memory of the SCP processor. In the non-integration period, steps G1 to G5 also still exist (step G1 is not drawn in fig. 13; for its details, refer to step B1 in fig. 13), so that the backlight noise at time t12 is also stored in the noise memory of the SCP processor.
The process by which the noise algorithm library in the embodiment shown in fig. 7 calculates the target ambient light from the target image, the brightness and the initial ambient light will be described below.
Step one, when the noise algorithm library obtains a target image, it calculates the image noise at the refresh time of the target image according to the target image and the brightness of the display screen at that refresh time; when the noise algorithm library obtains a brightness value, it calculates the backlight noise at the brightness adjustment time according to the brightness value and the target image displayed at that adjustment time.
Although the image noise and the backlight noise have different names, both are calculated in the same way, from one frame of target image and one brightness value.
First, a weighted sum operation is performed according to the RGB value of each pixel point and the weighting coefficient of each pixel point to obtain the weighted RGB value of the target image. The weighting coefficient of each pixel point is determined according to the distance between the coordinates of the pixel point and the reference coordinates of the target image. The coordinates of the center point of the photosensitive area of the ambient light sensor may be used as the reference coordinates of the target image.
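As an illustration of this weighted sum, the following sketch (in Python) computes a weighted RGB value for a target image. The inverse-distance weighting function and the data layout are assumptions made only for the example, since the embodiment states only that the weighting coefficient of a pixel point is determined by its distance to the reference coordinates.

import math

# Hypothetical sketch of the weighted RGB value of step one. The inverse-distance
# falloff below is an assumption; the embodiment only states that a pixel's weight
# depends on its distance to the reference coordinates (e.g. the centre of the
# ambient light sensor's photosensitive area).
def weighted_rgb(target_image, ref_x, ref_y):
    # target_image: 2-D list of (R, G, B) tuples
    sum_r = sum_g = sum_b = total_w = 0.0
    for y, row in enumerate(target_image):
        for x, (r, g, b) in enumerate(row):
            w = 1.0 / (1.0 + math.hypot(x - ref_x, y - ref_y))  # closer pixels weigh more
            sum_r += w * r
            sum_g += w * g
            sum_b += w * b
            total_w += w
    return (sum_r / total_w, sum_g / total_w, sum_b / total_w)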
Step two, the noise algorithm library obtains the fusion noise according to the weighted RGB value of the target image and the brightness. The fusion noise may be obtained by a table lookup (the table stores the fusion noise corresponding to each combination of weighted RGB value and brightness), or by a preset functional relationship (the independent variables are the weighted RGB value of the target image and the brightness, and the dependent variable is the fusion noise). The fusion noise obtained here consists of raw values on four channels.
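A minimal sketch of this lookup is given below, assuming a quantised table keyed by the weighted RGB value and the brightness. The quantisation step, the table contents and the fallback function are illustrative assumptions; the output is a raw value on the four (R, G, B, C) channels as described above.

LUT_STEP = 32  # assumed quantisation of the weighted RGB value and the brightness

# fusion_lut[(r_bin, g_bin, b_bin, brightness_bin)] -> (raw_R, raw_G, raw_B, raw_C)
fusion_lut = {}  # would be filled from calibration data on a real device

def fusion_noise(weighted_rgb_value, brightness):
    key = tuple(int(v // LUT_STEP) for v in (*weighted_rgb_value, brightness))
    if key in fusion_lut:
        return fusion_lut[key]
    # assumed functional fallback when no table entry exists
    r, g, b = weighted_rgb_value
    scale = brightness / 255.0
    return (r * scale, g * scale, b * scale, (r + g + b) / 3.0 * scale)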
Step three, the noise algorithm library calculates the integral noise within the integration period of the initial ambient light according to the fusion noise at each time.
It should be noted that the image noise is not generated only at the instant of the image refresh. Within the integration period, during the time before an image refresh, the interference to the initial ambient light is the image noise corresponding to the image before the refresh; during the time after the image refresh, the interference to the initial ambient light is the image noise corresponding to the refreshed image.
Similarly, the backlight noise is not generated only at the instant of the brightness adjustment. Within the integration period, during the time before a brightness adjustment, the interference to the initial ambient light is the backlight noise corresponding to the brightness before the adjustment; during the time after the brightness adjustment, the interference is the backlight noise corresponding to the adjusted brightness.
As described above, the noise memory stores the image noise and the backlight noise at each time point calculated by the noise algorithm library. The noise stored in the noise memory is collectively referred to as fusion noise or first noise.
Step A1, the first processor fetches the first noise from the exit position of the noise memory through the noise algorithm library, and updates the exit position of the noise memory, or the first noise at the exit position, through the noise algorithm library;
Step B1, if the timestamp corresponding to the currently fetched first noise is at or before the first time, the first processor continues to execute step A1 through the noise algorithm library until the currently fetched first noise is after the first time;
Step B2, if the currently fetched first noise is after the first time, the first processor performs the following steps through the noise algorithm library:
Step C1, if the timestamp of the currently fetched first noise is after the first time for the first time and is before the second time, the integral noise between the first time and the time corresponding to the timestamp of the currently fetched first noise is calculated according to the first noise fetched last time, and execution continues from step A1;
Step C2, if the timestamp of the currently fetched first noise is after the first time for the first time and is at or after the second time, the integral noise between the first time and the second time is calculated according to the first noise fetched last time, and execution continues with step D1;
Step C3, if the timestamp of the currently fetched first noise is after the first time but not for the first time, and is before the second time, the integral noise between the time corresponding to the timestamp of the first noise fetched last time and the time corresponding to the timestamp of the currently fetched first noise is calculated according to the first noise fetched last time, and execution continues from step A1;
Step C4, if the timestamp of the currently fetched first noise is after the first time but not for the first time, and is at or after the second time, the integral noise between the time corresponding to the timestamp of the first noise fetched last time and the second time is calculated according to the first noise fetched last time, and execution continues with step D1;
Step D1, the second value is obtained according to the integral noise between the first time and the second time and the first value.
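The following sketch walks through steps A1 to D1 under two assumptions that the steps themselves do not spell out: the noise memory is modelled as a Python deque consumed from its exit position, and the contribution of each sub-period is taken to be the fusion noise in force during that sub-period multiplied by its duration. All names are illustrative.

from collections import deque

def integral_noise(noise_fifo, t_start, t_end):
    # noise_fifo: deque of (timestamp, (R, G, B, C)) entries, oldest first.
    total = [0.0, 0.0, 0.0, 0.0]
    last_noise = None          # fusion noise of the latest change at or before seg_start
    seg_start = t_start
    while noise_fifo:
        ts, noise = noise_fifo.popleft()        # step A1: fetch from the exit position
        if ts <= t_start:                       # step B1: look for the first entry after t_start
            last_noise = noise
            continue
        if last_noise is not None:              # steps C1-C4: close the segment ending at ts
            seg_end = min(ts, t_end)
            total = [a + n * (seg_end - seg_start) for a, n in zip(total, last_noise)]
        seg_start = min(ts, t_end)
        last_noise = noise
        if ts >= t_end:                         # steps C2/C4 lead to step D1
            break
    if last_noise is not None and seg_start < t_end:   # tail segment up to the integration end
        total = [a + n * (t_end - seg_start) for a, n in zip(total, last_noise)]
    return total

def second_value(first_value, noise_fifo, t_start, t_end):
    # step D1: remove the integral noise of [t_start, t_end] from the first value
    return [v - n for v, n in zip(first_value, integral_noise(noise_fifo, t_start, t_end))]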
The noise memory may be a FIFO (First In First Out) memory. A FIFO memory is a first-in first-out dual-port buffer: one of its two ports is the input port and the other is the output port. In this memory structure, the data that entered the memory first is shifted out first; correspondingly, the order of the shifted-out data is consistent with the order of the input data. The exit position of the FIFO memory is the storage address corresponding to the output port of the FIFO memory.
When the FIFO memory shifts out one datum, the process is as follows: the fusion noise stored at the exit position (the first position) is removed from the exit position, then the data at the second position counted from the exit position is moved to the exit position, the data at the third position is moved to the second position, and so on.
Of course, in practical applications, after the fusion noise stored at the first position (A1) is removed from the exit position (the first position, A1), the exit position of the memory may instead be updated to the second position (A2). After the fusion noise stored at the current exit position (A2) is removed, the exit position of the memory is updated to the third position (A3), and so on.
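A minimal sketch of this second variant, in which the stored fusion noise stays in place and only the exit position (a read index) advances, is given below; the fixed capacity and the class name are illustrative assumptions.

class NoiseFifo:
    # Ring-buffer style noise memory: entries stay in place, the exit position advances.
    def __init__(self, capacity=32):
        self._slots = [None] * capacity
        self._read = 0      # exit position
        self._write = 0
        self._count = 0

    def push(self, timestamp, fused_noise):
        if self._count == len(self._slots):
            raise OverflowError("noise memory full")
        self._slots[self._write] = (timestamp, fused_noise)
        self._write = (self._write + 1) % len(self._slots)
        self._count += 1

    def pop(self):
        if self._count == 0:
            raise IndexError("noise memory empty")
        entry = self._slots[self._read]
        self._read = (self._read + 1) % len(self._slots)   # update the exit position
        self._count -= 1
        return entry    # first in, first out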
The process of obtaining the second value based on the above calculation may refer to the embodiments shown in fig. 14 to fig. 16.
Referring to fig. 14, fig. 14 shows the process by which the noise algorithm library in the AP processor provided in the embodiment of the present application calculates the integral noise from the image noise and the backlight noise. The various times in the process correspond to the descriptions of the various times in the embodiments shown in fig. 9 and fig. 10: at time t01 the image is refreshed, and the image noise at t01 is obtained; at time t02 the brightness is adjusted, and the backlight noise at t02 is obtained; at time t03 the image is refreshed, and the image noise at t03 is obtained; at time t04 the image is refreshed, and the image noise at t04 is obtained.
From time t01 to time t02, the displayed image is the image refreshed at t01, and the brightness of the display screen is the brightness at t01 (the brightness at t01 is the brightness value most recently stored in the noise algorithm library before t01). The image noise at t01 is the noise caused by the image refreshed at t01 being displayed at the brightness of t01. Thus, the initial ambient light includes the image noise with timestamp t01 for a duration of "t02 - t01".
From time t02 to time t03, the brightness of the display screen is the brightness adjusted at t02, and the image displayed on the display screen is the image refreshed at t01. The backlight noise at t02 is the noise caused by displaying the image refreshed at t01 at the brightness adjusted at t02. Thus, the initial ambient light includes the backlight noise with timestamp t02 for a duration of "t03 - t02".
From time t03 to time t04, the displayed image is the image refreshed at t03, and the brightness of the display screen is the brightness adjusted at t02. The image noise at t03 is the noise caused by the image refreshed at t03 being displayed at the brightness adjusted at t02. Thus, the initial ambient light includes the image noise with timestamp t03 for a duration of "t04 - t03".
From time t04 to time t1, the displayed image is the image refreshed at t04, and the brightness of the display screen is the brightness adjusted at t02. The image noise at t04 is the noise caused by the image refreshed at t04 being displayed at the brightness adjusted at t02. Thus, the initial ambient light includes the image noise with timestamp t04 for a duration of "t1 - t04".
Based on the above understanding, when calculating the integral noise, the AP processor considers that:
the image noise at t01 interferes with the initial ambient light from t01 to t02;
the backlight noise at t02 interferes with the initial ambient light from t02 to t03;
the image noise at t03 interferes with the initial ambient light from t03 to t04;
the image noise at t04 interferes with the initial ambient light from t04 to t1.
Thus, the integral noise from t01 to t02, the integral noise from t02 to t03, the integral noise from t03 to t04, and the integral noise from t04 to t1 can be calculated separately.
The integral noise from time t01 to time t02 is:
$N_{\mathrm{int}}(t_{01}, t_{02}) = N_{\mathrm{fusion}}(t_{01}) \times (t_{02} - t_{01})$
The integral noise from time t02 to time t03 is:
$N_{\mathrm{int}}(t_{02}, t_{03}) = N_{\mathrm{fusion}}(t_{02}) \times (t_{03} - t_{02})$
The integral noise from time t03 to time t04 is:
$N_{\mathrm{int}}(t_{03}, t_{04}) = N_{\mathrm{fusion}}(t_{03}) \times (t_{04} - t_{03})$
The integral noise from time t04 to time t1 is:
$N_{\mathrm{int}}(t_{04}, t_{1}) = N_{\mathrm{fusion}}(t_{04}) \times (t_{1} - t_{04})$
where $N_{\mathrm{fusion}}(t_{01})$, $N_{\mathrm{fusion}}(t_{02})$, $N_{\mathrm{fusion}}(t_{03})$ and $N_{\mathrm{fusion}}(t_{04})$ denote the fusion noise at times t01, t02, t03 and t04, respectively; that is, the integral noise of each sub-period is the fusion noise at the start of that sub-period weighted by the duration of the sub-period.
The integral noise of all the sub-periods within the integration period (t01 to t02, t02 to t03, t03 to t04, and t04 to t1) together constitutes the integral noise of the whole integration period.
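Under the same duration-weighted reading as above, the integral noise of the whole integration period can be written as the sum of the sub-period terms:

$N_{\mathrm{int}}(t_{01}, t_{1}) = N_{\mathrm{fusion}}(t_{01})(t_{02}-t_{01}) + N_{\mathrm{fusion}}(t_{02})(t_{03}-t_{02}) + N_{\mathrm{fusion}}(t_{03})(t_{04}-t_{03}) + N_{\mathrm{fusion}}(t_{04})(t_{1}-t_{04})$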
The start time of the integration period in the above example is exactly the time of the image refresh, i.e., the image noise at the start time of the integration period can be obtained.
In practical applications, it is possible that the start time of the integration period is not the time of image refresh nor the time of backlight adjustment. In this case, it is necessary to acquire the fusion noise corresponding to the change time (image refresh time or backlight adjustment time) that is the latest before the start of the current integration period.
Referring to fig. 15, for an integration period (from time t0 to time t1) obtained by the noise algorithm library in the AP processor provided in the embodiment of the present application, time t01 is not the start time of the current integration period, but one of the image refresh times within the current integration period. The latest change time (image refresh time or brightness adjustment time) before the start of the current integration period is time t-1, which is an image refresh time.
Referring to fig. 16, if the latest change time before the start of the current integration period is an image refresh time, the image noise corresponding to that image refresh time interferes with the initial ambient light from time t0 to time t01.
Of course, if the latest change time is a brightness adjustment time, the backlight noise corresponding to that brightness adjustment time interferes with the initial ambient light from t0 to t01.
In the embodiment shown in fig. 16, the integral noise corresponding to each sub-period within the integration period is:
the integral noise from time t0 to time t01: $N_{\mathrm{int}}(t_{0}, t_{01}) = N_{\mathrm{fusion}}(t_{-1}) \times (t_{01} - t_{0})$;
the integral noise from time t01 to time t02: $N_{\mathrm{int}}(t_{01}, t_{02}) = N_{\mathrm{fusion}}(t_{01}) \times (t_{02} - t_{01})$;
the integral noise from time t02 to time t03: $N_{\mathrm{int}}(t_{02}, t_{03}) = N_{\mathrm{fusion}}(t_{02}) \times (t_{03} - t_{02})$;
the integral noise from time t03 to time t04: $N_{\mathrm{int}}(t_{03}, t_{04}) = N_{\mathrm{fusion}}(t_{03}) \times (t_{04} - t_{03})$;
the integral noise from time t04 to time t1: $N_{\mathrm{int}}(t_{04}, t_{1}) = N_{\mathrm{fusion}}(t_{04}) \times (t_{1} - t_{04})$;
where $N_{\mathrm{fusion}}(t_{-1})$ denotes the fusion noise at time t-1, and $N_{\mathrm{fusion}}(t_{01})$ to $N_{\mathrm{fusion}}(t_{04})$ denote the fusion noise at times t01 to t04, respectively.
As can be understood from the above example, the obtained integral noise is also a raw value on four channels.
In the above examples, the timestamps are all different. In practical applications, the HWC may perform both the process of obtaining the target image and the process of obtaining the brightness to be adjusted within one time measurement unit (e.g., within 1 ms). In that case, the timestamp of the target image and the timestamp of the brightness to be adjusted acquired at that time are the same.
If a target image and a brightness value with the same timestamp exist and the noise algorithm library receives the target image first, the noise algorithm library calculates image noise according to the latest brightness value before the target image and the target image, and calculates backlight noise according to the target image and the brightness value with the same timestamp when calculating backlight noise corresponding to the brightness value;
if the target image and the brightness value with the same timestamp exist and the noise algorithm library receives the brightness value first, the noise algorithm library calculates backlight noise according to the brightness value and the latest target image before the brightness value, and when image noise corresponding to the target image is calculated, image noise is calculated according to the target image and the brightness value with the same timestamp.
Suppose the noise algorithm library receives the target image first; it then calculates the image noise and stores the image noise in the noise memory first. The fusion noise stored in the noise memory follows a time order: before a fusion noise is stored, it is judged whether its timestamp is later than the timestamp of the fusion noise stored most recently. If it is later, the fusion noise to be stored is stored; if it is earlier than or the same as that timestamp, the noise to be stored is discarded. Therefore, in this case the backlight noise calculated afterwards (which has the same timestamp) is discarded.
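A minimal sketch of this time-ordering check, assuming the noise memory is a simple list kept in timestamp order; the names are illustrative.

class NoiseStore:
    def __init__(self):
        self.entries = []          # list of (timestamp, fused_noise), oldest first

    def store(self, timestamp, fused_noise):
        # Store only if strictly later than the most recently stored entry.
        if self.entries and timestamp <= self.entries[-1][0]:
            return False           # discard: not later than the last stored entry
        self.entries.append((timestamp, fused_noise))
        return True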
In practical applications, the timestamp of the target image may be the time at which the HWC starts to fetch the target image from the CWB write-back memory, and the timestamp of the brightness value may be the time at which the HWC starts to fetch the brightness value from the kernel node. The HWC may switch to fetching the brightness value while it is still fetching the target image; in that case the HWC starts fetching the target image first and the brightness value afterwards, so the timestamp of the brightness value is later than the timestamp of the target image. The HWC may nevertheless finish fetching the brightness value first and send it to the noise algorithm library, which calculates and stores the backlight noise; only afterwards does the HWC finish fetching the target image and send it to the noise algorithm library, which calculates and tries to store the image noise. This results in the timestamp of the image noise that is about to be stored being earlier than the timestamp of the backlight noise stored most recently.
Step four, the noise algorithm library removes the integral noise of the whole integration period from the initial ambient light to obtain the target ambient light.
In the embodiment of the present application, the initial ambient light sent by the SCP processor to the HWC of the AP processor is initial ambient light data in the form of RGBC four-channel raw values. The HWC sends the initial ambient light data, also in the form of RGBC four-channel raw values, to the noise algorithm library. The raw values on the four channels of the integral noise are obtained in step three. Therefore, in this step, the raw values on the four channels of the target ambient light can be obtained by performing an operation on the raw values on the four channels of the initial ambient light and the raw values on the four channels of the integral noise.
After the raw values on the four channels of the target ambient light are obtained through calculation, the noise algorithm library can send the raw values on the four channels of the target ambient light to the SCP processor, and the SCP processor obtains the lux value of the target ambient light through calculation according to the raw values on the four channels of the target ambient light.
As an example, the lux value may be obtained as a weighted sum: the raw value of each channel is multiplied by a coefficient for that channel (the coefficients may be provided by the manufacturer of the ambient light sensor), and the products are added together.
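A minimal sketch of step four together with this lux conversion is given below, assuming the raw values are handled per channel; the coefficient values shown are placeholders, not the manufacturer's numbers.

# Assumed placeholder coefficients; real values come from the sensor manufacturer.
CHANNEL_COEFF = {"R": 0.21, "G": 0.72, "B": 0.07, "C": 0.0}

def target_ambient_light(initial_raw, integral_noise_raw):
    # Both arguments are dicts of raw values keyed by channel "R"/"G"/"B"/"C".
    return {ch: initial_raw[ch] - integral_noise_raw[ch] for ch in initial_raw}

def lux_value(target_raw):
    # Weighted sum of the de-noised raw values over the four channels.
    return sum(CHANNEL_COEFF[ch] * raw for ch, raw in target_raw.items())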
As previously described, the display subsystem may store the image to be displayed in the CWB memory, from which the HWC obtains the target image. The display subsystem may also store a region image including the target image (see the region image including the target image shown in fig. 17) or a target image (see the target image shown in fig. 17) in the CWB memory, and the HWC may obtain the target image from the CWB memory.
The target image is located in the middle of the status bar area in the embodiment shown in fig. 17. The size and the position of the target image in the embodiment shown in fig. 17 are only examples, and in practical applications, other sizes and other positions may also be used.
The size and position of the region images in the embodiment shown in fig. 17 are for example only. In practical applications, the size and position of the region image may be determined based on the size and position of the target image. The size and the position of the area image are not limited in the embodiment of the application.
The electronic device refreshes the image, and the image displayed on the display screen before refreshing may be different from the image displayed on the display screen after refreshing. However, the target image on the image displayed on the display screen before the refresh and the target image on the image displayed on the display screen after the refresh do not necessarily have a difference.
As described with reference to fig. 18, in the interface displayed on the display screen of the electronic device in the embodiment shown in fig. 18, a video is being played, and the position of the target image on the display screen is in the status bar area. During the playing of the video, the content in the video is switched from the content shown in fig. 18 to the content shown in fig. 19.
As can be understood from fig. 18 and 19, the image displayed on the display screen in fig. 18 (which may be understood as an image before refresh) and the image displayed on the display screen in fig. 19 (which may be understood as an image after refresh) are different. However, the target image corresponding to the image displayed on the display screen in fig. 18 is the same as the target image corresponding to the image displayed on the display screen in fig. 19.
Suppose the electronic device refreshes from image A (whose corresponding target image is target image a) to image B (whose corresponding target image is target image b), target image a and target image b are the same, and the brightness of the display screen does not change from the time the electronic device displays image A to the time it displays image B. In that case, the image noise corresponding to target image a and the image noise corresponding to target image b do not change.
Therefore, in order to reduce power consumption, in the embodiment of the present application, when the brightness value of the display screen has not changed, it may be determined whether the target image corresponding to the currently refreshed image has changed relative to the target image obtained last time. If it has changed, the noise algorithm library calculates the image noise corresponding to the target image obtained from this image refresh. If it has not changed, the noise algorithm library does not calculate the image noise corresponding to the target image obtained from this image refresh.
As previously described, the target image is retrieved by the HWC from the CWB memory. The target image may be used to compute the resulting image noise. The steps from the AP processor starting to acquire the target image to computing the image noise from the target image may be as follows:
in step 11, the HWC obtains the target image from the CWB memory.
In step 12, the HWC sends the target image or the memory address of the target image to the noise algorithm library.
In step 13, the noise algorithm library receives the target image or the storage address of the target image.
In step 14, the noise algorithm library calculates the image noise according to the target image and the cached, most recently stored brightness value.
Therefore, the process of determining whether the target image corresponding to the current refresh image and the target image obtained last time have changed may be any time from "the end time of step 11" to "the start time of step 14".
As an example, after acquiring the target image from the CWB memory, the HWC compares the target image acquired this time with the target image acquired last time.
As another example, after acquiring the target image from the CWB memory, the HWC sends the target image acquired this time or the storage address of the target image to the noise algorithm library, and the noise algorithm library compares the currently received target image (or the target image stored in the currently received storage address) with the target image received last time (or the target image stored in the storage address received last time).
In order to further reduce power consumption, in the embodiment of the present application, after the HWC acquires the target image from the CWB memory, the HWC may compare the target image acquired this time with the target image acquired last time, so as to avoid the additional power consumption of subsequently passing the target image or the storage address of the target image to the noise algorithm library. In a specific implementation, the HWC module may cache both the target image obtained this time and the target image obtained last time. When caching the target image obtained this time, an earlier cached target image may be overwritten; reference may be made to the description of the subsequent embodiments.
Subsequent embodiments of the application are described with the HWC sending the memory address of the target image to the noise algorithm library as an example.
As an example, if the HWC monitors that the data stored in the kernel node has changed, the HWC obtains the changed brightness value from the kernel node and sends the changed brightness value to the noise algorithm library, and the noise algorithm library caches the changed brightness value as the most recently stored brightness value.
After the HWC sends the changed luminance values to the noise algorithm library:
Referring to fig. 20, the HWC first detects a signal indicating that a target image needs to be retrieved from the CWB memory. The HWC retrieves the target image from the CWB memory and may store the first retrieved target image 1 in the A buffer. The HWC sends the storage address of target image 1 (the address of the A buffer) to the noise algorithm library. The noise algorithm library caches the storage address of the first acquired target image 1 (the address of the A buffer) as the storage address of the most recently stored target image. The noise algorithm library obtains the first acquired target image 1 according to the received storage address, and calculates the image noise corresponding to the first acquired target image 1 according to that target image and the cached, most recently stored brightness value.
The HWC detects for the second time a signal indicating that a target image needs to be acquired from the CWB memory. The HWC acquires target image 2 from the CWB memory and may store the second acquired target image 2 in the B buffer. The HWC then compares whether the target image stored in the A buffer (target image 1, acquired last time) is the same as the target image stored in the B buffer (target image 2, acquired this time).
If target image 1 stored in the A buffer is the same as target image 2 stored in the B buffer, the HWC no longer sends the storage address of target image 2 in the B buffer (the address of the B buffer) to the noise algorithm library, and the noise algorithm library does not calculate the image noise corresponding to the second acquired target image 2.
If target image 1 stored in the A buffer and target image 2 stored in the B buffer are not the same, the HWC sends the storage address of target image 2 in the B buffer (the address of the B buffer) to the noise algorithm library. The noise algorithm library caches the storage address of the second acquired target image 2 (the address of the B buffer) as the storage address of the most recently stored target image. The noise algorithm library obtains target image 2 according to the received storage address (the address of the B buffer), and then calculates the image noise corresponding to the second acquired target image 2 according to target image 2 and the cached, most recently stored brightness value.
The HWC detects for the third time a signal indicating that a target image needs to be acquired from the CWB memory. The HWC acquires target image 3 from the CWB memory and may store the third acquired target image 3 in the A buffer. The HWC then compares whether the target image stored in the B buffer (target image 2, acquired last time) is the same as the target image stored in the A buffer (target image 3, acquired this time). Each time the HWC stores a target image in a buffer, the currently stored target image overwrites the target image previously stored in that buffer; that is, when target image 3 is stored in the A buffer, it overwrites target image 1 stored in the A buffer.
If target image 2 stored in the B buffer is the same as target image 3 stored in the A buffer, the HWC no longer sends the storage address of target image 3 in the A buffer (the address of the A buffer) to the noise algorithm library, and the noise algorithm library does not calculate the image noise corresponding to the third acquired target image 3.
If target image 2 stored in the B buffer and target image 3 stored in the A buffer are not the same, the HWC sends the storage address of target image 3 in the A buffer (the address of the A buffer) to the noise algorithm library. The noise algorithm library caches the storage address of the third acquired target image 3 (the address of the A buffer) as the storage address of the most recently stored target image. The noise algorithm library obtains the third acquired target image 3 according to the received storage address (the address of the A buffer), and then calculates the image noise corresponding to the third acquired target image 3 according to that target image and the cached, most recently stored brightness value.
The HWC detects for the fourth time a signal indicating that a target image needs to be acquired from the CWB memory. The HWC acquires target image 4 from the CWB memory and may store the fourth acquired target image 4 in the B buffer. The HWC then compares whether the target image stored in the A buffer (target image 3, acquired last time) is the same as the target image stored in the B buffer (target image 4, acquired this time).
Subsequent steps are omitted, and specific reference may be made to the description in the above embodiments.
The process is repeated in this manner until the HWC monitors that the data in the kernel node (the data stored in the kernel node is the brightness value of the display screen) changes; the HWC then acquires the changed brightness from the kernel node and sends the changed brightness to the noise algorithm library.
In the embodiment of the application, the image received by the HWC for the first time after it is detected that the brightness of the display screen has changed and the brightness value of the display screen has been obtained may be denoted as the first image; correspondingly, the target image obtained from the first image is denoted as the first target image, the target area corresponding to the target image is denoted as the first area, and the image noise obtained according to the first target image and the brightness value is denoted as the first image noise.
And recording an image received by the HWC for the second time as a second image, correspondingly, recording a target image obtained from the second image as a second target image, and recording image noise obtained according to the second target image and the luminance value as second image noise. And after receiving the second image, the display subsystem records the image stored in the CWB memory as a third image, wherein the third image can be the second image, a second target image or an area image containing the second target image on the second image.
An image received by the HWC for the third time is denoted as a fourth image, correspondingly, a target image obtained from the fourth image is denoted as a third target image, and image noise obtained according to the third target image and the luminance value is denoted as third image noise. And after receiving the fourth image, the display subsystem records the image stored in the CWB memory as a fifth image, wherein the fifth image can be the fourth image, the third target image or an area image containing the third target image on the fourth image.
After the HWC sends the luminance value after the current change to the noise algorithm library, the noise algorithm library takes the currently received luminance value as the latest stored luminance value. And the noise algorithm library obtains a target image according to the storage address of the newly stored target image and obtains backlight noise corresponding to the brightness value according to the currently received brightness value and the target image.
At the same time, the HWC continues to wait for a signal to retrieve the target image from the CWB memory. If such a signal is detected, it is treated as the signal for the HWC to acquire the target image from the CWB memory for the first time, and the loop starts again from the step of acquiring the target image from the CWB memory for the first time in the above embodiment.
In the above embodiment, only two buffers are needed to implement the above loop process. As the HWC retrieves the target image from the CWB memory, it is possible that the currently retrieved target image is the target image retrieved the mth time. The HWC may determine in which buffer the current mth acquired target image is stored according to the following condition:
and if m% 2 is equal to 1, storing the target image acquired at the m-th time to an A buffer.
And if m% 2 is 0, storing the target image acquired at the m-th time to the B buffer.
As an example, when m = 1, 1 % 2 = 1, and the 1st acquired target image is stored in the A buffer;
when m = 2, 2 % 2 = 0, and the 2nd acquired target image is stored in the B buffer;
when m = 3, 3 % 2 = 1, and the 3rd acquired target image is stored in the A buffer;
when m = 4, 4 % 2 = 0, and the 4th acquired target image is stored in the B buffer.
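A minimal sketch of this alternating storage rule, combined with the comparison against the other buffer, is given below; the dictionary-based buffers and the forced recalculation for the first image are illustrative assumptions.

buffers = {"A": None, "B": None}

def store_and_compare(m, target_image):
    # The m-th target image goes to the A buffer when m % 2 == 1, otherwise to the B buffer.
    current = "A" if m % 2 == 1 else "B"
    previous = "B" if current == "A" else "A"
    buffers[current] = target_image
    # The first image after a brightness change is always treated as changed.
    changed = (m == 1) or (buffers[previous] != target_image)
    return changed   # True: send to the noise algorithm library and recalculate the image noise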
In the embodiment of the application, the buffer in which the first target image is stored (the target image obtained for the first time after it is detected that the brightness of the display screen has changed and the brightness value of the display screen has been obtained) may be denoted as the first storage space. Correspondingly, the buffer in which the second target image is stored is denoted as the second storage space, and the target images obtained subsequently are stored alternately in the first storage space and the second storage space.
In the embodiment of the present application, whether the display subsystem stores the whole frame image, the area image, or the target image may be set in advance.
If the setting is as follows: the display subsystem stores the entire frame of image in the CWB memory. The HWC needs to matte out the target image from the full frame image based on its position on the full frame image. Therefore, it is also necessary to set the position of the target image on the entire frame image in advance. When the target image is a square target image, the coordinates of two non-adjacent vertexes of the target image on the whole frame image can be used for representing the position of the target image on the whole frame image. The position of the target image on the whole frame image can also be expressed in terms of coordinates of four vertices of the target image on the whole frame image. The method for determining the position of the target image on the whole frame image refers to the description in the above embodiments, and is not repeated herein.
If the setting is as follows: the display subsystem stores the region image of the partial region in the whole frame image in the CWB memory. The display subsystem needs to acquire the position of the region image on the entire frame of image so that the appropriate region image can be stored in the CWB memory based on the acquired position. The HWC also needs to obtain the position of the region image of the partial region stored in the CWB memory by the display subsystem on the whole frame image and the position of the target image on the whole frame image, so as to calculate the position of the target image on the region image according to the position of the target image on the whole frame image and the position of the region image of the partial region stored in the CWB memory on the whole frame image. The HWC obtains the target image by matting from the region image stored in the CWB memory according to the position of the target image obtained by calculation on the region image. In addition, when the area image is a square area image, the position of the area image on the entire frame image may be expressed by coordinates of two non-adjacent vertices of the square area image. Of course, the coordinates of the four vertices of the square area image may be used to represent the position of the area image on the entire frame image. In practical applications, after storing the region image in the CWB memory, the display subsystem may also transmit the position of the stored region image on the whole frame image when the display subsystem notifies the HWC that the HWC can obtain the information of the target image from the CWB memory.
If the setting is as follows: the display subsystem stores the target image in the entire frame of images in CWB memory. The HWC retrieves the target image directly from the CWB memory. However, the display subsystem needs to acquire the position of the target image on the whole frame image, so as to select the target image from the whole frame image and store the target image in the CWB memory.
In the embodiment of the present application, the display subsystem stores the whole frame image in the CWB memory as an example.
When the display subsystem stores the whole frame image in the CWB memory, the whole frame image is stored according to the order of the pixel points in the whole frame image. As an example, the display subsystem stores the data corresponding to each pixel point in the first row of the whole frame image from left to right; then, immediately after the storage location of the data of the last pixel point in the first row, it stores the data corresponding to each pixel point in the second row from left to right; and so on, until the data corresponding to each pixel point in the last row of the whole frame image is stored from left to right. The data corresponding to a pixel point may be the RGB value of that pixel point.
Referring to fig. 21, in the manner described above, the target image may be stored scattered in the CWB memory. Take an image of a target image of 5 (pixels) × 5 (pixels) as an example. Data of each pixel point in a first row in the target image is adjacently stored in a CWB memory (for example, data of a first row of sub-pixels occupies adjacent 5 storage positions in fig. 21), data of each pixel point in a second row in the target image is adjacently stored (for example, data of a second row of sub-pixels occupies adjacent 5 storage positions in fig. 21), data of each pixel point in a third row in the target image is adjacently stored, data of each pixel point in a fourth row in the target image is adjacently stored, and data of each pixel point in a fifth row in the target image is adjacently stored. Of these, the target image of 5 (pixel) × 5 (pixel) is used for example only.
Based on the understanding of fig. 21, when the HWC obtains the target image from the CWB memory, it may obtain the data of the image pixel points row by row according to the size of the target image.
Referring to fig. 22, suppose the target image corresponds to the square region whose vertices are the pixel point in the p-th row and q-th column and the pixel point in the (p+89)-th row and (q+89)-th column of the whole frame image. Then the pixel points at the 4 vertices of the target image are: the pixel point in the p-th row and q-th column (upper left corner in fig. 22), the pixel point in the p-th row and (q+89)-th column (upper right corner in fig. 22), the pixel point in the (p+89)-th row and q-th column (lower left corner in fig. 22), and the pixel point in the (p+89)-th row and (q+89)-th column (lower right corner in fig. 22).
The process of the HWC matting and obtaining the target image from the whole frame image stored in the CWB memory may be:
The HWC obtains, from the data stored in the CWB memory, the data representing the pixel points from the p-th row, q-th column to the p-th row, (q+89)-th column. In practical applications, when the display subsystem stores the whole frame image into the CWB memory, it may record the storage address in the CWB memory of the data of the first pixel point of the whole frame image (for example, the pixel point in the first row and first column). The display subsystem sends the storage address of the data of the first pixel point of the whole frame image to the HWC module. According to this storage address and the byte length occupied by the data of one row of pixel points, the HWC module can find the storage address of the data of the last pixel point of the first row and, correspondingly, the storage address of the data of the first pixel point of the second row. In this manner, the HWC module can find the storage address of the data of the pixel point in any row and any column of the whole frame image. That is, the HWC module can calculate the storage addresses of the data corresponding to the pixel points from the p-th row, q-th column to the p-th row, (q+89)-th column, and thereby obtain the data of those pixel points.
The HWC module stores the data of the pixel points from the p-th row, q-th column to the p-th row, (q+89)-th column in the buffer, and may also record the address in the buffer of the data of the pixel point in the p-th row and q-th column. This address is the address in the buffer of the data of the first pixel point of the target image, and can also be understood as the start address of the target image in the buffer.
Then, the HWC obtains, from the data stored in the CWB memory, the data representing the pixel points from the (p+1)-th row, q-th column to the (p+1)-th row, (q+89)-th column. The manner in which the HWC obtains these data may refer to the manner in which it obtains the data of the pixel points from the p-th row, q-th column to the p-th row, (q+89)-th column, and is not repeated here.
Then, the HWC obtains the data representing the pixel point in the (p + 2) th row and the (q) th column to the pixel point in the (p + 2) th row and the (q + 89) th column from the data stored in the CWB memory.
……
Until the HWC obtains, from the data stored in the CWB write-back memory, the data representing the pixel points from the (p+89)-th row and q-th column to the (p+89)-th row and (q+89)-th column.
As can be understood from the process of the HWC obtaining the target image from the CWB memory, the HWC can obtain the data of one segment of 90 pixel points from the CWB memory at a time.
As described above, the HWC may record the storage address of the data of the first pixel in the target image, and since the number of pixels in each row of the target image is the same, the number of bytes occupied by the pixels in each row is also the same. Therefore, the HWC module may obtain the storage address of the data in the buffer of each row of the pixel points based on the address of the data in the buffer of the first pixel point in the target image. Of course, in the embodiment of the present application, storage in the CWB memory or storage in the Abuffer and Bbuffer is performed in a row-by-row order.
The embodiment of the application does not limit the recorded address, and in practical application, the display subsystem may also record the storage address of the first pixel point in each line of pixel points in the whole frame image in the CWB memory and the number of bytes occupied by the data of each line of pixel points in the whole frame image. The HWC module can obtain the data of each row of pixel points in the target image from the CWB memory according to the recorded address reported by the display subsystem. The HWC module may also record the storage address of the first pixel in the buffer of each line of pixels in the target image and the number of bytes occupied by the data of each line of pixels in the target image, as long as the HWC module can obtain the data of each line of pixels from the buffer according to the recorded address.
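A minimal sketch of matting the target image out of a row-major whole-frame buffer using the recorded start address and a per-row stride is given below; the frame width, the bytes per pixel, the 90 x 90 target size and the zero-based row/column indices are assumptions used only for illustration.

FRAME_WIDTH = 1080        # assumed panel width in pixels
BYTES_PER_PIXEL = 4       # assumed pixel format, e.g. RGBA8888
TARGET_SIZE = 90          # target image of 90 x 90 pixel points

def extract_target(frame_bytes, p, q):
    # Return the rows of the target image whose top-left pixel is at row p, column q
    # (rows and columns counted from 0 here).
    rows = []
    row_stride = FRAME_WIDTH * BYTES_PER_PIXEL
    for r in range(p, p + TARGET_SIZE):
        start = r * row_stride + q * BYTES_PER_PIXEL     # address of row r, column q
        end = start + TARGET_SIZE * BYTES_PER_PIXEL      # 90 pixel points of that row
        rows.append(frame_bytes[start:end])
    return rows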
As can be understood from the above process of the HWC obtaining the target image from the CWB memory, the HWC may compare whether the target image stored in the A buffer and the target image stored in the B buffer are the same while performing step 11 of the above embodiment, i.e., while obtaining the target image from the CWB memory.
Consider the process of comparing whether the target image stored in the A buffer and the target image stored in the B buffer are the same, taking the case where the HWC stores the current target image in the B buffer as an example. Each time the HWC acquires the data of one row of pixel points, it stores the data of that row in the corresponding row of the B buffer. The HWC then compares the currently stored data of that row of pixel points with the data of the pixel points in the corresponding row in the A buffer. During the comparison, the start storage address of the data of the corresponding row can be obtained according to the storage address of the data of the first pixel point of the target image stored in the A buffer, and the data of the pixel points of the corresponding row can then be obtained from the A buffer according to that start address and the number of bytes occupied by each row of pixel points.
In this manner, each time the HWC stores the data of one row of pixel points into the B buffer, it compares the data of the newly stored row in the B buffer with the data of the pixel points in the corresponding row (the same row as that of the newly stored data in the B buffer) in the A buffer.
Referring to fig. 23, take as an example that, for every j from 1 to n-1 (j being a positive integer), the data of the j-th row of pixel points of the target image B stored in the Bbuffer is the same as the data of the j-th row of pixel points of the target image A stored in the Abuffer.
When the data of the n-th row of pixel points of the target image B is stored into the Bbuffer, the HWC acquires the data of the n-th row of pixel points of the target image A stored in the Abuffer, and compares whether the two rows of data are the same.
If the data of the n-th row of pixel points of the target image A stored in the Abuffer is the same as the data of the n-th row of pixel points of the target image B stored in the Bbuffer, the HWC continues to obtain the data of the (n+1)-th row of pixel points of the target image B from the CWB memory, stores it into the Bbuffer, and continues to judge whether the data of the (n+1)-th row of the target image B is the same as the data of the (n+1)-th row of the target image A.
If the data of the n-th row of pixel points of the target image A stored in the Abuffer is different from the data of the n-th row of pixel points of the target image B stored in the Bbuffer, the target image B acquired this time is different from the target image A acquired last time. The HWC still continues to obtain the data of the (n+1)-th row of pixel points of the target image B from the CWB memory and stores it into the Bbuffer, but it no longer needs to compare the data of the (n+1)-th row of the target image A stored in the Abuffer with the data of the (n+1)-th row of the target image B stored in the Bbuffer.
Correspondingly, if there are further rows (from the (n+2)-th row up to the 90th row), the HWC does not need to compare the data of these rows of the target image acquired from the CWB memory with the corresponding rows in the Abuffer either.
As an example, if all rows up to the 90th row have been stored and every compared row in the Abuffer and the Bbuffer was the same, the target image obtained this time is the same as the target image obtained last time. The target image currently stored in the Bbuffer then serves as the basis for comparison the next time a target image is stored, row by row, into the Abuffer.
As can be understood from the embodiment shown in fig. 23, the data of the row of pixel points currently fetched from the CWB memory is stored into the Bbuffer regardless of whether it is the same as the data of the corresponding row in the Abuffer. The comparison with the corresponding row in the Abuffer only needs to be carried out as long as no mismatch has yet occurred; once the data of some row fetched from the CWB memory is found to be unequal to the data of the corresponding row in the Abuffer, the rows fetched from the CWB memory thereafter no longer need to be compared with the corresponding rows in the Abuffer.
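The row-by-row store-and-compare flow of fig. 23 can be summarized in the following C++ sketch. It is a simplified illustration under assumed names and data structures (the cwb_rows parameter stands in for the rows the HWC fetches from the CWB memory), not the actual HWC implementation: every fetched row is written into the Bbuffer, but the comparison with the Abuffer stops as soon as one mismatching row is found.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Returns true if the newly stored target image (written into bbuffer)
// equals the previously stored target image held in abuffer.
bool StoreAndCompare(const std::vector<std::vector<uint8_t>>& cwb_rows,  // rows fetched from the CWB memory
                     const std::vector<uint8_t>& abuffer,                // previous target image
                     std::vector<uint8_t>& bbuffer,                      // filled row by row here
                     std::size_t bytes_per_row)
{
    bool still_equal = true;  // once a mismatch is seen, later rows are stored but not compared
    for (std::size_t r = 0; r < cwb_rows.size(); ++r) {
        // Every fetched row is stored, regardless of the comparison result.
        std::memcpy(&bbuffer[r * bytes_per_row], cwb_rows[r].data(), bytes_per_row);
        if (still_equal &&
            std::memcmp(&abuffer[r * bytes_per_row],
                        &bbuffer[r * bytes_per_row], bytes_per_row) != 0) {
            still_equal = false;  // the images differ; image noise must be recalculated
        }
    }
    return still_equal;  // true => same image, image noise need not be recalculated
}
```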
In the embodiment shown in fig. 23, the target image A is the first target image, the target image B is the second target image, the Abuffer is the first storage space, and the Bbuffer is the second storage space.
The data of the nth row of pixel points in the target image B is pixel data of a first pixel set, and correspondingly, the nth row of pixel points in the target image B form the first pixel set; the data of the nth row of pixel points in the target image A is pixel data of a second pixel set, and correspondingly, the nth row of pixel points in the target image A form the second pixel set.
The data of the (n+1)-th row of pixel points in the target image B is the pixel data of a third pixel set, and correspondingly, the (n+1)-th row of pixel points in the target image B forms the third pixel set; the data of the (n+1)-th row of pixel points in the target image A is the pixel data of a fourth pixel set, and correspondingly, the (n+1)-th row of pixel points in the target image A forms the fourth pixel set.
If the target image A and the target image B each contain n+1 rows of pixel points, then once the data of the (n+1)-th row of pixel points of the target image B has been stored, the whole target image B is stored in the Bbuffer. If the data of every row of the target image B, from the first row to the (n+1)-th row, is the same as the data of the corresponding row of the target image A, the target image B is the same as the target image A.
Of course, in practical applications, the data of two rows of pixel points acquired consecutively from the CWB memory may be recorded as the pixel data of the first pixel set and the pixel data of the third pixel set respectively, where the data of the row acquired first is the pixel data of the first pixel set and the data of the row acquired second is the pixel data of the third pixel set. The pixel points of the third pixel set and the pixel points of the first pixel set are two adjacent rows of pixel points in the target area.
In the embodiments of the present application, rows of 5 pixel points and of 90 pixel points are used as examples of a row of pixel points; in practical applications, the number of pixel points contained in the first pixel set may be any other number, and the embodiments of the present application do not limit it. As mentioned above, a fourth image may also be obtained; the data of any row of pixel points of the third target image obtained by the HWC from the CWB memory (denoted as a fifth pixel set) may be recorded as the pixel data of the fifth pixel set, and correspondingly, the data of the row of pixel points of the second target image located in the same row as the fifth pixel set (denoted as a sixth pixel set) is recorded as the pixel data of the sixth pixel set. The subsequent processing may refer to the description of the above embodiments and is not repeated here; a sketch of how the two storage spaces alternate across refreshes follows below.
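The alternating use of the two storage spaces across refreshes (the second target image compared against the first storage space, the third target image compared against the second storage space, and so on) can be sketched as follows. The structure and member names are illustrative assumptions, not part of the embodiment.

```cpp
#include <cstdint>
#include <vector>

// Minimal sketch: the buffer holding the previous target image is always the
// reference, and the other buffer receives the current target image. After
// each refresh the roles are swapped.
struct PingPongBuffers {
    std::vector<uint8_t> first_space;   // the Abuffer
    std::vector<uint8_t> second_space;  // the Bbuffer
    bool previous_in_first = true;      // after the first refresh, the image sits in the Abuffer

    std::vector<uint8_t>& reference() { return previous_in_first ? first_space : second_space; }
    std::vector<uint8_t>& current()   { return previous_in_first ? second_space : first_space; }

    // Call once the current target image has been fully stored and compared.
    void Swap() { previous_in_first = !previous_in_first; }
};
```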
It should be noted that the data of one row of pixel points is not a single value but the RGB values of the 90 pixel points in that row. Whether the data of the n-th row of pixel points of the target image A stored in the Abuffer is the same as the data of the n-th row of pixel points of the target image B stored in the Bbuffer can therefore be determined in one of the following manners; in practical applications, any one of them may be adopted, and other manners not shown in the embodiments of the present application may also be used.
The first manner: calculate a cyclic redundancy check (CRC) value of the data of the n-th row of pixel points of the target image stored in the Abuffer, and calculate a CRC value of the data of the n-th row of pixel points of the target image stored in the Bbuffer.
If the two CRC values are the same, the data of the n-th row of pixel points of the target image stored in the Abuffer is the same as the data of the n-th row of pixel points of the target image stored in the Bbuffer; if the two CRC values are different, the two rows of data are different.
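As an illustration of the first manner, the following sketch compares two row segments by their CRC-32 values. The choice of CRC-32 with the reflected polynomial 0xEDB88320 and the table-less bitwise implementation are assumptions for illustration; the embodiment does not prescribe a particular CRC variant.

```cpp
#include <cstddef>
#include <cstdint>

// Minimal bitwise CRC-32 (reflected polynomial 0xEDB88320), assumed variant.
uint32_t Crc32(const uint8_t* data, std::size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (std::size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; ++bit)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

// Rows are treated as the same when their CRC values match.
bool RowsEqualByCrc(const uint8_t* row_a, const uint8_t* row_b, std::size_t bytes)
{
    return Crc32(row_a, bytes) == Crc32(row_b, bytes);
}
```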
The second manner: calculate a hash value of the data of the n-th row of pixel points of the target image stored in the Abuffer, and calculate a hash value of the data of the n-th row of pixel points of the target image stored in the Bbuffer.
If the two hash values are the same, the data of the n-th row of pixel points of the target image stored in the Abuffer is the same as the data of the n-th row of pixel points of the target image stored in the Bbuffer; if the two hash values are different, the two rows of data are different.
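Similarly, for the second manner, a minimal sketch using a simple FNV-1a hash; the hash function is an assumed choice, since the embodiment does not specify which hash is used.

```cpp
#include <cstddef>
#include <cstdint>

// FNV-1a 64-bit hash of one row of pixel data (assumed hash choice).
uint64_t HashRow(const uint8_t* data, std::size_t len)
{
    uint64_t h = 0xcbf29ce484222325ull;  // FNV offset basis
    for (std::size_t i = 0; i < len; ++i) {
        h ^= data[i];
        h *= 1099511628211ull;           // FNV prime
    }
    return h;
}

// Rows are treated as the same when their hash values match.
bool RowsEqualByHash(const uint8_t* row_a, const uint8_t* row_b, std::size_t bytes)
{
    return HashRow(row_a, bytes) == HashRow(row_b, bytes);
}
```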
In the embodiments of the present application, after the HWC obtains the data of the n-th row of pixel points of the target image from the CWB memory, it may first store the row into the buffer and then compare it with the corresponding row in the other buffer, or it may first compare the row with the corresponding row in the other buffer and then store it into the buffer. The embodiments of the present application do not limit the specific order.
In this embodiment, the Abuffer and the Bbuffer may be different storage areas of a memory of the electronic device. In a specific implementation, two cache spaces each capable of holding the data of the pixel points of one target image may be divided from the memory of the electronic device according to the size of the target image, and used as the Abuffer and the Bbuffer respectively. The embodiments of the present application do not limit the specific positions and sizes of the Abuffer and the Bbuffer.
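A minimal sketch of dividing out the two cache spaces; the 90 x 90 size and 4 bytes per pixel (RGBA8888) are assumptions taken from the running example, not values prescribed by the embodiment.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Two cache spaces, each large enough to hold one target image, serving as
// the Abuffer and the Bbuffer described above.
struct TargetImageBuffers {
    static constexpr std::size_t kBytes = 90 * 90 * 4;  // assumed 90 x 90 RGBA8888 target image
    std::vector<uint8_t> abuffer = std::vector<uint8_t>(kBytes);  // previous target image
    std::vector<uint8_t> bbuffer = std::vector<uint8_t>(kBytes);  // target image being stored
};
```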
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
In the embodiments of the present application, the electronic device may be divided into functional units according to the above method examples; for example, one functional unit may be divided for each function, or two or more functions may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of the present application is schematic and is only a logical function division; other division manners are possible in actual implementation. The following description takes the case of dividing one functional unit for each function as an example:
the electronic device includes:
the brightness value obtaining unit is used for responding to the monitored change of the brightness of the display screen of the electronic equipment and obtaining the brightness value of the display screen;
an image obtaining unit for receiving a first image;
the matting unit is used for acquiring a first target image from the first image, wherein the first target image is an image in a first area;
a noise calculation unit for calculating and obtaining a first image noise based on the brightness value and the first target image;
an image obtaining unit, further configured to receive a second image;
a judging unit, configured to judge whether a second target image on the second image is the same as the first target image, where the second target image is an image in the first region;
the noise calculation unit, further configured to calculate and obtain second image noise based on the brightness value and the second target image if the second target image is different from the first target image.
In this embodiment of the present application, the electronic device may further include other units, and the units of the electronic device together implement the noise monitoring method provided in any of the above embodiments; a minimal interface sketch of this unit division follows below. The roles of the other units are not described here.
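The following C++ sketch translates the unit division listed above into illustrative interfaces. The type and method names are assumptions introduced here for clarity, not identifiers from the embodiment.

```cpp
#include <cstdint>
#include <vector>

// Illustrative image container.
struct Image { std::vector<uint8_t> pixels; int width = 0; int height = 0; };

// Illustrative interface mirroring the functional units described above.
struct NoiseMonitoringDevice {
    // Brightness value obtaining unit: reads the display brightness after a change is monitored.
    virtual int GetBrightnessValue() = 0;
    // Image obtaining unit: receives the refreshed image.
    virtual Image ReceiveImage() = 0;
    // Matting unit: crops the target image (the image in the first area) out of the received image.
    virtual Image GetTargetImage(const Image& frame) = 0;
    // Judging unit: compares the current target image with the previous one.
    virtual bool IsSameAsPrevious(const Image& target) = 0;
    // Noise calculation unit: computes image noise from the brightness value and the target image.
    virtual double CalculateImageNoise(int brightness, const Image& target) = 0;
    virtual ~NoiseMonitoringDevice() = default;
};
```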
It should be noted that, because the content of the above-mentioned execution process and the like is based on the same concept as the embodiment of the method of the present application, specific functions and technical effects thereof can be referred to specifically in the embodiment of the method, and are not described herein again.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps in the foregoing method embodiments may be implemented.
Embodiments of the present application further provide a computer program product, which when run on an electronic device, enables the electronic device to implement the steps in the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the embodiments described above may be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to the electronic device, a recording medium, a computer memory, a read-only memory (ROM), a random-access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash disk, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, the computer-readable medium may not be an electrical carrier signal or a telecommunications signal.
An embodiment of the present application further provides a chip system, where the chip system includes a processor coupled to a memory, and the processor executes a computer program stored in the memory to implement the steps of any of the method embodiments of the present application. The chip system may be a single chip or a chip module composed of a plurality of chips.
In the above embodiments, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described or recited in any embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (16)

1. A method for monitoring noise, applied to an electronic device, includes:
responding to the monitored change of the brightness of the display screen of the electronic equipment, and acquiring the brightness value of the display screen;
receiving a first image, wherein the first image is an image when the brightness of the display screen is changed and is refreshed for the first time;
acquiring a first target image from the first image, wherein the first target image is an image in a first area;
calculating to obtain a first image noise based on the brightness value and the first target image;
receiving a second image, wherein the second image is an image obtained when the brightness of the display screen is changed and is refreshed for the second time;
judging whether a second target image on the second image is the same as the first target image, wherein the second target image is an image in the first area;
and if the second target image is different from the first target image, calculating to obtain second image noise based on the brightness value and the second target image.
2. The method of claim 1, wherein after determining whether the second target image on the second image and the first target image are the same, the method further comprises:
and if the second target image is the same as the first target image, stopping calculating and obtaining the second image noise based on the brightness value and the second target image.
3. The method of claim 1, wherein said determining whether the second target image on the second image and the first target image are the same comprises:
storing a third image on the second image that includes the second target image in a write-back memory;
and judging whether the second target image and the first target image on the third image stored in the write-back memory are the same or not.
4. The method of claim 3, wherein the first target image is stored in a first memory space;
the determining whether the second target image and the first target image on the third image stored in the write-back memory are the same includes:
acquiring pixel data of a first pixel set from the third image stored in the write-back memory, wherein the first pixel set is a set formed by at least one pixel point in a first area on the third image;
storing pixel data of the first set of pixels in a second storage space;
acquiring pixel data of a second pixel set from the pixel data stored in the first storage space, wherein the storage position of the pixel data of the second pixel set in the first storage space is the same as the storage position of the pixel data of the first pixel set in the second storage space;
judging whether the pixel data of the first pixel set and the pixel data of the second pixel set are the same or not;
and if the pixel data of the first pixel set is different from the pixel data of the second pixel set, determining that the second target image is different from the first target image.
5. The method of claim 4, wherein if the pixel data of the first set of pixels and the pixel data of the second set of pixels are not the same, the method further comprises:
acquiring pixel data of a third pixel set from the third image stored in the write-back memory, wherein the third pixel set is a set formed by at least one pixel point in a first region on the third image, and the pixel points in the third pixel set and the pixel points in the first pixel set are two adjacent lines of pixel points in the first region;
storing pixel data of the third set of pixels in the second storage space.
6. The method of claim 5, wherein after storing the pixel data of the third set of pixels in the second storage space, the method further comprises:
and storing the pixel data of each pixel point in the first area on the third image in the second storage space.
7. The method of claim 4, wherein determining whether the pixel data of the first set of pixels and the pixel data of the second set of pixels are the same further comprises:
if the pixel data of the first pixel set is the same as the pixel data of the second pixel set, acquiring the pixel data of a third pixel set from the third image stored in the write-back memory, wherein the third pixel set is a set formed by at least one pixel point in a first area on the third image, and the pixel points in the third pixel set and the pixel points in the first pixel set are two adjacent lines of pixel points in the first area;
storing pixel data of the third set of pixels in the second storage space;
acquiring pixel data of a fourth pixel set from the pixel data stored in the first storage space, wherein the storage position of the pixel data of the fourth pixel set in the first storage space is the same as the storage position of the pixel data of the third pixel set in the second storage space;
judging whether the pixel data of the third pixel set and the pixel data of the fourth pixel set are the same or not;
and if the pixel data of the third pixel set is the same as the pixel data of the fourth pixel set and the pixel data of the third pixel set is the pixel data corresponding to the pixel point in the last row of the first area, determining that the second target image is the same as the first target image.
8. The method of claim 6, wherein after receiving the second image, the method further comprises:
receiving a fourth image;
judging whether a third target image on the fourth image is the same as the second target image, wherein the third target image is an image in the first area;
if the third target image is different from the second target image, calculating to obtain a third image noise based on the brightness value and the third target image;
and if the third target image is the same as the second target image, stopping calculating and obtaining the third image noise based on the brightness value and the third target image.
9. The method of claim 8, wherein the determining whether the third target image and the second target image on the fourth image are the same comprises:
storing a fifth image comprising a third target image on the fourth image in the write-back memory;
acquiring pixel data of a fifth pixel set from the fifth image, wherein the fifth pixel set is a set formed by at least one pixel point in a first region on the fifth image;
storing pixel data of the fifth set of pixels in the first storage space;
acquiring pixel data of a sixth pixel set from the pixel data stored in the second storage space, wherein the storage position of the pixel data of the sixth pixel set in the second storage space is the same as the storage position of the pixel data of the fifth pixel set in the first storage space;
judging whether the pixel data of the fifth pixel set is the same as the pixel data of the sixth pixel set;
and if the pixel data of the fifth pixel set is different from the pixel data of the sixth pixel set, determining that the third target image is different from the second target image.
10. The method of any of claims 4 to 9, wherein the determining whether the pixel data of the first set of pixels and the pixel data of the second set of pixels are the same comprises:
calculating a first cyclic redundancy check value of pixel data of the first set of pixels and a second cyclic redundancy check value of pixel data of the second set of pixels;
judging whether the first cyclic redundancy check value and the second cyclic redundancy check value are equal or not;
if the first cyclic redundancy check value and the second cyclic redundancy check value are equal, the pixel data of the first pixel set is the same as the pixel data of the second pixel set;
and if the first cyclic redundancy check value is not equal to the second cyclic redundancy check value, the pixel data of the first pixel set is different from the pixel data of the second pixel set.
11. The method of any of claims 1-9, wherein the first area is an area on a display screen of the electronic device that is above an ambient light sensor of the electronic device.
12. The method of any of claims 1-9, wherein the electronic device comprises: a first processor, the method comprising:
in response to monitoring that the brightness of the display screen of the electronic device changes, the first processor acquires the brightness value of the display screen through the HWC module of the electronic device;
the first processor receiving, by the HWC module, a first image;
the first processor acquires a first target image from the first image through the HWC module, wherein the first target image is an image in a first area;
the first processor sending, by the HWC module, the first target image to a noise algorithm library of the electronic device;
the first processor obtains first image noise through calculation of the noise algorithm library based on the brightness value and the first target image;
the first processor receiving a second image through the HWC module;
the first processor determines, by the HWC module, whether a second target image on the second image is the same as the first target image, the second target image being an image within the first region;
if the second target image is the same as the first target image, the first processor stopping sending the second target image to the noise algorithm library through the HWC module;
if the second target image is not the same as the first target image, the first processor sends the second target image to the noise algorithm library through the HWC module;
the first processor obtains second image noise through calculation of the noise algorithm library based on the brightness value and the second target image.
13. The method of claim 12, wherein the first processor, after receiving the second image by the HWC module, further comprises:
the first processor sending the second image to a display subsystem of the electronic device through the HWC module;
the first processor stores a third image comprising a second target image on the second image in a write-back memory of the electronic equipment through the display subsystem;
accordingly, the first processor determines, by the HWC module, whether a second target image on the second image is the same as the first target image;
the first processor determines whether the second target image and the first target image on the third image stored in the write-back memory are the same through the HWC module.
14. An electronic device, characterized in that the electronic device comprises a first processor, a display screen and an ambient light sensor, the ambient light sensor being located below the display screen, the first processor being configured to execute a computer program stored in a memory to cause the electronic device to implement the method of any of claims 1 to 13.
15. A chip system comprising a first processor coupled to a memory, the first processor executing a computer program stored in the memory to implement the method of any of claims 1 to 13.
16. A computer-readable storage medium, in which a computer program is stored which, when run on a processor, implements the method of any one of claims 1 to 13.
CN202110657922.XA 2021-06-11 2021-06-11 Noise monitoring method, electronic equipment, chip system and storage medium Active CN113837990B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110657922.XA CN113837990B (en) 2021-06-11 2021-06-11 Noise monitoring method, electronic equipment, chip system and storage medium

Publications (2)

Publication Number Publication Date
CN113837990A CN113837990A (en) 2021-12-24
CN113837990B true CN113837990B (en) 2022-09-30

Family

ID=78962658

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110657922.XA Active CN113837990B (en) 2021-06-11 2021-06-11 Noise monitoring method, electronic equipment, chip system and storage medium

Country Status (1)

Country Link
CN (1) CN113837990B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111754954A (en) * 2020-07-10 2020-10-09 Oppo(重庆)智能科技有限公司 Screen brightness adjusting method and device, storage medium and electronic equipment
CN112541861A (en) * 2019-09-23 2021-03-23 华为技术有限公司 Image processing method, device, equipment and computer storage medium
CN112700377A (en) * 2019-10-23 2021-04-23 华为技术有限公司 Image floodlight processing method and device and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012138043A (en) * 2010-12-28 2012-07-19 Jvc Kenwood Corp Image noise removal method and image noise removal device
CN103916935A (en) * 2012-12-31 2014-07-09 赛龙通信技术(深圳)有限公司 Backlight control system and control method of mobile terminal
CN104809347B (en) * 2015-04-28 2018-08-24 南京巨鲨显示科技有限公司 A kind of implementation method that control display outburst area is shown
CN106847150B (en) * 2017-01-04 2020-11-13 捷开通讯(深圳)有限公司 Device and method for adjusting brightness of display screen
CN107966209B (en) * 2017-11-22 2020-07-07 Oppo广东移动通信有限公司 Ambient light detection method, ambient light detection device, storage medium, and electronic apparatus
CN110769151B (en) * 2019-09-27 2021-10-15 维沃移动通信有限公司 Image processing method, image processing apparatus, electronic device, and medium
CN111371965B (en) * 2020-03-23 2021-09-28 华兴源创(成都)科技有限公司 Image processing method, device and equipment for eliminating interference in brightness shooting of display screen
CN112565915B (en) * 2020-06-04 2023-05-05 海信视像科技股份有限公司 Display apparatus and display method
CN111667800B (en) * 2020-06-16 2021-11-23 广州视源电子科技股份有限公司 Image display parameter adjusting method and device, storage medium and terminal

Also Published As

Publication number Publication date
CN113837990A (en) 2021-12-24

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant