CN111767016A - Display processing method and device

Publication number
CN111767016A
Authority
CN
China
Prior art keywords: gray scale, target, scale calibration, picture, calibration table
Legal status: Granted
Application number: CN202010605542.7A
Other languages: Chinese (zh)
Other versions: CN111767016B (en)
Inventor
张健民
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010605542.7A
Publication of CN111767016A
Priority to PCT/CN2021/093241 (published as WO2022001383A1)
Application granted
Publication of CN111767016B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The application discloses a display processing method and a display processing apparatus, applied to an electronic device. The method includes: predicting a target driving current required for displaying a picture to be displayed; determining a target gray scale calibration table based on the target driving current; compensating the gray scale signals included in the picture to be displayed based on the target gray scale calibration table to obtain a compensated picture; and displaying the compensated picture. With the method and the apparatus, the probability of display distortion of the picture to be displayed can be reduced.

Description

Display processing method and device
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a display processing method and apparatus.
Background
An Active Matrix Organic Light Emitting Diode (AMOLED) screen has the advantages of fast response, self-luminescence and low power consumption, and has become the mainstream display technology of current electronic devices. An AMOLED screen is self-luminous: its brightness and color are produced by independently controlling the currents of the R (red), G (green) and B (blue) sub-pixels. The sub-pixel current depends on the display driving voltage of the AMOLED screen: the larger the display driving voltage, the larger the sub-pixel current and the higher the brightness; conversely, the smaller the current, the lower the brightness.
The AMOLED screen display driving voltage is provided by a direct current to direct current converter (DCDC) chip, and the signals are connected to the display panel through a Flexible Printed Circuit (FPC). Because of the trace resistance, an obvious voltage drop occurs during signal transmission, and this voltage drop can cause display distortion of the picture.
Disclosure of Invention
The embodiment of the application provides a display processing method and device.
In a first aspect, an embodiment of the present application provides a display processing method, which is applied to an electronic device, and the method includes:
predicting a target driving current required by displaying a picture to be displayed;
determining a target gray scale calibration table based on the target drive current;
calibrating the picture to be displayed based on the target gray scale calibration table to obtain a calibrated picture;
and displaying the calibrated picture.
In a second aspect, an embodiment of the present application provides a display processing apparatus, which is applied to an electronic device, and the apparatus includes:
the prediction unit is used for predicting a target driving current required by displaying a picture to be displayed;
a determination unit for determining a target gray scale calibration table based on the target driving current;
the calibration unit is used for calibrating the picture to be displayed based on the target gray scale calibration table to obtain a calibrated picture;
and the display unit is used for displaying the calibrated picture.
In a third aspect, an embodiment of the present application provides an electronic device, where the electronic device includes a processor and a display screen, where:
the processor is used for predicting a target driving current required by displaying a picture to be displayed; determining a target gray scale calibration table based on the target drive current; calibrating the picture to be displayed based on the target gray scale calibration table to obtain a calibrated picture;
and the display screen is used for displaying the calibrated picture.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing the steps in any of the methods of the first aspect of the embodiment of the present application.
In a fifth aspect, this application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in any one of the methods of the first aspect of this application.
In a sixth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiment of the application, the target driving current required for displaying the picture to be displayed is predicted, then the target gray scale calibration table is determined based on the target driving current, then the picture to be displayed is calibrated based on the target gray scale calibration table to obtain the calibrated picture, and finally the calibrated picture is displayed, so that the dynamic real-time compensation and calibration of the picture to be displayed are realized, and the probability of display distortion of the picture to be displayed is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a software structure of an electronic device according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a display processing method according to an embodiment of the present application;
FIG. 4 is a histogram of gray scale distribution according to an embodiment of the present application;
fig. 5 is a schematic diagram of a corresponding relationship between current density and brightness provided in an embodiment of the present application;
fig. 6 is a schematic diagram of the relationship between the sub-pixel area S and SR, SG and SB according to an embodiment of the present application;
FIG. 7 is a schematic illustration provided by an embodiment of the present application;
FIG. 8 is another schematic illustration provided by an embodiment of the present application;
fig. 9 is a schematic structural diagram of a display processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
The electronic device may be a portable electronic device that also contains other functions such as personal digital assistant and/or music player functions, for example a cell phone, a tablet computer, or a wearable electronic device with wireless communication capability (e.g., a smart watch). Exemplary embodiments of the portable electronic device include, but are not limited to, portable electronic devices running iOS, Android, Microsoft, or another operating system. The portable electronic device may also be another portable electronic device, such as a laptop computer. It should also be understood that in other embodiments, the electronic device may not be a portable electronic device but a desktop computer.
In a first section, the software and hardware operating environment of the technical solution disclosed in the present application is described as follows.
Fig. 1 shows a schematic structural diagram of an electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a compass 190, a motor 191, a pointer 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. Wherein the different processing units may be separate components or may be integrated in one or more processors. In some embodiments, the electronic device 101 may also include one or more processors 110. The controller can generate an operation control signal according to the instruction operation code and the time sequence signal to complete the control of instruction fetching and instruction execution. In other embodiments, a memory may also be provided in processor 110 for storing instructions and data. Illustratively, the memory in the processor 110 may be a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. This avoids repeated accesses and reduces the latency of the processor 110, thereby increasing the efficiency with which the electronic device 101 processes data or executes instructions.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM card interface, a USB interface, and/or the like. The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 101, and may also be used to transmit data between the electronic device 101 and peripheral devices. The USB interface 130 may also be used to connect to a headset to play audio through the headset.
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an illustration, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (such as wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), UWB, and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini light-emitting diode (mini-LED), a micro light-emitting diode (micro-LED), a micro organic light-emitting diode (micro-OLED), a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or more display screens 194.
The electronic device 100 may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or more cameras 193.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
Internal memory 121 may be used to store one or more computer programs, including instructions. The processor 110 may execute the above-mentioned instructions stored in the internal memory 121, so as to enable the electronic device 101 to execute the method for displaying page elements provided in some embodiments of the present application, and various applications and data processing. The internal memory 121 may include a program storage area and a data storage area. Wherein, the storage program area can store an operating system; the storage program area may also store one or more applications (e.g., gallery, contacts, etc.), and the like. The storage data area may store data (such as photos, contacts, etc.) created during use of the electronic device 101, and the like. Further, the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage components, flash memory components, Universal Flash Storage (UFS), and the like. In some embodiments, the processor 110 may cause the electronic device 101 to execute the method for displaying page elements provided in the embodiments of the present application, and other applications and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor 110. The electronic device 100 may implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor, etc. Such as music playing, recording, etc.
The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The pressure sensor 180A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic apparatus 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., X, Y and the Z axis) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 100, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The method can also be used for recognizing the posture of the electronic equipment, and is applied to horizontal and vertical screen switching, pedometers and other applications.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, electronic device 100 implements a temperature processing strategy using the temperature detected by temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 performs a reduction in performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold to avoid the low temperature causing the electronic device 100 to shut down abnormally. In other embodiments, when the temperature is lower than a further threshold, the electronic device 100 performs boosting on the output voltage of the battery 142 to avoid abnormal shutdown due to low temperature.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
In the embodiment of the present application, the processor 110 is configured to predict a target driving current required for displaying a to-be-displayed picture; determining a target gray scale calibration table based on the target drive current; calibrating the picture to be displayed based on the target gray scale calibration table to obtain a calibrated picture;
and the display screen 194 is used for displaying the calibrated picture.
It should be noted that the processor 110 and the display screen 194 may also be used for other processes of the techniques described herein, as described with particular reference to the methods described below.
Fig. 2 shows a block diagram of a software structure of the electronic device 100. The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom. The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a short dwell, and does not require user interaction. Such as a notification manager used to inform download completion, message alerts, etc. The notification manager may also be a notification that appears in the form of a chart or scroll bar text at the top status bar of the system, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, prompting text information in the status bar, sounding a prompt tone, vibrating the electronic device, flashing an indicator light, etc.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), media libraries (media libraries), three-dimensional graphics processing libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, etc. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
In the second section, the technical solutions within the protection scope of the claims disclosed in the embodiments of the present application are described below.
Referring to fig. 3, fig. 3 is a flowchart illustrating a display processing method applied to an electronic device according to an embodiment of the present application.
Step 301: a target driving current required for displaying a picture to be displayed is predicted.
Step 302: a target gray scale calibration table is determined based on the target drive current.
Step 303: and calibrating the picture to be displayed based on the target gray scale calibration table to obtain a calibrated picture.
Step 304: and displaying the calibrated picture.
The picture to be displayed is a picture to be displayed on a screen of the electronic device, for example, a certain picture, an interface of a certain application, and the like.
It should be noted that the execution subject of the embodiment of the present application is at least one of an AP, a Display Driver IC (DDIC), and a plug-in display processing chip (such as a Digital Signal Processing (DSP) chip) of an electronic device.
In an implementation manner of the present application, the predicting a target driving current required for displaying a to-be-displayed picture includes:
acquiring a gray scale distribution histogram of the picture to be displayed;
predicting the target driving current based on the grayscale distribution histogram.
Optionally, the gray scale distribution histogram counts the occurrence probabilities of N gray scale signals, where N is an integer greater than 1. For example, if N is 256, the gray scale distribution histogram counts the occurrence probabilities of the 256 gray scale signals 0-255.
For example, for a 480 × 240 picture to be displayed (480 × 240 = 115200 pixels in total), the gray scale signals of the RGB sub-pixels of each pixel are decomposed and counted to obtain the gray scale distribution histogram of the picture to be displayed, as shown in fig. 4; the abscissa of fig. 4 represents the gray scale signal, and the ordinate of fig. 4 represents the occurrence probability.
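For illustration only, the following is a minimal sketch of such a histogram computation, assuming NumPy and an H × W × 3 array of 8-bit gray scale signals; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def grayscale_histogram(frame, n_levels=256):
    """Occurrence probability Pi of each of the N gray scale signals,
    counted over every R/G/B sub-pixel of the frame."""
    counts = np.bincount(frame.reshape(-1), minlength=n_levels)
    return counts / counts.sum()

# Example: a 480 x 240 frame (115200 pixels, 3 sub-pixels each).
frame = np.random.randint(0, 256, size=(240, 480, 3), dtype=np.uint8)
p = grayscale_histogram(frame)
assert p.shape == (256,) and abs(p.sum() - 1.0) < 1e-9
```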
Optionally, the gray scale distribution histogram may be computed by the AP of the electronic device, by the DDIC, by a plug-in display processing chip, or the like.
Optionally, the predicting the target driving current based on the gray-scale distribution histogram includes:
predicting the target driving current based on the gray scale distribution histogram and a first formula;
wherein the first formula is:
I = S × ∑i Pi × (IiR × TR + IiG × TG + IiB × TB), where the sum runs over the N gray scale signals (i = 0, 1, …, N−1);
wherein I is the driving current, IiR is the current density of the red (R) sub-pixel corresponding to the ith gray scale signal, IiG is the current density of the green (G) sub-pixel corresponding to the ith gray scale signal, IiB is the current density of the blue (B) sub-pixel corresponding to the ith gray scale signal, TR is the aperture ratio of the R sub-pixel, TG is the aperture ratio of the G sub-pixel, TB is the aperture ratio of the B sub-pixel, S is the area of a sub-pixel, and Pi is the occurrence probability of the ith gray scale signal.
Specifically, referring to fig. 5, fig. 5 is a schematic diagram of the correspondence between current density and brightness. Since the gray scale of a gray scale signal determines its brightness, the brightness can be obtained from the gray scale, and IiR, IiG and IiB can then be read from fig. 5. TR = SR/S, TG = SG/S and TB = SB/S, where SR is the evaporation area of the luminescent material corresponding to the R sub-pixel, SG is the evaporation area of the luminescent material corresponding to the G sub-pixel, and SB is the evaporation area of the luminescent material corresponding to the B sub-pixel; the relationship between the sub-pixel area S and SR, SG and SB is shown in fig. 6. TR, TG, TB and S are known from the screen hardware parameters of the electronic device (i.e., they are fixed values), and Pi is known from the gray scale distribution histogram, so the target driving current can be calculated by the first formula.
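As a rough sketch of how the first formula could be evaluated (same assumptions and naming caveats as the histogram sketch above; in practice IiR, IiG and IiB would be read from the fig. 5 curve, and TR, TG, TB and S from the panel's hardware parameters):

```python
import numpy as np

def predict_driving_current(p, i_r, i_g, i_b, t_r, t_g, t_b, s):
    """First formula: I = S * sum_i Pi * (IiR*TR + IiG*TG + IiB*TB).

    p            -- occurrence probabilities Pi from the histogram (length N)
    i_r/i_g/i_b  -- length-N current densities of the R/G/B sub-pixels,
                    one per gray level, looked up from the fig. 5 curve
    t_r/t_g/t_b  -- aperture ratios TR = SR/S, TG = SG/S, TB = SB/S
    s            -- sub-pixel area S (fixed hardware parameter)
    """
    return s * float(np.sum(p * (i_r * t_r + i_g * t_g + i_b * t_b)))

# Toy numbers, only to show the call shape:
n = 256
p = np.full(n, 1.0 / n)
density = np.linspace(0.0, 1.0, n)  # stand-in for the fig. 5 look-up
i_target = predict_driving_current(p, density, density, density,
                                   0.2, 0.25, 0.2, 1.0)
```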
In an implementation manner of the present application, the determining a target gray scale calibration table based on the target driving current includes:
and if a first gray scale calibration table corresponding to the target driving current is stored in the electronic equipment, determining the first gray scale calibration table as the target gray scale calibration table.
Optionally, the method further comprises:
if the first gray scale calibration table is not stored in the electronic equipment, determining the target gray scale calibration table based on a second gray scale calibration table and a third gray scale calibration table;
the electronic device is provided with M gray scale calibration tables, wherein M is an integer of 1, the M gray scale calibration tables comprise a first gray scale calibration table and a second gray scale calibration table, the M gray scale calibration tables correspond to M driving currents one by one, the second gray scale calibration table corresponds to a first driving current, the third gray scale calibration table corresponds to a second driving current, and the first driving current and the second driving current are driving currents adjacent to the target driving current in the M driving currents.
Optionally, the second gray scale calibration table includes N first gray scale calibration value sets corresponding one to one to the N gray scale signals, and the third gray scale calibration table includes N second gray scale calibration value sets corresponding one to one to the N gray scale signals; the determining the target gray scale calibration table based on the second gray scale calibration table and the third gray scale calibration table stored in the electronic device includes:
determining N third gray scale calibration value sets corresponding to the N gray scale signals one by one based on the N first gray scale calibration value sets and the N second gray scale calibration value sets;
generating the target gray scale calibration table based on the N gray scale signals and the N third gray scale calibration value sets.
Optionally, each gray scale calibration value set includes 3 gray scale calibration values, and the 3 gray scale calibration values correspond to 3 sub-pixels (i.e., RGB sub-pixels) one to one; determining, based on the N first grayscale calibration value sets and the N second grayscale calibration value sets, N third grayscale calibration value sets that correspond one-to-one to the N grayscale signals, including:
determining the N third gray scale calibration value sets based on the N first gray scale calibration value sets, the N second gray scale calibration value sets, a second formula, a third formula, and a fourth formula.
Wherein the second formula is:
Xi3R = K × (Xi1R + Xi2R);
where Xi3R is the gray scale calibration value corresponding to the R sub-pixel in the third gray scale calibration value set corresponding to the ith gray scale signal, Xi1R is the gray scale calibration value corresponding to the R sub-pixel in the first gray scale calibration value set corresponding to the ith gray scale signal, Xi2R is the gray scale calibration value corresponding to the R sub-pixel in the second gray scale calibration value set corresponding to the ith gray scale signal, and K is a positive number smaller than 1.
The third formula is:
Xi3G = K × (Xi1G + Xi2G);
where Xi3G, Xi1G and Xi2G are the gray scale calibration values corresponding to the G sub-pixel in the third, first and second gray scale calibration value sets corresponding to the ith gray scale signal, respectively.
The fourth formula is:
Xi3B = K × (Xi1B + Xi2B);
where Xi3B, Xi1B and Xi2B are the gray scale calibration values corresponding to the B sub-pixel in the third, first and second gray scale calibration value sets corresponding to the ith gray scale signal, respectively.
K may be a fixed value such as 1/2. Alternatively, K is determined based on the first driving current, the second driving current and the target driving current; specifically, K = I3/(I1 + I2), where I1 and I2 are the driving currents corresponding to the second and third gray scale calibration tables, and I3 is the target driving current required for displaying the picture to be displayed.
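A minimal sketch of this selection/interpolation (step 302), assuming the stored tables are 256 × 3 arrays keyed by driving current; the rounding and the in-range assumption on the target current are illustration choices, not specified by the patent:

```python
import numpy as np

def determine_calibration_table(i_target, stored):
    """stored: {driving current: 256 x 3 array of R/G/B calibration values}."""
    if i_target in stored:                # the first table is stored: use it directly
        return stored[i_target]
    currents = sorted(stored)             # assumes min(currents) < i_target < max(currents)
    i1 = max(c for c in currents if c < i_target)   # adjacent current below
    i2 = min(c for c in currents if c > i_target)   # adjacent current above
    k = i_target / (i1 + i2)                        # K = I3 / (I1 + I2)
    x3 = k * (stored[i1].astype(float) + stored[i2].astype(float))
    return np.clip(np.rint(x3), 0, 255).astype(np.uint8)
```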
For example, assuming that the second gray scale calibration table is shown in table 1, the third gray scale calibration table is shown in table 2, and K is 1/2, the resulting target gray scale calibration table is shown in table 3.
TABLE 1, TABLE 2 and TABLE 3
(Tables 1-3 are rendered as images in the original publication; each lists, for every gray scale signal, the gray scale calibration values of the R, G and B sub-pixels of the second, third and target gray scale calibration tables, respectively.)
Optionally, after the generating the target gray scale calibration table based on the N gray scale signals and the N third gray scale calibration value sets, the method further includes: and storing the target gray scale calibration table and the target driving current in an associated manner.
In an implementation manner of the present application, the calibrating the to-be-displayed picture based on the target gray scale calibration table to obtain a calibrated picture includes:
and performing gray scale calibration on the gray scale signals included in the picture to be displayed based on the gray scale calibration values corresponding to the N gray scale signals included in the target gray scale calibration table to obtain a calibrated picture.
Optionally, the performing gray scale calibration on the gray scale signal included in the to-be-displayed picture based on the gray scale calibration value corresponding to the N gray scale signals included in the target gray scale calibration table to obtain a calibrated picture includes:
determining a gray scale calibration value corresponding to each gray scale signal included in the picture to be displayed based on the target gray scale calibration table;
and updating the gray scale value of each gray scale signal included in the picture to be displayed into the corresponding gray scale calibration value to obtain the calibrated picture.
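A minimal sketch of this per-sub-pixel look-up (step 303), under the same array conventions as the sketches above:

```python
import numpy as np

def apply_calibration_table(frame, table):
    """frame: H x W x 3 uint8 gray scale signals; table: 256 x 3 array whose
    row i holds the calibration values of gray scale signal i for the
    R, G and B sub-pixels."""
    out = np.empty_like(frame)
    for c in range(3):  # R, G, B
        # replace each sub-pixel's gray scale value by its calibration value
        out[..., c] = table[frame[..., c], c]
    return out
```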
For example, assume the target gray scale calibration table is shown in Table 3 and the picture to be displayed includes 256 gray scale signals (0-255) whose gray scale values are 0, 1, 2, 3, …, 254, 255. For the R sub-pixel, the gray scale value of gray scale signal 0 is updated from 0 to 1, that of gray scale signal 1 from 1 to 3, that of gray scale signal 2 from 2 to 5, …, that of gray scale signal 124 from 124 to 127, that of gray scale signal 125 from 125 to 128, that of gray scale signal 126 from 126 to 130, that of gray scale signal 127 from 127 to 131, …, that of gray scale signal 252 from 252 to 253, that of gray scale signal 253 from 253 to 254, that of gray scale signal 254 from 254 to 255, and that of gray scale signal 255 stays at 255. By analogy, the gray scale values of the gray scale signals included in the calibrated picture are obtained, as shown in Table 4.
TABLE 4
(Table 4 is rendered as an image in the original publication; it lists the updated gray scale values of the gray scale signals of the calibrated picture.)
In an implementation manner of the present application, the electronic device stores the first gray scale calibration table, which is determined based on a first data group and a second data group; the driving current corresponding to the first data group is 0, and the driving current corresponding to the second data group is the target driving current.
Specifically, as shown in fig. 7, with the background kept at Gray0 (specifically, a Gray0 picture may be placed underneath), the R/G/B sub-pixels in a set area of the screen (for example, the set area is 10% of the screen and its center point is the screen center point) are swept from gray 0 to gray 255, and the brightness at the screen center is measured during the sweep, so as to obtain 256 brightness values corresponding respectively to the 256 gray levels (i.e., the first data group, as shown in Table 5). Since the background is Gray0 (i.e., the background is all black), the driving current can be considered zero at this time, which is equivalent to a scene with no voltage drop.
As shown in fig. 8, if the background corresponding to the target driving current is Gray127 (the relationship between driving current and background gray scale can be obtained by the first formula), the background is kept at Gray127 (specifically, a Gray127 picture may be placed underneath), the R sub-pixels/G sub-pixels/B sub-pixels in the set area of the screen (for example, the set area is 10% of the screen and its center point is the screen center point) are swept from gray 0 to gray 255, and the brightness at the screen center is measured during the sweep, so as to obtain 3 sets of 256 brightness values corresponding respectively to the 256 gray levels (i.e., the second data group, as shown in Table 6).
TABLE 5
(Table 5 is rendered as an image in the original publication; like Table 6, it lists the gray scale brightness of the R, G and B sub-pixels for each gray scale signal, measured with a Gray0 background.)
TABLE 6
Gray scale signal | Gray scale brightness of R sub-pixel | Gray scale brightness of G sub-pixel | Gray scale brightness of B sub-pixel
0 | 0 | 0 | 0
…… | …… | …… | ……
90 | 9% | 8% | 9%
…… | …… | …… | ……
140 | 23% | 23% | 24%
…… | …… | …… | ……
Optionally, the first data group includes N first grayscale brightness sets, the N first grayscale brightness sets correspond to N grayscale signals one to one, and the second data group includes N second grayscale brightness sets, the N second grayscale brightness sets correspond to the N grayscale signals one to one;
the first gray scale calibration table is determined based on a first data group and a second data group, and comprises:
the first gray scale calibration table is obtained by correspondingly associating N target gray scale calibration value sets with the N gray scale signals, wherein the N target gray scale calibration value sets correspond to the N gray scale signals one by one;
the N target gray scale calibration value sets are determined based on N third gray scale brightness sets, and the N target gray scale calibration value sets correspond to the N third gray scale brightness sets one by one;
the N third gray scale brightness sets are obtained by respectively compensating the N second gray scale brightness sets on the basis of N gray scale compensation brightness sets, and the N gray scale compensation brightness sets correspond to the N gray scale signals one by one;
each gray scale compensation brightness set is determined based on the first gray scale brightness set and the second gray scale brightness set of the corresponding gray scale signal.
Each first gray scale brightness set includes 3 first gray scale brightness values, which correspond one to one to the 3 sub-pixels (i.e., the RGB sub-pixels). Each second gray scale brightness set includes 3 second gray scale brightness values, which correspond one to one to the 3 sub-pixels. Each target gray scale calibration value set includes 3 target gray scale calibration values, which correspond one to one to the 3 sub-pixels. Each third gray scale brightness set includes 3 third gray scale brightness values, which correspond one to one to the 3 sub-pixels. Each gray scale compensation brightness set includes 3 gray scale compensation brightness values, which correspond one to one to the 3 sub-pixels.
Optionally, that each gray scale compensation brightness set is determined based on the first gray scale brightness set and the second gray scale brightness set of the corresponding gray scale signal includes: each gray scale compensation brightness set is determined based on the first gray scale brightness set and the second gray scale brightness set of the corresponding gray scale signal, together with a first brightness compensation formula, a second brightness compensation formula and a third brightness compensation formula.
The first brightness compensation formula is: L2R + L3R = L1R, where L2R is the second gray scale brightness corresponding to the R sub-pixel, L1R is the first gray scale brightness corresponding to the R sub-pixel, and L3R is the gray scale compensation brightness corresponding to the R sub-pixel.
The second brightness compensation formula is: L2G + L3G = L1G, where L2G is the second gray scale brightness corresponding to the G sub-pixel, L1G is the first gray scale brightness corresponding to the G sub-pixel, and L3G is the gray scale compensation brightness corresponding to the G sub-pixel.
The third brightness compensation formula is: L2B + L3B = L1B, where L2B is the second gray scale brightness corresponding to the B sub-pixel, L1B is the first gray scale brightness corresponding to the B sub-pixel, and L3B is the gray scale compensation brightness corresponding to the B sub-pixel.
Optionally, the N third grayscale luminance sets are obtained by respectively compensating the N second grayscale luminance sets based on the N grayscale compensation luminance sets, and the method includes:
the N third gray scale brightness sets are determined based on each gray scale compensation brightness set, a second gray scale brightness set corresponding to each gray scale compensation brightness set, a fifth formula, a sixth formula and a seventh formula.
Wherein the fifth formula: l is4R+L5R=L6RSaid L is4RIs the second gray scale brightness corresponding to the R sub-pixel, L5RCompensating the brightness for the gray scale corresponding to the R sub-pixel, L6RThe third gray scale brightness corresponding to the R sub-pixel.
Wherein the sixth formula: l is4G+L5G=L6GSaid L is4GFor the second gray scale luminance corresponding to the G sub-pixel, L5GCompensating brightness for the gray scale corresponding to the G sub-pixel, L6GThe third gray scale brightness corresponding to the G sub-pixel.
Wherein the seventh formula: l is4B+L5B=L6BSaid L is4BA second gray scale luminance corresponding to the B sub-pixel, L5BCompensating the brightness for the gray scale corresponding to the B sub-pixel, L6BAnd the third gray scale brightness corresponding to the B sub-pixel.
Optionally, the N sets of target grayscale calibration values are determined based on N sets of third grayscale luminances, including: the N target gray scale calibration value sets are obtained by performing gray scale value conversion on the third gray scale brightness included in each third gray scale brightness set.
Specifically, as the driving current increases, the voltage drop increases and the brightness decreases. The gray scale brightness of the second data group is therefore compensated with the first data group as the reference; that is, gray scale compensation makes the gray scale brightness of gray levels 0 to 255 under the Gray127 background consistent with their gray scale brightness under the Gray0 background, which yields the gray scale calibration table under the Gray127 background.
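A minimal sketch of how such a calibration table could be derived from the two data groups, assuming each luminance curve is a monotonically increasing length-256 array per sub-pixel; using a nearest-gray-level search for the "gray scale value conversion" step is an assumption, not the patent's specified procedure:

```python
import numpy as np

def build_calibration_table(l1, l2):
    """l1: 256 x 3 luminances of the first data group (Gray0 background, no voltage drop)
    l2: 256 x 3 luminances of the second data group (loaded background, e.g. Gray127)
    Per the compensation formulas, the target luminance is L2 + L3 = L1, so each
    gray level is remapped to the loaded gray level whose luminance reaches the
    Gray0-background reference."""
    table = np.empty((256, 3), dtype=np.uint8)
    for c in range(3):  # R, G, B sub-pixels
        idx = np.searchsorted(l2[:, c], l1[:, c])  # smallest g' with L2(g') >= L1(g)
        table[:, c] = np.clip(idx, 0, 255)
    return table
```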
For example (see Table 5 and, for the second data group, Table 6): the gray scale brightness of the R sub-pixel at gray scale signal 90 is 10% as the reference but only 9% under the Gray127 background, so 10 gray levels need to be added; that of the G sub-pixel at gray scale signal 90 is 9% as the reference but 8% under the Gray127 background, so 7 gray levels need to be added; and that of the B sub-pixel at gray scale signal 90 is 10% as the reference but 9% under the Gray127 background, so 9 gray levels need to be added. Similarly, the gray scale brightness of the R sub-pixel at gray scale signal 140 is 25% as the reference but 23% under the Gray127 background, so 25 gray levels need to be added; that of the G sub-pixel at gray scale signal 140 is 24% as the reference but 23% under the Gray127 background, so 20 gray levels need to be added; and the B sub-pixel at gray scale signal 140 needs 23 additional gray levels. The gray scale calibration table under the Gray127 background obtained by analogy is shown in Table 7.
TABLE 7
Gray scale signal | Gray scale calibration value of R sub-pixel | Gray scale calibration value of G sub-pixel | Gray scale calibration value of B sub-pixel
0 | 0 | 0 | 0
…… | …… | …… | ……
90 | 100 | 97 | 99
…… | …… | …… | ……
140 | 165 | 160 | 163
…… | …… | …… | ……
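The arithmetic behind the Table 7 rows can be reproduced directly from the example above. The following Python snippet is an illustrative reconstruction, not code from the patent: the increment dictionary simply restates the example values, and clipping at gray scale 255 is an added assumption.

```python
# Illustrative reconstruction of the Table 7 rows: each calibration value is
# the original gray scale signal plus the per-channel increment from the
# example above. The clip at 255 is an assumption, since calibration values
# cannot exceed the maximum gray scale.

increments = {          # gray scale signal -> (dR, dG, dB) increments
    90:  (10, 7, 9),
    140: (25, 20, 23),
}

calibration_table = {
    gray: tuple(min(gray + d, 255) for d in deltas)
    for gray, deltas in increments.items()
}

assert calibration_table[90] == (100, 97, 99)     # Table 7 row for signal 90
assert calibration_table[140] == (165, 160, 163)  # Table 7 row for signal 140
```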
It should be noted that the second gray scale calibration table and the third gray scale calibration table are determined in the same manner as the first gray scale calibration table, which is not repeated here. In addition, gray scale calibration tables corresponding to other driving currents can be obtained in the same manner as the first gray scale calibration table and stored in the electronic device.
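When no stored table matches the target driving current, the target table can be determined from the two stored tables whose driving currents are adjacent to it, as claims 7 and 8 below describe. The sketch that follows assumes linear interpolation between the two adjacent tables; the patent only states that the target table is determined from them, so the interpolation scheme and all names here are assumptions.

```python
# Hypothetical derivation of a calibration table for an unstored driving
# current by linear interpolation between the two stored tables whose
# driving currents bracket the target (cf. claims 7 and 8).

def interpolate_tables(i_target, i_lo, table_lo, i_hi, table_hi):
    """Tables map gray scale signal -> (R, G, B) calibration values."""
    w = (i_target - i_lo) / (i_hi - i_lo)   # interpolation weight in [0, 1]
    return {
        gray: tuple(round((1 - w) * a + w * b)
                    for a, b in zip(table_lo[gray], table_hi[gray]))
        for gray in table_lo
    }
```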
It can be seen that, in the embodiment of the present application, the target driving current required for displaying a picture to be displayed is predicted, a target gray scale calibration table is determined based on the target driving current, the gray scale signals included in the picture to be displayed are compensated based on the target gray scale calibration table to obtain a compensated picture, and the compensated picture is displayed. The gray scale signals of the picture to be displayed are thus dynamically compensated and calibrated in real time, which reduces the probability of display distortion of the picture to be displayed.
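Putting the four steps together, a possible shape of the whole flow is sketched below. predict_driving_current follows the structure of the first formula in claim 3, while frame.histogram(), tables.lookup_or_interpolate(), frame.remap_gray_scales() and display.show() are hypothetical stand-ins for the surrounding system, not APIs defined by the patent.

```python
# End-to-end sketch of the described flow; all component interfaces are
# illustrative stand-ins.

def predict_driving_current(histogram, densities, apertures, area):
    """First-formula-style estimate:
    I = sum_i P_i * S * (I_iR*T_R + I_iG*T_G + I_iB*T_B),
    where histogram[i] is P_i and densities[i] is (I_iR, I_iG, I_iB)."""
    t_r, t_g, t_b = apertures
    return sum(p * area * (d[0] * t_r + d[1] * t_g + d[2] * t_b)
               for p, d in zip(histogram, densities))

def process_frame(frame, tables, display, densities, apertures, area):
    i_target = predict_driving_current(frame.histogram(),
                                       densities, apertures, area)
    table = tables.lookup_or_interpolate(i_target)  # cf. claims 4 and 7
    calibrated = frame.remap_gray_scales(table)     # per-signal substitution
    display.show(calibrated)
```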
It will be appreciated that, in order to implement the above functions, the electronic device includes corresponding hardware and/or software modules for performing the respective functions. In combination with the exemplary algorithm steps described in connection with the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In this embodiment, the electronic device may be divided into functional modules according to the above method examples; for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware. It should be noted that the division of the modules in this embodiment is schematic and is only a logical function division; other division manners are possible in actual implementation.
In the case where functional modules are divided corresponding to the respective functions, fig. 9 shows a schematic diagram of a display processing apparatus. As shown in fig. 9, the display processing apparatus 900 is applied to an electronic device and may include: a prediction unit 901, a determination unit 902, a compensation unit 903 and a display unit 904.
Prediction unit 901 may be used to support the electronic device in performing step 301 described above, and/or other processes for the techniques described herein.
Determination unit 902 may be used to support the electronic device in performing step 302 described above, and/or other processes for the techniques described herein.
Compensation unit 903 may be used to support the electronic device in performing step 303 described above, and/or other processes for the techniques described herein.
Display unit 904 may be used to support the electronic device in performing step 304 described above, and/or other processes for the techniques described herein.
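As a reading aid only, the apparatus of fig. 9 might be rendered as four cooperating units, one per method step; the class and method names below are invented for illustration and do not appear in the patent.

```python
# Schematic rendering of display processing apparatus 900 (fig. 9); the
# four units correspond to steps 301-304 of the method.

class DisplayProcessingApparatus:
    def __init__(self, prediction_unit, determination_unit,
                 compensation_unit, display_unit):
        self.prediction_unit = prediction_unit        # step 301
        self.determination_unit = determination_unit  # step 302
        self.compensation_unit = compensation_unit    # step 303
        self.display_unit = display_unit              # step 304

    def process(self, frame):
        current = self.prediction_unit.predict(frame)
        table = self.determination_unit.determine(current)
        compensated = self.compensation_unit.compensate(frame, table)
        self.display_unit.show(compensated)
```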
It should be noted that all relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
The electronic device provided by this embodiment is configured to execute the above display processing method, and can therefore achieve the same effects as the method embodiments above.
In the case where an integrated unit is employed, the electronic device may include a processing module, a storage module and a communication module. The processing module may be configured to control and manage actions of the electronic device, for example, to support the electronic device in performing the steps performed by the prediction unit 901, the determination unit 902, the compensation unit 903 and the display unit 904. The storage module may be configured to store program code and data of the electronic device. The communication module may be configured to support communication between the electronic device and other devices.
The processing module may be a processor or a controller, which may implement or execute the various illustrative logical blocks, modules and circuits described in connection with the present disclosure. The processor may also be a combination that implements computing functions, for example, a combination of one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor. The storage module may be a memory. The communication module may specifically be a radio frequency circuit, a Bluetooth chip, a Wi-Fi chip, or another device that interacts with other electronic devices.
In an embodiment, when the processing module is a processor and the storage module is a memory, the electronic device according to this embodiment may be a device having the structure shown in fig. 1.
The present embodiment also provides a computer storage medium, in which computer instructions are stored, and when the computer instructions are run on an electronic device, the electronic device is caused to execute the above related method steps to implement the display processing method in the above embodiment.
The present embodiment also provides a computer program product, which when running on a computer, causes the computer to execute the relevant steps described above, so as to implement the display processing method in the above embodiment.
In addition, embodiments of the present application also provide an apparatus, which may be specifically a chip, a component or a module, and may include a processor and a memory connected to each other; the memory is used for storing computer execution instructions, and when the device runs, the processor can execute the computer execution instructions stored in the memory, so that the chip can execute the display processing method in the above method embodiments.
The electronic device, the computer storage medium, the computer program product, or the chip provided in this embodiment are all configured to execute the corresponding method provided above, so that the beneficial effects achieved by the electronic device, the computer storage medium, the computer program product, or the chip may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
Through the description of the above embodiments, those skilled in the art will understand that, for convenience and simplicity of description, only the division into the above functional modules is used as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only a logical function division, and other division manners are possible in actual implementation; for example, a plurality of units or components may be combined or integrated into another apparatus, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
Units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed to a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor (processor) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. A display processing method, applied to an electronic device, the method comprising:
predicting a target driving current required by displaying a picture to be displayed;
determining a target gray scale calibration table based on the target driving current;
calibrating the picture to be displayed based on the target gray scale calibration table to obtain a calibrated picture;
and displaying the calibrated picture.
2. The method according to claim 1, wherein predicting the target driving current required for displaying the picture to be displayed comprises:
acquiring a gray scale distribution histogram of the picture to be displayed;
predicting the target driving current based on the grayscale distribution histogram.
3. The method of claim 2, wherein the histogram of gray scale distribution is used to count the probability of occurrence of N gray scale signals, wherein N is an integer greater than 1; the predicting the target driving current based on the gray-scale distribution histogram includes:
predicting the target driving current based on the gray scale distribution histogram and a first formula;
wherein the first formula is:
I = Σ_{i=1}^{N} P_i · S · (I_iR · T_R + I_iG · T_G + I_iB · T_B)
wherein I is the driving current, I_iR is the current density of the red R sub-pixel corresponding to the ith gray scale signal, I_iG is the current density of the green G sub-pixel corresponding to the ith gray scale signal, I_iB is the current density of the blue B sub-pixel corresponding to the ith gray scale signal, T_R is the aperture ratio of the R sub-pixel, T_G is the aperture ratio of the G sub-pixel, T_B is the aperture ratio of the B sub-pixel, S is the area of a sub-pixel, and P_i is the probability of occurrence of the ith gray scale signal.
4. The method of any one of claims 1-3, wherein determining a target gray scale calibration table based on the target driving current comprises:
and if a first gray scale calibration table corresponding to the target driving current is stored in the electronic equipment, determining the first gray scale calibration table as the target gray scale calibration table.
5. The method of claim 4, wherein the first gray scale calibration table is determined based on a first data group and a second data group, the first data group corresponding to a driving current of 0, and the second data group corresponding to the target driving current.
6. The method of claim 5, wherein the first data group comprises N first gray scale luminance sets in one-to-one correspondence with N gray scale signals, and the second data group comprises N second gray scale luminance sets in one-to-one correspondence with the N gray scale signals;
the first gray scale calibration table is determined based on a first data group and a second data group, and comprises:
the first gray scale calibration table is obtained by correspondingly associating N target gray scale calibration value sets with the N gray scale signals, wherein the N target gray scale calibration value sets are in one-to-one correspondence with the N gray scale signals;
the N target gray scale calibration value sets are determined based on N third gray scale luminance sets, and the N target gray scale calibration value sets are in one-to-one correspondence with the N third gray scale luminance sets;
the N third gray scale luminance sets are obtained by respectively compensating the N second gray scale luminance sets based on N gray scale compensation luminance sets, and the N gray scale compensation luminance sets are in one-to-one correspondence with the N gray scale signals;
each gray scale compensation luminance set is determined based on the first gray scale luminance set and the second gray scale luminance set of the corresponding gray scale signal.
7. The method according to any one of claims 4-6, further comprising:
if the first gray scale calibration table is not stored in the electronic equipment, determining the target gray scale calibration table based on a second gray scale calibration table and a third gray scale calibration table;
the electronic device is provided with M gray scale calibration tables, wherein the M gray scale calibration tables comprise the second gray scale calibration table and the third gray scale calibration table, the M gray scale calibration tables are in one-to-one correspondence with M driving currents, the second gray scale calibration table corresponds to a first driving current, the third gray scale calibration table corresponds to a second driving current, and the first driving current and the second driving current are the driving currents adjacent to the target driving current among the M driving currents.
8. The method of claim 7, wherein the second gray scale calibration table comprises N first gray scale calibration value sets in one-to-one correspondence with the N gray scale signals, and the third gray scale calibration table comprises N second gray scale calibration value sets in one-to-one correspondence with the N gray scale signals; the determining the target gray scale calibration table based on the second gray scale calibration table and the third gray scale calibration table stored in the electronic device comprises:
determining, based on the N first gray scale calibration value sets and the N second gray scale calibration value sets, N third gray scale calibration value sets in one-to-one correspondence with the N gray scale signals;
generating the target gray scale calibration table based on the N gray scale signals and the N third gray scale calibration value sets.
9. The method according to any one of claims 1 to 8, wherein the calibrating the to-be-displayed picture based on the target gray scale calibration table to obtain a calibrated picture comprises:
and performing gray scale calibration on the gray scale signals included in the picture to be displayed based on the gray scale calibration values corresponding to the N gray scale signals included in the target gray scale calibration table to obtain a calibrated picture.
10. The method according to claim 9, wherein the performing gray scale calibration on the gray scale signals included in the to-be-displayed picture based on the gray scale calibration values corresponding to the N gray scale signals included in the target gray scale calibration table to obtain a calibrated picture comprises:
determining a gray scale calibration value corresponding to each gray scale signal included in the picture to be displayed based on the target gray scale calibration table;
and updating the gray scale value of each gray scale signal included in the picture to be displayed into the corresponding gray scale calibration value to obtain the calibrated picture.
11. A display processing apparatus applied to an electronic device, the apparatus comprising:
the prediction unit is used for predicting a target driving current required by displaying a picture to be displayed;
a determination unit for determining a target gray scale calibration table based on the target driving current;
the calibration unit is used for calibrating the picture to be displayed based on the target gray scale calibration table to obtain a calibrated picture;
and the display unit is used for displaying the calibrated picture.
12. An electronic device, comprising a processor and a display screen, wherein:
the processor is used for predicting a target driving current required by displaying a picture to be displayed; determining a target gray scale calibration table based on the target drive current; calibrating the picture to be displayed based on the target gray scale calibration table to obtain a calibrated picture;
and the display screen is used for displaying the calibrated picture.
13. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-10.
14. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any one of claims 1-10.
CN202010605542.7A 2020-06-29 2020-06-29 Display processing method and device Active CN111767016B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010605542.7A CN111767016B (en) 2020-06-29 2020-06-29 Display processing method and device
PCT/CN2021/093241 WO2022001383A1 (en) 2020-06-29 2021-05-12 Display processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010605542.7A CN111767016B (en) 2020-06-29 2020-06-29 Display processing method and device

Publications (2)

Publication Number Publication Date
CN111767016A true CN111767016A (en) 2020-10-13
CN111767016B CN111767016B (en) 2023-09-26

Family

ID=72724034

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010605542.7A Active CN111767016B (en) 2020-06-29 2020-06-29 Display processing method and device

Country Status (2)

Country Link
CN (1) CN111767016B (en)
WO (1) WO2022001383A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112581909A (en) * 2020-12-30 2021-03-30 北京奕斯伟计算技术有限公司 Display compensation method and device and display device
WO2022001383A1 (en) * 2020-06-29 2022-01-06 Oppo广东移动通信有限公司 Display processing method and apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160300527A1 (en) * 2015-04-10 2016-10-13 Apple Inc. Luminance uniformity correction for display panels
CN107316608A (en) * 2017-08-17 2017-11-03 深圳市华星光电半导体显示技术有限公司 The driving method and device of a kind of organic light emitting diode display
CN108877676A (en) * 2018-08-07 2018-11-23 京东方科技集团股份有限公司 Voltage-drop compensation method and device thereof, display device
CN110767170A (en) * 2019-11-05 2020-02-07 深圳市华星光电半导体显示技术有限公司 Picture display method and picture display device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105280143B (en) * 2014-05-27 2018-05-18 西安宏祐图像科技有限公司 A kind of removing method of three grid pixel liquid crystal display panel Mura
CN104318900B (en) * 2014-11-18 2016-08-24 京东方科技集团股份有限公司 A kind of organic electroluminescence display device and method of manufacturing same and method
CN111767016B (en) * 2020-06-29 2023-09-26 Oppo广东移动通信有限公司 Display processing method and device


Also Published As

Publication number Publication date
WO2022001383A1 (en) 2022-01-06
CN111767016B (en) 2023-09-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant