CN115829841A - Screen burn-in prevention method, device and equipment - Google Patents


Info

Publication number
CN115829841A
Authority
CN
China
Prior art keywords: processed, image frame, sub, motion, pixel
Prior art date
Legal status: Pending
Application number
CN202211658636.6A
Other languages
Chinese (zh)
Inventor
葛中峰
查林
Current Assignee
Qingdao Xinxin Microelectronics Technology Co Ltd
Original Assignee
Qingdao Xinxin Microelectronics Technology Co Ltd
Application filed by Qingdao Xinxin Microelectronics Technology Co Ltd filed Critical Qingdao Xinxin Microelectronics Technology Co Ltd
Priority to CN202211658636.6A
Publication of CN115829841A

Landscapes

  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)

Abstract

The application discloses a screen burn-in prevention method, device and equipment. The method comprises: acquiring a current image frame to be processed and dividing it into a plurality of sub-image frames to be processed; determining the motion degrees of the image frame to be processed and of the plurality of sub-image frames to be processed, based on a plurality of reference sub-image frames contained in at least one reference image frame within a preset time length before the image frame to be processed; determining the respective motion factors of the image frame to be processed and of the plurality of sub-image frames to be processed from their respective motion degrees; obtaining a target pixel value of each pixel point in the image frame to be processed based on these motion factors and the pixel value of each pixel point in the image frame to be processed; and adjusting downward the pixel value of each pixel point in the image frame to be processed based on its target pixel value, which effectively improves the burn-in prevention effect.

Description

Screen burn-in prevention method, device and equipment
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a method, an apparatus, and a device for preventing screen burn-in.
Background
Due to the limitations of current OLED materials, prolonged heating or local high brightness easily causes severe changes in the material's characteristics, so that its molecules can no longer deflect correctly and the screen degrades. If the burn-in is brief, the screen can recover; if it is too severe, the damage is irreversible and the screen is scrapped. Burn-in is fundamentally a problem of the OLED material: it cannot withstand high temperature and high brightness for long periods, or its molecules deform and can no longer deflect normally. The most fundamental solution to burn-in is to find more suitable materials that resist high temperature and heat, or to improve the manufacturing process to achieve such resistance. Since the limitations of materials and process cannot yet be overcome, current burn-in prevention methods mainly consist of two approaches: reducing temperature and reducing brightness.
In the related art, processing of locally bright regions is often adopted to avoid burn-in caused by the screen heating when part of the display stays bright for a long time. However, such local processing easily introduces faults at the boundary between still areas and moving areas, which are processed differently, so the brightness of the display cannot be controlled accurately and the burn-in prevention effect is not achieved.
Disclosure of Invention
The present application aims to provide a screen burn-in prevention method, device and equipment, so as to solve the problem in the related art that the brightness of the display cannot be accurately controlled and the burn-in prevention effect cannot be achieved.
In a first aspect, the present application provides a method of preventing burn-in, the method comprising:
acquiring a current image frame to be processed, and dividing the image frame to be processed into a plurality of sub image frames to be processed;
determining the motion degrees of the image frame to be processed and the plurality of sub-image frames to be processed based on a plurality of reference sub-image frames contained in at least one reference image frame in a preset time length before the image frame to be processed;
determining a motion factor of the image frame to be processed based on the motion degree of the image frame to be processed; determining a motion factor of each sub-image frame to be processed based on the motion degrees of the plurality of sub-image frames to be processed;
obtaining a target pixel value of each pixel point in the image frame to be processed based on the motion factors of the image frame to be processed and of each sub-image frame to be processed, and the pixel value of each pixel point in the image frame to be processed;
and adjusting downward the pixel value of each pixel point in the image frame to be processed based on the target pixel value of each pixel point in the image frame to be processed.
In one possible implementation, the determining, based on a plurality of reference sub-image frames included in each of at least one reference image frame within a preset time period before the image frame to be processed, a degree of motion of the plurality of sub-image frames to be processed includes:
for each sub-image frame to be processed, respectively executing the following operations:
acquiring a first pixel value of a reference sub-image frame corresponding to the sub-image frame to be processed;
determining a second pixel value of the sub-image frame to be processed based on the pixel values of all pixel points in the sub-image frame to be processed;
determining a pixel value difference between the to-be-processed sub-image frame and the reference sub-image frame based on the first pixel value and the second pixel value;
and determining the motion degree of the sub-image frame to be processed based on the pixel value difference and at least one preset difference threshold value.
In a possible implementation manner, the determining a second pixel value of the to-be-processed sub-image frame based on pixel values of all pixel points in the to-be-processed sub-image frame includes:
obtaining an average pixel value of the pixels in the sub-image frame to be processed based on the pixel values of all the pixels in the sub-image frame to be processed and the number of the pixels in the sub-image frame to be processed;
and determining a second pixel value of the sub-image frame to be processed based on the pixel value of the pixel point with the maximum pixel value in the sub-image frame to be processed, the pixel value of the pixel point with the minimum pixel value and the average pixel value of the pixel points in the sub-image frame to be processed.
In a possible implementation, the determining the motion degree of the image frame to be processed includes:
determining the number of the sub image frames to be processed with changed pixel values based on the pixel value difference values between the plurality of sub image frames to be processed and the plurality of reference sub image frames;
determining the occupation ratio of the sub-image frames to be processed with the changed pixel values based on the number of the sub-image frames to be processed with the changed pixel values and the number of the plurality of sub-image frames to be processed;
and determining the motion degree of the image frame to be processed based on the ratio of the sub image frames to be processed with the changed pixel values and the motion degree of each sub image frame to be processed.
In a possible implementation manner, the determining a motion factor of the image frame to be processed based on the motion degree of the image frame to be processed, and determining a motion factor of each sub-image frame to be processed based on the motion degrees of the plurality of sub-image frames to be processed, includes:
determining a motion factor corresponding to the image frame to be processed based on a preset corresponding relation between the motion degree of the image frame to be processed and the motion factor of the image frame to be processed;
and determining a motion factor corresponding to each sub image frame to be processed based on the corresponding relation between the pre-configured motion degree of the sub image frame to be processed and the motion factor of the sub image frame to be processed, and recording the motion factor as a first motion factor.
In one possible embodiment, the method further comprises:
for each sub-image frame to be processed, respectively executing the following operations:
acquiring reference motion factors of reference sub-image frames corresponding to the sub-image frames to be processed in each reference sub-image frame of a previous image frame of the image frames to be processed;
and performing time filtering processing on the sub-image frame to be processed based on the first motion factor corresponding to the sub-image frame to be processed and the reference motion factor, and determining a second motion factor corresponding to the sub-image frame to be processed.
In a possible implementation manner, the determining a target pixel value of each pixel point in the image frame to be processed based on the motion factors of the image frame to be processed and the sub-image frames to be processed and the pixel value of each pixel point in the image frame to be processed includes:
respectively multiplying the motion factor of the image frame to be processed with the second motion factor of each sub-image frame to be processed, and then performing normalization processing to obtain a target motion factor of each sub-image frame to be processed;
carrying out interpolation amplification processing on the target motion factor of each sub-image frame to be processed to obtain the target motion factor of each pixel point in the image frame to be processed;
and determining a target pixel value of each pixel point in the image frame to be processed based on the target motion factor of each pixel point in the image frame to be processed and the pixel value of each pixel point in the image frame to be processed.
In a possible implementation manner, the performing interpolation amplification processing on the target motion factor of each sub-image frame to be processed to obtain the target motion factor of each pixel point in the image frame to be processed includes:
determining a step length in a first direction and a step length in a second direction required for amplifying each sub-image frame to be processed to the image frame to be processed based on the size of the image frame to be processed and the number of the sub-image frames to be processed; wherein the first direction and the second direction are perpendicular;
respectively executing the following operations for each pixel point in the image frame to be processed:
determining position information of a sampling point corresponding to the pixel point and a target motion factor corresponding to the sampling point based on the position information of the pixel point, the step length in the first direction and the step length in the second direction; the sampling points are a plurality of pixel points in the neighborhood of the pixel points;
determining weight coefficients respectively corresponding to the pixel points and the sampling points based on the position information of the sampling points corresponding to the pixel points;
and determining the target motion factor of the pixel point based on the weight coefficient respectively corresponding to the pixel point and the sampling point and the target motion factor corresponding to the sampling point.
In one possible embodiment, the method further comprises:
and carrying out low-pass filtering processing on the target motion factor of each pixel point in the image frame to be processed.
In a second aspect, the present application provides a burn-in prevention device, the device comprising:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a current image frame to be processed and dividing the image frame to be processed into a plurality of sub image frames to be processed;
the motion degree determining module is used for determining the motion degrees of the image frame to be processed and the plurality of sub-image frames to be processed based on a plurality of reference sub-image frames contained in at least one reference image frame in a preset time length before the image frame to be processed;
the motion factor determining module is used for determining a motion factor of the image frame to be processed based on the motion degree of the image frame to be processed; determining a motion factor of each sub image frame to be processed based on the motion degrees of the plurality of sub image frames to be processed;
the pixel value adjusting module is used for obtaining a target pixel value of each pixel point in the image frame to be processed based on the motion factors of the image frame to be processed and of each sub-image frame to be processed, and the pixel value of each pixel point in the image frame to be processed; and adjusting downward the pixel value of each pixel point in the image frame to be processed based on the target pixel value of each pixel point.
In a third aspect, the present application provides a display device comprising:
a display interface, a processor, and a memory;
the display interface is used for displaying the image frame to be processed and the plurality of sub image frames to be processed;
the memory to store the processor-executable instructions;
the processor is configured to execute the instructions to implement a method of burn-in prevention as described in any of the first aspects above.
In a fourth aspect, the present application provides a computer-readable storage medium, wherein instructions, when executed by an electronic device, enable the electronic device to perform the method for preventing burn-in as described in any one of the above first aspects.
In a fifth aspect, the present application provides a computer program product comprising a computer program:
the computer program, when executed by a processor, implements a method of burn-in prevention as described in any of the first aspects above.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
the method comprises the steps of obtaining a current image frame to be processed, and dividing the image frame to be processed into a plurality of sub image frames to be processed; determining the motion degrees of the image frame to be processed and the plurality of sub-image frames to be processed based on a plurality of reference sub-image frames contained in at least one reference image frame in a preset time length before the image frame to be processed; determining a motion factor of the image frame to be processed based on the motion degree of the image frame to be processed; determining a motion factor of each sub-image frame to be processed based on the motion degrees of the plurality of sub-image frames to be processed; obtaining a target pixel value of each pixel point in the image frame to be processed based on the image frame to be processed, the motion factors of each sub-image frame to be processed and the pixel value of each pixel point in the image frame to be processed; and based on the target pixel value of each pixel point in the image frame to be processed, downwards adjusting the pixel value of each pixel point in the image frame to be processed.
Therefore, by determining separate motion factors for the global image frame and for the local sub-image frames and combining the two, the method eliminates spatial transition and abrupt-change artifacts in the image, can accurately control the brightness of the display image, reduces high brightness, saves cost, controls local brightness more precisely, and effectively improves the burn-in prevention effect.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is an application scene diagram of a method for preventing screen burn-in provided in an embodiment of the present application;
fig. 2 is a block diagram of a hardware structure of a display device according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a method for preventing screen from being burned in according to an embodiment of the present application;
fig. 4 is a schematic diagram of a plurality of to-be-processed sub-image frames according to an embodiment of the present application;
fig. 5 is a schematic diagram of image frame data to be processed according to an embodiment of the present application;
fig. 6 is a flowchart for determining a motion degree of a sub-image frame to be processed according to an embodiment of the present application;
fig. 7 is a schematic diagram illustrating a relationship between a pixel value difference and a motion degree according to an embodiment of the present disclosure;
fig. 8 is a schematic flowchart illustrating a process of determining a second pixel value of a sub-image frame to be processed according to an embodiment of the present application;
fig. 9 is a flowchart for determining a motion degree of an image frame to be processed according to an embodiment of the present application;
fig. 10 is a flowchart of determining a motion factor according to an embodiment of the present application;
fig. 11 is a schematic diagram illustrating a relationship between a motion degree of an image frame to be processed and a motion factor of the image frame to be processed according to an embodiment of the present application;
fig. 12 is a schematic diagram illustrating a relationship between a motion degree of a sub-image frame to be processed and a motion factor of the sub-image frame to be processed according to an embodiment of the present application;
fig. 13 is a schematic flowchart illustrating a process of determining a motion factor of a sub-image frame to be processed according to an embodiment of the present application;
fig. 14 is a schematic flowchart of determining a target pixel value of each pixel point in an image frame to be processed according to an embodiment of the present disclosure;
fig. 15 is a schematic flowchart of a process of determining a target motion factor of each pixel point in an image frame to be processed according to an embodiment of the present application;
fig. 16 is a schematic diagram illustrating an effect of performing interpolation amplification processing on a target motion factor of a sub-image frame to be processed according to an embodiment of the present application;
fig. 17 is a schematic view of a window during low-pass filtering according to an embodiment of the present disclosure;
fig. 18 is a schematic view of a display device according to an embodiment of the present application;
fig. 19 is a schematic view of a screen burn-in prevention device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. The embodiments described herein are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Also, in the description of the embodiments of the present application, unless otherwise specified, "/" means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
In the following, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of such features.
Due to the limitations of current OLED materials, prolonged heating or local high brightness easily causes severe changes in the material's characteristics, so that its molecules can no longer deflect correctly and the screen degrades. If the burn-in is brief, the screen can recover; if it is too severe, the damage is irreversible and the screen is scrapped. Burn-in is fundamentally a problem of the OLED material: it cannot withstand high temperature and high brightness for long periods, or its molecules deform and can no longer deflect normally. The most fundamental solution to burn-in is to find more suitable materials that resist high temperature and heat, or to improve the manufacturing process to achieve such resistance. Since the limitations of materials and process cannot yet be overcome, current burn-in prevention methods mainly consist of two approaches: reducing temperature and reducing brightness.
In the related art, processing of locally bright regions is often adopted to avoid burn-in caused by the screen heating when part of the display stays bright for a long time. However, such local processing easily introduces faults at the boundary between still areas and moving areas, which are processed differently, so the brightness of the display cannot be controlled accurately and the burn-in prevention effect is not achieved.
In view of this, the present application provides a method, an apparatus, and a device for preventing screen burn-in, so as to solve the problem that the related art cannot accurately control the brightness of the display and cannot achieve the effect of preventing screen burn-in.
The inventive concept of the present application can be summarized as follows: in the embodiments of the present application, a current image frame to be processed is acquired and divided into a plurality of sub-image frames to be processed; the motion degrees of the image frame to be processed and of the plurality of sub-image frames to be processed are determined based on a plurality of reference sub-image frames contained in at least one reference image frame within a preset time length before the image frame to be processed; the respective motion factors of the image frame to be processed and of the plurality of sub-image frames to be processed are determined from their respective motion degrees; a target pixel value of each pixel point in the image frame to be processed is obtained based on these motion factors and the pixel value of each pixel point in the image frame to be processed; and the pixel value of each pixel point in the image frame to be processed is adjusted downward based on its target pixel value, which effectively improves the burn-in prevention effect.
After the main inventive concepts of the embodiments of the present application are introduced, an application scenario diagram of a screen burn-in prevention method provided in the embodiments of the present application is described below with reference to the accompanying drawings. As shown in fig. 1, the control apparatus 100 and the display device 200 (also referred to as a terminal device) may communicate with each other in a wired or wireless manner.
Among them, the control apparatus 100 is configured to control the display device 200; it receives operation instructions input by the user and converts them into instructions that the display device 200 can recognize and respond to, serving as an intermediary between the user and the display device 200. For example, the user operates a key on the control apparatus 100, and the display device 200 responds to the corresponding operation.
The control apparatus 100 may be a remote controller 100A, which communicates with the display apparatus 200 through infrared protocol communication, Bluetooth protocol communication, or other short-range communication methods, wirelessly or by wire. The user may input user commands through keys on the remote controller, voice input, a control panel, etc. to control the display apparatus 200. For example, the user can input corresponding control commands through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input key, menu key, power on/off key, etc. on the remote controller to control the display device 200.
The control apparatus 100 may also be a smart device, such as a mobile terminal 100B, a tablet computer, or a notebook computer. For example, the display device 200 is controlled using an application program running on the smart device. Through configuration, the application program can provide the user with various images on an intuitive User Interface (UI) on a screen associated with the smart device.
For example, the mobile terminal 100B may install a software application with the display device 200 to implement connection communication through a network communication protocol for the purpose of one-to-one control operation and data communication. Such as: the mobile terminal 100B may be caused to establish a control instruction protocol with the display device 200, and the functions of the physical keys as arranged by the remote control 100A may be implemented by operating various function keys or virtual controls of the user interface provided on the mobile terminal 100B. The audio and video contents displayed on the mobile terminal 100B may also be transmitted to the display device 200, so as to implement a synchronous display function.
The display apparatus 200 may provide a network television function combining a broadcast receiving function with a computer support function. The display device may be implemented as a digital television, a web television, an Internet Protocol Television (IPTV), or the like.
The display device 200 may be a liquid crystal display, an organic light-emitting display, or a projection device. The specific display device type, size, resolution, etc. are not limited.
The display apparatus 200 also performs data communication with the server 300 through various communication means. The display apparatus 200 may be communicatively connected through a Local Area Network (LAN), a Wireless Local Area Network (WLAN), or other networks. The server 300 may provide various contents and interactions to the display apparatus 200. By way of example, the display device 200 may send and receive information, such as receiving Electronic Program Guide (EPG) data, receiving software program updates, or accessing a remotely stored digital media library. The server 300 may be one group or multiple groups of servers, and of one or more types. Other web service contents such as video on demand and advertisement services are provided through the server 300.
A hardware configuration block diagram of the display device 200 is exemplarily shown in fig. 2. As shown in fig. 2, a tuner demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a memory 260, a user interface 265, a video processor 270, a display 275, a rotating assembly 276, an audio processor 280, an audio output interface 285, and a power supply 290 may be included in the display apparatus 200.
The tuning demodulator 210 receives the broadcast television signal in a wired or wireless manner, may perform modulation and demodulation processing such as amplification, mixing, resonance, and the like, and is configured to demodulate, from a plurality of wireless or wired broadcast television signals, an audio/video signal carried in a frequency of a television channel selected by a user, and additional information (e.g., EPG data).
The tuner demodulator 210 responds to the television channel frequency selected by the user and the television signal carried by that frequency, under the control of the controller 250.
The tuner demodulator 210 can receive a television signal in various ways according to the broadcasting system of the television signal, such as: terrestrial broadcasting, cable broadcasting, satellite broadcasting, internet broadcasting, or the like; and according to different modulation types, a digital modulation mode or an analog modulation mode can be adopted; and can demodulate the analog signal and the digital signal according to the different kinds of the received television signals.
In other exemplary embodiments, the tuning demodulator 210 may also be in an external device, such as an external set-top box. In this way, the set-top box outputs a television signal after modulation and demodulation, and inputs the television signal into the display apparatus 200 through the external device interface 240.
The communicator 220 is a component for communicating with an external device or an external server according to various communication protocol types. For example, the display apparatus 200 may transmit content data to an external apparatus connected via the communicator 220, or browse and download content data from an external apparatus connected via the communicator 220. The communicator 220 may include network communication protocol modules or near-field communication protocol modules, such as a WiFi module 221, a Bluetooth communication protocol module 222, and a wired Ethernet communication protocol module 223, so that the communicator 220 can, under the control of the controller 250, receive control signals from the control apparatus 100 in the form of WiFi signals, Bluetooth signals, radio-frequency signals, and the like.
The detector 230 is a component of the display apparatus 200 for collecting signals of an external environment or interacting with the outside. The detector 230 may include a sound collector 231, such as a microphone, which may be used to receive a user's sound, such as a voice signal of a control instruction of the user to control the display device 200; alternatively, ambient sounds may be collected that identify the type of ambient scene, enabling the display device 200 to adapt to ambient noise.
In some other exemplary embodiments, the detector 230, which may further include an image collector 232, such as a camera, a video camera, etc., may be used to collect external environment scenes to adaptively change the display parameters of the display device 200.
In some other exemplary embodiments, the detector 230 may further include a light receiver for collecting the intensity of the ambient light to adapt to the display parameter variation of the display device 200.
In some other exemplary embodiments, the detector 230 may further include a temperature sensor, such as by sensing an ambient temperature, and the display device 200 may adaptively adjust a display color temperature of the image. For example, when the temperature is higher, the display apparatus 200 may be adjusted to display a color temperature of an image that is cooler; when the temperature is lower, the display device 200 may be adjusted to display a warmer color temperature of the image.
The external device interface 240 is a component for providing the controller 250 to control data transmission between the display apparatus 200 and an external apparatus. The external device interface 240 may be connected to an external apparatus such as a set-top box, a game device, a notebook computer, etc. in a wired/wireless manner, and may receive data such as a video signal (e.g., moving image), an audio signal (e.g., music), additional information (e.g., EPG), etc. of the external apparatus.
The external device interface 240 may include: a High Definition Multimedia Interface (HDMI) terminal 241, a Composite Video Blanking Sync (CVBS) terminal 242, an analog or digital Component terminal 243, a Universal Serial Bus (USB) terminal 244, a Component terminal (not shown), a red, green, blue (RGB) terminal (not shown), and the like.
The controller 250 controls the operation of the display device 200 and responds to the operation of the user by running various software control programs (such as an operating system and various application programs) stored on the memory 260.
As shown in fig. 2, the controller 250 includes a Random Access Memory (RAM) 251, a Read Only Memory (ROM) 252, a graphics processor 253, a CPU processor 254, a communication interface 255, and a communication bus 256. The RAM 251, the ROM 252, the graphics processor 253, and the CPU processor 254 are connected to one another via the communication interface 255 and the communication bus 256.
The ROM 252 stores various system startup instructions. When the display apparatus 200 is powered on upon receiving a power-on signal, the CPU processor 254 executes the system boot instructions in the ROM 252 and copies the operating system stored in the memory 260 to the RAM 251 to start running the boot operating system. After the operating system has started, the CPU processor 254 copies the various application programs in the memory 260 to the RAM 251 and then starts running them.
A CPU processor 254 for executing operating system and application program instructions stored in memory 260. And according to the received input instruction, processing of various application programs, data and contents is executed so as to finally display and play various audio-video contents.
In some exemplary embodiments, the CPU processor 254 may comprise a plurality of processors. The plurality of processors may include one main processor and a plurality of or one sub-processor. A main processor for performing some initialization operations of the display apparatus 200 in the display apparatus preload mode and/or operations of displaying a screen in the normal mode. A plurality of or one sub-processor for performing an operation in a state of a standby mode or the like of the display apparatus.
The communication interface 255 may include a first interface to an nth interface. These interfaces may be network interfaces that are connected to external devices via a network.
The controller 250 may control the overall operation of the display apparatus 200. For example: in response to receiving a user input command for selecting a GUI object displayed on the display 275, the controller 250 may perform an operation related to the object selected by the user input command.
Where the object may be any one of the selectable objects, such as a hyperlink or an icon. The operation related to the selected object is, for example, an operation of displaying a link to a hyperlink page, document, image, or the like, or an operation of executing a program corresponding to the object. The user input command for selecting the GUI object may be a command input through various input means (e.g., a mouse, a keyboard, a touch panel, etc.) connected to the display apparatus 200 or a voice command corresponding to a voice spoken by the user.
A memory 260 for storing various types of data, software programs, or applications for driving and controlling the operation of the display device 200. The memory 260 may include volatile and/or non-volatile memory. And the term "memory" includes the memory 260, the RAM251 and ROM252 of the controller 250, or a memory card in the display device 200.
In some embodiments, the memory 260 is specifically used for storing an operating program for driving the controller 250 of the display device 200; storing various application programs built in the display apparatus 200 and downloaded by a user from an external apparatus; data such as visual effect images for configuring various GUIs provided by the display 275, various objects related to the GUIs, and selectors for selecting the GUI objects are stored.
In some embodiments, memory 260 is specifically configured to store drivers for tuner demodulator 210, communicator 220, detector 230, external device interface 240, video processor 270, display 275, audio processor 280, etc., and related data, such as external data (e.g., audio-visual data) received from the external device interface.
In some embodiments, memory 260 specifically stores software and/or programs representing an Operating System (OS), which may include, for example: a kernel, middleware, an Application Programming Interface (API), and/or an application program. Illustratively, the kernel may control or manage system resources, as well as functions implemented by other programs (e.g., the middleware, APIs, or applications); at the same time, the kernel may provide an interface to allow middleware, APIs, or applications to access the controller to enable control or management of system resources.
The display device 200 in the embodiment of the present application may be an electronic device including, but not limited to, a smart phone, a tablet computer, a wearable electronic device (e.g., a smart watch), a notebook computer, a smart television, and the like.
To further illustrate the technical solutions provided by the embodiments of the present application, the following detailed description is made with reference to the accompanying drawings and the detailed description. Although the embodiments of the present application provide method steps as shown in the following embodiments or figures, more or fewer steps may be included in the method based on conventional or non-inventive efforts. In steps where no necessary causal relationship exists logically, the order of execution of the steps is not limited to that provided by the embodiments of the present application.
Referring to fig. 3, a schematic flow chart of a method for preventing screen burn-in provided in the embodiment of the present application is shown. As shown in fig. 3, the method comprises the steps of:
in step 301, a current image frame to be processed is acquired, and the image frame to be processed is divided into a plurality of sub-image frames to be processed.
For example, assuming that the resolution of the image frame to be processed is 3840 × 2160, the image frame to be processed may be uniformly divided into 64 × 64 sub-image frames to be processed, and the resolution of each sub-image frame to be processed is 60 × 34 or 60 × 33, i.e., each sub-image frame to be processed includes 60 × 34 or 60 × 33 pixel points. As shown in fig. 4, a schematic diagram of dividing an image frame to be processed into a plurality of sub-image frames to be processed is shown. Wherein (i, j) represents the coordinates of each sub-image frame to be processed.
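To make the partition concrete, the following sketch (not from the patent; NumPy-based, with all function and variable names our own) divides a 3840 × 2160 frame into the 64 × 64 grid described above:

```python
import numpy as np

def split_into_blocks(frame, blocks_x=64, blocks_y=64):
    """Partition a (H, W, C) frame into a blocks_y x blocks_x grid of sub-image frames.

    Returns a dict mapping block coordinates (i, j), as in fig. 4, to views of
    the frame. For 3840 x 2160 and a 64 x 64 grid, every block is 60 pixels
    wide and 34 or 33 pixels tall, matching the example above.
    """
    h, w = frame.shape[:2]
    ys = np.linspace(0, h, blocks_y + 1, dtype=int)  # row boundaries (34/33 tall)
    xs = np.linspace(0, w, blocks_x + 1, dtype=int)  # column boundaries (60 wide)
    return {(i, j): frame[ys[j]:ys[j + 1], xs[i]:xs[i + 1]]
            for j in range(blocks_y) for i in range(blocks_x)}

frame = np.zeros((2160, 3840, 3), dtype=np.uint16)   # 12-bit samples in 16-bit storage
blocks = split_into_blocks(frame)
assert blocks[(0, 0)].shape[1] == 60 and blocks[(0, 0)].shape[0] in (33, 34)
```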
Data is generally written to a given address in the DDR through WriteDMA and then assembled. Fig. 5 shows the DDR data format of the 64 × 64 sub-image frames to be processed. DDR data is generally packed with idle bits after every eight 12-bit values: each data word is 128 bits, with the 32 bits following the first 96 bits left idle. Therefore, when obtaining the image frame data to be processed from the DDR, each DDR word must be read in full, the eight 12-bit valid values extracted, and the 32 invalid bits discarded.
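A minimal sketch of that unpacking, assuming little-endian 128-bit words with the eight 12-bit samples in the low 96 bits (the byte order and function name are assumptions, not stated in the patent):

```python
def unpack_ddr_words(raw: bytes) -> list[int]:
    """Extract eight 12-bit values from each 128-bit DDR word.

    Assumes little-endian words whose low 96 bits carry eight 12-bit
    samples; the top 32 bits are idle padding and are discarded.
    """
    values = []
    for off in range(0, len(raw), 16):      # 16 bytes = one 128-bit word
        word = int.from_bytes(raw[off:off + 16], "little")
        values.extend((word >> (12 * k)) & 0xFFF for k in range(8))
        # bits 96..127 of each word are invalid and simply dropped
    return values
```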
In step 302, based on a plurality of reference sub-image frames included in at least one reference image frame within a preset time length before the image frame to be processed, the motion degrees of the image frame to be processed and the plurality of sub-image frames to be processed are determined.
In step 303, determining a motion factor of the image frame to be processed based on the motion degree of the image frame to be processed; and determining a motion factor of each sub-image frame to be processed based on the motion degrees of the plurality of sub-image frames to be processed.
In step 304, a target pixel value of each pixel point in the image frame to be processed is obtained based on the image frame to be processed and the motion factor of each sub-image frame to be processed, and the pixel value of each pixel point in the image frame to be processed.
In step 305, the pixel value of each pixel point in the image frame to be processed is adjusted downward based on the target pixel value of each pixel point in the image frame to be processed.
In a possible implementation manner, in determining the degree of motion of the multiple sub-image frames to be processed based on the multiple reference sub-image frames included in each of the at least one reference image frame within the preset time period before the image frame to be processed, in the embodiment of the present application, for each sub-image frame to be processed, the steps shown in fig. 6 may be respectively performed:
in step 601, a first pixel value of a reference sub-image frame corresponding to a sub-image frame to be processed is obtained.
In step 602, a second pixel value of the to-be-processed sub-image frame is determined based on pixel values of all pixel points in the to-be-processed sub-image frame.
In step 603, a pixel value difference between the sub-image frame to be processed and the reference sub-image frame is determined based on the first pixel value and the second pixel value.
In step 604, a motion level of the sub-image frame to be processed is determined based on the pixel value difference and at least one preset difference threshold.
Example 1: in the present application, the motion degree of the sub-image frame to be processed may be determined from the pixel value difference and at least one preset difference threshold directly according to the relationship between pixel value difference and motion degree shown in fig. 7: the larger the pixel value difference, the more the motion degree of the sub-image frame to be processed tends toward moving; the smaller the pixel value difference, the more it tends toward still.
Example 2: assume the motion degree is classified as still, moving, or between still and moving. If the pixel value difference between the sub-image frame to be processed and the reference sub-image frame is greater than a preset difference threshold TH1, it is a moving frame, and the value local_static(i, j) representing the motion degree of the sub-image frame (i, j) to be processed is set to 0. If the pixel value difference is smaller than a preset difference threshold TH0, it is a still frame, and local_static(i, j) is incremented by 1. If the pixel value difference is between TH0 and TH1, the frame is between still and moving, and local_static(i, j) remains unchanged. The motion degree of the sub-image frame (i, j) to be processed is then determined from the value local_static(i, j).
All preset thresholds in the embodiments of the present application are set according to empirical values, and the application is not limited in this respect.
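A sketch of the counter update in example 2 (treating local_static(i, j) as a per-block counter of consecutive still frames is our reading, consistent with the 10*60*60 frame comparisons used later):

```python
def update_local_static(local_static: int, diff: float, th0: float, th1: float) -> int:
    """Update one block's stillness counter from its pixel value difference.

    diff > TH1        : moving frame, reset the counter to 0
    diff < TH0        : still frame, count one more still frame
    TH0 <= diff <= TH1: between still and moving, keep the counter unchanged
    """
    if diff > th1:
        return 0
    if diff < th0:
        return local_static + 1
    return local_static
```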
In a possible implementation manner, in the embodiment of the present application, determining the second pixel value of the to-be-processed sub-image frame based on the pixel values of all pixel points in the to-be-processed sub-image frame, the steps shown in fig. 8 may be performed:
in step 801, based on the pixel values of all the pixel points in the sub-image frame to be processed and the number of the pixel points in the sub-image frame to be processed, an average pixel value of the pixel points in the sub-image frame to be processed is obtained;
in step 802, a second pixel value of the to-be-processed sub-image frame is determined based on the pixel value of the pixel point with the largest pixel value in the to-be-processed sub-image frame, the pixel value of the pixel point with the smallest pixel value, and the average pixel value of the pixel points in the to-be-processed sub-image frame.
In specific implementation, the second pixel value STA_block of the sub-image frame to be processed is determined according to formulas (1) to (3):
Y(m,n) = MAX{in_R(m,n), in_G(m,n), in_B(m,n)}; (1)
AVG_block = sum(Y(m,n)) / block_num; (2)
STA_block = a*AVG_block + b*MAX_block + (1 - (a + b))*MIN_block; (3)
where formula (1) selects the maximum of the R, G, B values of pixel point (m, n) as its pixel value Y(m,n); AVG_block is the average pixel value of the pixel points in the sub-image frame to be processed; sum(Y(m,n)) is the sum of the pixel values of all pixel points in the sub-image frame to be processed; block_num is the number of pixel points in the sub-image frame to be processed; a and b are fixed parameters set according to empirical values; MAX_block is the pixel value of the pixel point with the largest pixel value in the sub-image frame to be processed; and MIN_block is the pixel value of the pixel point with the smallest pixel value in the sub-image frame to be processed.
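A direct transcription of formulas (1) to (3) follows (the values of a and b are placeholders; the patent only says they are empirical):

```python
import numpy as np

def sta_block(block_rgb: np.ndarray, a: float = 0.5, b: float = 0.25) -> float:
    """Second pixel value STA_block of one sub-image frame, per formulas (1)-(3).

    block_rgb: (H, W, 3) array of the block's 12-bit R, G, B values.
    """
    y = block_rgb.max(axis=2)            # (1): Y(m,n) = MAX{R, G, B}
    avg_block = y.sum() / y.size         # (2): AVG_block = sum(Y(m,n)) / block_num
    max_block = float(y.max())           # pixel value of the brightest pixel
    min_block = float(y.min())           # pixel value of the darkest pixel
    # (3): weighted combination of average, maximum and minimum
    return a * avg_block + b * max_block + (1 - (a + b)) * min_block
```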
In a possible implementation manner, in the embodiment of the present application, for determining the motion degree of the image frame to be processed, the steps shown in fig. 9 may be performed:
in step 901, determining the number of sub image frames to be processed with changed pixel values based on the pixel value difference between the plurality of sub image frames to be processed and the plurality of reference sub image frames;
in step 902, determining the proportion of the sub-image frames to be processed with changed pixel values based on the number of the sub-image frames to be processed with changed pixel values and the number of the plurality of sub-image frames to be processed;
in step 903, the motion degree of the image frame to be processed is determined based on the ratio of the sub-image frames to be processed whose pixel values are changed and the motion degree of each sub-image frame to be processed.
For example, assume the motion degree is classified only as still, moving, or between still and moving. If the proportion of sub-image frames to be processed with changed pixel values is greater than a preset threshold L1, the frame is a moving frame, and the value global_static representing the motion degree of the image frame to be processed is set to 0. If the proportion is smaller than a preset threshold L0 but the motion-degree values local_static(i, j) of the changed sub-image frames are all 0, then global_static is likewise set to 0. If the proportion is smaller than L0 and the fraction of changed sub-image frames whose local_static(i, j) is 0 is smaller than a preset ratio threshold P0, global_static is incremented by 1. If the proportion of changed sub-image frames is between L0 and L1, the frame is between still and moving, and global_static remains unchanged. The motion degree of the image frame to be processed is then determined from the value global_static.
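A sketch of this frame-level decision (thresholds L0, L1 and P0 follow the text; representing the per-block results as Python lists is our own choice):

```python
def update_global_static(global_static: int, changed: list, local_static: list,
                         l0: float, l1: float, p0: float) -> int:
    """Update the frame-level stillness counter global_static.

    changed[k]     : True if sub-image frame k's pixel value changed vs. its reference
    local_static[k]: stillness counter of sub-image frame k
    """
    ratio = sum(changed) / len(changed)
    if ratio > l1:
        return 0                                   # moving frame
    if ratio < l0:
        counters = [local_static[k] for k, c in enumerate(changed) if c]
        if counters and all(v == 0 for v in counters):
            return 0                               # changed blocks are all moving
        zero_ratio = (sum(1 for v in counters if v == 0) / len(counters)
                      if counters else 0.0)
        if zero_ratio < p0:
            return global_static + 1               # counts as one more still frame
    return global_static                           # between still and moving: unchanged
```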
In one possible implementation, a motion factor of the image frame to be processed is determined based on the motion degree of the image frame to be processed; and determining a motion factor of each of the sub image frames to be processed based on the motion degrees of the plurality of sub image frames to be processed, the steps as shown in fig. 10 may be performed:
in step 1001, a motion factor corresponding to the image frame to be processed is determined based on a pre-configured correspondence between the motion degree of the image frame to be processed and the motion factor of the image frame to be processed.
For example, the motion factor global _ gain corresponding to the image frame to be processed may be determined according to the correspondence between the degree of motion of the image frame to be processed and the motion factor of the image frame to be processed, which is configured in advance as shown in fig. 11, and using formula (4):
global_gain = 1024 - f(global_static - 10*60*60, max_64*64); (4)
where f(global_static - 10*60*60) represents the amount by which the motion factor global_gain must decrease once the motion degree global_static exceeds 10*60*60 (ten minutes of frames at a 60 Hz refresh rate). For example, when global_gain is 924 and the motion factor at 10*60*60 is 1024, f(global_static - 10*60*60) = 100. Formula (4) means: if the value global_static representing the motion degree of the image frame to be processed is less than 10*60*60, the motion factor of the image frame to be processed is global_gain = 1024, per the content shown in fig. 11; if global_static is greater than 10*60*60 and less than max_64*64, the motion factor is global_gain = 1024 - f(global_static - 10*60*60) = f(global_static); if global_static is greater than max_64*64, the motion factor is global_gain = f(max_64*64) = global_gain_min.
In step 1002, a motion factor corresponding to each sub-image frame to be processed is determined based on a pre-configured correspondence relationship between the degree of motion of the sub-image frame to be processed and the motion factor of the sub-image frame to be processed, and is recorded as a first motion factor.
For example, the motion factor local _ gain (i, j) corresponding to the sub-image frame (i, j) to be processed may be determined according to the correspondence between the degree of motion of the pre-configured sub-image frame to be processed and the motion factor of the sub-image frame to be processed as shown in fig. 12 and using formula (5):
local_gain(i,j) = 4095 - f(local_static(i,j) - 10*60*60, max_64*64); (5)
where f(local_static(i, j) - 10*60*60) represents the amount by which the motion factor local_gain(i, j) must decrease once the motion degree local_static(i, j) exceeds 10*60*60. For example, if local_gain(i, j) is 3095 and the motion factor at 10*60*60 is 4095, then f(local_static(i, j) - 10*60*60) = 1000. Formula (5) means: if the value local_static(i, j) representing the motion degree of the sub-image frame (i, j) to be processed is less than 10*60*60, the corresponding motion factor is local_gain(i, j) = 4095, per the content shown in fig. 12; if local_static(i, j) is greater than 10*60*60 and less than max_64*64, the motion factor is local_gain(i, j) = 4095 - f(local_static(i, j) - 10*60*60) = f(local_static(i, j)); if local_static(i, j) is greater than max_64*64, the motion factor is local_gain(i, j) = f(max_64*64) = local_gain(i, j)_min.
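Formulas (4) and (5) share the shape of figs. 11 and 12: the factor holds its maximum until the stillness counter passes 10*60*60, then falls to a minimum at max_64*64. A sketch, with a linear f assumed between the two endpoints (the patent fixes only the endpoints):

```python
def motion_factor(static_count: int, max_factor: int, min_factor: int,
                  knee: int, saturation: int) -> int:
    """Piecewise map from a stillness counter to a motion factor (formulas (4)/(5)).

    max_factor is 1024 for the whole frame (global_gain) or 4095 for a
    sub-image frame (local_gain); knee corresponds to 10*60*60 and
    saturation to max_64*64. Linear decay between them is an assumption.
    """
    if static_count <= knee:
        return max_factor
    if static_count >= saturation:
        return min_factor                 # global_gain_min / local_gain_min
    drop = (max_factor - min_factor) * (static_count - knee) // (saturation - knee)
    return max_factor - drop

# Hypothetical numbers: minimum factor 512, saturation after 30 minutes at 60 Hz.
global_gain = motion_factor(12 * 60 * 60, 1024, 512, 10 * 60 * 60, 30 * 60 * 60)
```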
In a possible implementation manner, when determining the motion factor of each sub-image frame to be processed, the embodiment of the present application may further perform, for each sub-image frame to be processed, the steps as shown in fig. 13:
in step 1301, acquiring a reference motion factor of a reference sub-image frame corresponding to a sub-image frame to be processed in each reference sub-image frame of a previous image frame of the image frame to be processed;
in step 1302, a temporal filtering process is performed on the sub-image frame to be processed based on the first motion factor and the reference motion factor corresponding to the sub-image frame to be processed, and a second motion factor corresponding to the sub-image frame to be processed is determined.
In specific implementation, the second motion factor 2local_gain (i, j) corresponding to the sub-image frame (i, j) to be processed may be determined according to formula (6):
2local_gain(i,j)=a*local_gain(i,j)+(1-a)*pre(i,j);(6)
wherein local_gain(i,j) is the first motion factor corresponding to the sub-image frame (i,j) to be processed; pre(i,j) is the reference motion factor of the corresponding reference sub-image frame; and a is a fixed smoothing parameter.
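As a small sketch, formula (6) is a per-sub-frame exponential smoothing over time; representing a in fixed point as a_num/1024 is an assumption made here so the arithmetic stays integer-only.

def temporal_filter(local_gain, pre_gain, a_num, a_den=1024):
    # Formula (6): blend this frame's first motion factor with the
    # reference motion factor of the co-located sub-frame of the
    # previous frame; a_num/a_den plays the role of the fixed a.
    return (a_num * local_gain + (a_den - a_num) * pre_gain) // a_den

print(temporal_filter(4095, 3000, a_num=256))   # a = 0.25 -> 3273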
In a possible implementation manner, in the embodiment of the present application, based on the motion factors of the image frame to be processed and each of the sub-image frames to be processed, and the pixel value of each pixel point in the image frame to be processed, determining the target pixel value of each pixel point in the image frame to be processed may be implemented as the steps shown in fig. 14:
in step 1401, the motion factor of the image frame to be processed is multiplied by the second motion factor of each sub-image frame to be processed, and then normalization processing is performed to obtain the target motion factor of each sub-image frame to be processed.
In a specific implementation, the target motion factor 3local_gain(i,j) of the sub-image frame (i,j) to be processed may be determined according to formula (7):
3local_gain(i,j) = 2local_gain(i,j) * global_gain / (1024 * 4); (7)
wherein 2local_gain(i,j) is the second motion factor corresponding to the sub-image frame (i,j) to be processed, and global_gain is the motion factor of the image frame to be processed.
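Reading the divisor in formula (7) as 1024*4 = 4096 (our interpretation of the operator precedence), the combination step brings the product of the 12-bit local factor and the 10-bit global factor back to the 10-bit scale that formula (19) later divides out:

def target_gain(second_local_gain, global_gain):
    # (up to 4095) * (up to 1024) / 4096 -> roughly 0..1024,
    # where 1024 later acts as unity gain in formula (19)
    return second_local_gain * global_gain // (1024 * 4)

print(target_gain(4095, 1024))   # -> 1023, i.e. near-unity gain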
In step 1402, the target motion factor of each sub-image frame to be processed is interpolated and amplified to obtain a target motion factor of each pixel point in the image frame to be processed.
In a possible implementation manner, in the embodiment of the present application, interpolation and amplification are performed on the target motion factor of each to-be-processed sub-image frame to obtain the target motion factor of each pixel point in the to-be-processed image frame, and the steps shown in fig. 15 may be performed:
in step 1501, determining a step length in a first direction and a step length in a second direction required for amplifying each to-be-processed sub-image frame to the to-be-processed image frame based on the size of the to-be-processed image frame and the number of the to-be-processed sub-image frames; wherein the first direction and the second direction are perpendicular.
In step 1502, the operations of steps 1503-1505 are performed separately for each pixel point in the image frame to be processed.
In step 1503, based on the position information of the pixel point, the step length in the first direction and the step length in the second direction, the position information of the sampling point corresponding to the pixel point and the target motion factor corresponding to the sampling point are determined; the sampling points are a plurality of pixel points in the neighborhood of the pixel points.
In step 1504, based on the position information of the sampling points corresponding to the pixel points, the weight coefficients corresponding to the pixel points and the sampling points, respectively, are determined.
In step 1505, the target motion factor of the pixel point is determined based on the weight coefficient corresponding to the pixel point and the target motion factor corresponding to the sampling point.
Illustratively, as shown in fig. 16, the right image is a sub-image frame to be processed, and the left image is a schematic effect diagram after interpolation and enlargement processing. Assuming that the image frame to be processed of 3840 × 2160 is divided into 64 × 64 sub-image frames to be processed, the target motion factors of the 64 × 64 sub-image frames to be processed need to be interpolated and amplified, so as to obtain the target motion factor of each pixel point in the image frame to be processed of 3840 × 2160. At this time, the step size in the first direction is determined according to formula (8), and the step size in the second direction is determined according to formula (9):
step_v=64*(2^20)/2160;(8)
step_h=64*(2^20)/3840;(9)
then, for each pixel point (m, n) in the image frame to be processed, based on the position information (m, n) of the pixel point, the step size step_v in the first direction and the step size step_h in the second direction, the surrounding grid coordinates are obtained as follows:
Left = (n * step_h) / (2^20); Right = Left + 1; Top = (m * step_v) / (2^20); Bottom = Top + 1.
The position information of the four sampling points in the four-neighborhood of the pixel point (m, n) can therefore be determined as (Top, Left), (Top, Right), (Bottom, Left) and (Bottom, Right). The sub-image frame to be processed corresponding to each sampling point is then determined from this position information, and the target motion factor of that sub-image frame is taken as the target motion factor of the sampling point, so that the target motion factors of the four sampling points are data(Top, Left), data(Top, Right), data(Bottom, Left) and data(Bottom, Right), respectively;
then, based on the position information (Top, Left), (Top, Right), (Bottom, Left) and (Bottom, Right) of the sampling points corresponding to the pixel point, the weight coefficients between the pixel point (m, n) and each sampling point are determined according to formulas (10) to (17):
left_dis = (n * step_h - Left * (2^20)) / (2^10); (10)
right_dis = 1024 - left_dis; (11)
top_dis = (m * step_v - Top * (2^20)) / (2^10); (12)
bottom_dis = 1024 - top_dis; (13)
weight_(Top,Left) = ((1024 - left_dis) * (1024 - top_dis)) / (2^10); (14)
weight_(Top,Right) = ((1024 - right_dis) * (1024 - top_dis)) / (2^10); (15)
weight_(Bottom,Left) = ((1024 - left_dis) * (1024 - bottom_dis)) / (2^10); (16)
weight_(Bottom,Right) = ((1024 - right_dis) * (1024 - bottom_dis)) / (2^10); (17)
wherein left_dis, right_dis, top_dis and bottom_dis in formulas (10) to (17) represent the horizontal distances from the pixel point (m, n) to the left and right edges, and the vertical distances to the top and bottom edges, of the region enclosed by the four sampling points; weight_(Top,Left), weight_(Top,Right), weight_(Bottom,Left) and weight_(Bottom,Right) represent the weight coefficients between the pixel point (m, n) and each of the four sampling points.
Finally, based on the weight coefficients weight_(Top,Left), weight_(Top,Right), weight_(Bottom,Left) and weight_(Bottom,Right) corresponding to the pixel point and the sampling points, and the target motion factors data(Top,Left), data(Top,Right), data(Bottom,Left) and data(Bottom,Right) of the four sampling points, the motion factor data_out(m, n) of the pixel point (m, n) is determined according to formula (18):
data_out(m,n) = (data(Top,Left) * weight_(Top,Left) + data(Top,Right) * weight_(Top,Right) + data(Bottom,Left) * weight_(Bottom,Left) + data(Bottom,Right) * weight_(Bottom,Right)) / (2^10); (18)
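For concreteness, the interpolation walk-through above (formulas (8) to (18)) can be condensed into the following Python sketch. The frame and grid sizes follow the example; clamping Right and Bottom at the last grid cell is our assumption, since the text does not state how border pixels are handled, and a production implementation would vectorize these loops.

def upsample_gain(grid, width=3840, height=2160, cells=64):
    # Bilinearly enlarge a cells x cells gain grid to width x height
    # using the 2^20 fixed-point steps of formulas (8)-(9) and the
    # 2^10 fixed-point weights of formulas (10)-(17).
    step_v = cells * (2 ** 20) // height            # formula (8)
    step_h = cells * (2 ** 20) // width             # formula (9)
    out = [[0] * width for _ in range(height)]
    for m in range(height):
        top = (m * step_v) // (2 ** 20)
        bottom = min(top + 1, cells - 1)            # edge clamp (assumed)
        top_dis = (m * step_v - top * (2 ** 20)) >> 10        # formula (12)
        bottom_dis = 1024 - top_dis                           # formula (13)
        for n in range(width):
            left = (n * step_h) // (2 ** 20)
            right = min(left + 1, cells - 1)        # edge clamp (assumed)
            left_dis = (n * step_h - left * (2 ** 20)) >> 10  # formula (10)
            right_dis = 1024 - left_dis                       # formula (11)
            w_tl = (1024 - left_dis) * (1024 - top_dis) >> 10     # (14)
            w_tr = (1024 - right_dis) * (1024 - top_dis) >> 10    # (15)
            w_bl = (1024 - left_dis) * (1024 - bottom_dis) >> 10  # (16)
            w_br = (1024 - right_dis) * (1024 - bottom_dis) >> 10 # (17)
            out[m][n] = (grid[top][left] * w_tl + grid[top][right] * w_tr
                         + grid[bottom][left] * w_bl
                         + grid[bottom][right] * w_br) >> 10      # (18)
    return out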
in a possible implementation manner, determining the target motion factor of each pixel point in the image frame to be processed according to the embodiment of the present application may further include: performing low-pass filtering processing on the target motion factor of each pixel point in the image frame to be processed.
In a specific implementation, the low-pass filtering applies a two-dimensional filter with a 7×7 window to the data_out(m, n) of each pixel point (m, n), as shown in fig. 17, to obtain the final target motion factor local_gain(m, n) of the pixel point (m, n).
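As a sketch of this smoothing step, the following applies a plain 7×7 box average; the uniform kernel weights and the clamping at image borders are assumptions, since the actual window coefficients behind fig. 17 are not given here.

def box_filter_7x7(gain, height, width):
    # 2-D low-pass over the per-pixel gain map with a 7x7 window;
    # borders are handled by clamping coordinates into the image.
    out = [[0] * width for _ in range(height)]
    for m in range(height):
        for n in range(width):
            acc = 0
            for dm in range(-3, 4):
                for dn in range(-3, 4):
                    mm = min(max(m + dm, 0), height - 1)
                    nn = min(max(n + dn, 0), width - 1)
                    acc += gain[mm][nn]
            out[m][n] = acc // 49                   # 49 taps in the window
    return out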
In step 1403, a target pixel value of each pixel point in the image frame to be processed is determined based on the target motion factor of each pixel point in the image frame to be processed and the pixel value of each pixel point in the image frame to be processed.
In specific implementation, the target pixel value out _ RGB (m, n) of each pixel point in the image frame to be processed may be determined according to formula (19):
out_RGB(m,n) = (in_RGB(m,n) * local_gain(m,n)) / (2^10); (19)
wherein out_RGB(m, n) is the target pixel value of the pixel point (m, n); in_RGB(m, n) is the original pixel value of the pixel point (m, n) in the image frame to be processed; and local_gain(m, n) is the target motion factor of the pixel point (m, n).
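Reading in_RGB(m, n) as a per-channel application of the gain to R, G and B (our reading of the notation), the final dimming step of formula (19) reduces to:

def dim_pixel(rgb, gain):
    # Formula (19): scale each channel by the per-pixel target motion
    # factor; gain == 1024 leaves the pixel unchanged, smaller values dim it.
    return tuple((c * gain) >> 10 for c in rgb)

print(dim_pixel((255, 128, 64), 512))   # -> (127, 64, 32), half brightness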
Finally, the pixel value of each pixel point in the image frame to be processed is adjusted to its target pixel value, so that screen burn-in is prevented by reducing the brightness of the display interface.
Based on the foregoing description, the embodiment of the application acquires a current image frame to be processed and divides it into a plurality of sub-image frames to be processed; determines the motion degrees of the image frame to be processed and the plurality of sub-image frames to be processed based on a plurality of reference sub-image frames contained in at least one reference image frame within a preset time length before the image frame to be processed; determines a motion factor of the image frame to be processed based on its motion degree, and a motion factor of each sub-image frame to be processed based on the motion degrees of the plurality of sub-image frames to be processed; obtains a target pixel value of each pixel point in the image frame to be processed based on these motion factors and the pixel value of each pixel point; and adjusts the pixel value of each pixel point in the image frame to be processed downward to its target pixel value.
Therefore, by determining the motion factors of the global image frame and of each local sub-image frame and combining them, abrupt spatial transitions in the picture are avoided, the brightness of the display picture can be controlled accurately, highlights are reduced, cost is saved, local brightness is controlled more precisely, and the screen burn-in prevention effect is effectively improved.
Based on the same inventive concept, an embodiment of the present application further provides a display apparatus, as shown in fig. 18, including: a display interface 1801, a processor 1802, a memory 1803, and a bus interface 1804. Wherein:
a display interface 1801, configured to display the image frame to be processed and the multiple sub-image frames to be processed.
The processor 1802 is responsible for managing the bus architecture and general processing, and the memory 1803 may store data used by the processor 1802 in performing operations.
In fig. 18, the bus interface 1804 may include any number of interconnected buses and bridges linking together one or more processors, represented by the processor 1802, and various memory circuits, represented by the memory 1803. The bus architecture may also link together various other circuits, such as peripherals, voltage regulators and power management circuits, which are well known in the art and therefore not described further herein. The bus interface 1804 provides the interface. Alternatively, the processor 1802 may be a CPU (Central Processing Unit), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or a CPLD (Complex Programmable Logic Device), and may also have a multi-core architecture.
The processor 1802 is configured to execute any method for preventing screen burn provided by the embodiments of the present application by calling the computer program stored in the memory 1803 according to the obtained executable instructions. The processor 1802 and the memory 1803 may also be physically separated.
It should be noted that the apparatus provided in this embodiment of the present application can implement all the steps of the method embodiment and achieve the same technical effects; detailed descriptions of the parts and beneficial effects it shares with the method embodiment are omitted here.
Based on the same inventive concept, an embodiment of the present application further provides a screen burn-in prevention apparatus, as shown in fig. 19, including:
an obtaining module 1901, configured to obtain a current image frame to be processed, and divide the image frame to be processed into a plurality of sub-image frames to be processed;
a motion degree determining module 1902, configured to determine, based on a plurality of reference sub-image frames included in at least one reference image frame within a preset time period before the image frame to be processed, respective motion degrees of the image frame to be processed and the plurality of sub-image frames to be processed;
a motion factor determining module 1903, configured to determine a motion factor of the image frame to be processed based on the degree of motion of the image frame to be processed; determining a motion factor of each sub-image frame to be processed based on the motion degrees of the plurality of sub-image frames to be processed;
a pixel value adjusting module 1904, configured to obtain a target pixel value of each pixel point in the image frame to be processed based on the image frame to be processed and the motion factor of each sub-image frame to be processed, and a pixel value of each pixel point in the image frame to be processed; and based on the target pixel value of each pixel point in the image frame to be processed, downwards adjusting the pixel value of each pixel point in the image frame to be processed.
In a possible implementation, the motion level determining module 1902 is specifically configured to:
for each sub-image frame to be processed, respectively executing the following operations:
acquiring a first pixel value of a reference sub-image frame corresponding to the sub-image frame to be processed;
determining a second pixel value of the sub-image frame to be processed based on the pixel values of all pixel points in the sub-image frame to be processed;
determining a pixel value difference between the to-be-processed sub-image frame and the reference sub-image frame based on the first pixel value and the second pixel value;
and determining the motion degree of the sub-image frame to be processed based on the pixel value difference and at least one preset difference threshold value.
In a possible implementation, the motion degree determining module 1902 is specifically configured to:
obtaining an average pixel value of the pixels in the sub-image frame to be processed based on the pixel values of all the pixels in the sub-image frame to be processed and the number of the pixels in the sub-image frame to be processed;
and determining a second pixel value of the sub-image frame to be processed based on the pixel value of the pixel point with the largest pixel value in the sub-image frame to be processed, the pixel value of the pixel point with the smallest pixel value and the average pixel value of the pixel points in the sub-image frame to be processed.
In a possible implementation, the motion level determining module 1902 is specifically configured to:
determining the number of sub-image frames to be processed whose pixel values have changed, based on the pixel value differences between the plurality of sub-image frames to be processed and the plurality of reference sub-image frames;
determining the proportion of sub-image frames to be processed whose pixel values have changed, based on that number and the number of the plurality of sub-image frames to be processed;
and determining the motion degree of the image frame to be processed based on the proportion of sub-image frames to be processed whose pixel values have changed and the motion degree of each sub-image frame to be processed.
In a possible implementation, the motion factor determining module 1903 is specifically configured to:
determining a motion factor corresponding to an image frame to be processed based on a corresponding relation between a pre-configured motion degree of the image frame to be processed and a motion factor of the image frame to be processed;
and determining a motion factor corresponding to each sub image frame to be processed based on the corresponding relation between the pre-configured motion degree of the sub image frame to be processed and the motion factor of the sub image frame to be processed, and recording the motion factor as a first motion factor.
In a possible implementation, the motion factor determining module 1903 is further configured to:
for each sub-image frame to be processed, respectively executing the following operations:
acquiring a reference motion factor of a reference sub-image frame corresponding to a sub-image frame to be processed in each reference sub-image frame of a previous image frame of the image frame to be processed;
and performing temporal filtering processing on the sub-image frame to be processed based on the first motion factor corresponding to the sub-image frame to be processed and the reference motion factor, and determining a second motion factor corresponding to the sub-image frame to be processed.
In a possible implementation manner, the pixel value adjusting module 1904 is specifically configured to:
respectively multiplying the motion factor of the image frame to be processed with the second motion factor of each sub-image frame to be processed, and then performing normalization processing to obtain a target motion factor of each sub-image frame to be processed;
carrying out interpolation amplification processing on the target motion factor of each sub-image frame to be processed to obtain the target motion factor of each pixel point in the image frame to be processed;
and determining a target pixel value of each pixel point in the image frame to be processed based on the target motion factor of each pixel point in the image frame to be processed and the pixel value of each pixel point in the image frame to be processed.
In a possible implementation manner, the pixel value adjusting module 1904 is specifically configured to:
determining a step length in a first direction and a step length in a second direction required for amplifying each sub-image frame to be processed to the image frame to be processed based on the size of the image frame to be processed and the number of the sub-image frames to be processed; wherein the first direction and the second direction are perpendicular;
respectively executing the following operations for each pixel point in the image frame to be processed:
determining position information of a sampling point corresponding to the pixel point and a target motion factor corresponding to the sampling point based on the position information of the pixel point, the step length in the first direction and the step length in the second direction; the sampling points are a plurality of pixel points in the neighborhood of the pixel points;
determining weight coefficients respectively corresponding to the pixel points and the sampling points based on the position information of the sampling points corresponding to the pixel points;
and determining the target motion factor of the pixel point based on the weight coefficient respectively corresponding to the pixel point and the sampling point and the target motion factor corresponding to the sampling point.
In a possible implementation, the pixel value adjusting module 1904 is further configured to:
and carrying out low-pass filtering processing on the target motion factor of each pixel point in the image frame to be processed.
In an exemplary embodiment, the present application also provides a computer-readable storage medium comprising instructions, such as a memory comprising instructions, executable by a processor of an electronic device to perform the above-described method of burn-in prevention. Alternatively, the computer readable storage medium may be a non-transitory computer readable storage medium, for example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, comprising a computer program which, when executed by a processor, implements a method of burn-in prevention as provided herein.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method of burn-in prevention, the method comprising:
acquiring a current image frame to be processed, and dividing the image frame to be processed into a plurality of sub image frames to be processed;
determining the motion degrees of the image frame to be processed and the plurality of sub-image frames to be processed based on a plurality of reference sub-image frames contained in at least one reference image frame in a preset time length before the image frame to be processed;
determining a motion factor of the image frame to be processed based on the motion degree of the image frame to be processed; determining a motion factor of each sub image frame to be processed based on the motion degrees of the plurality of sub image frames to be processed;
obtaining a target pixel value of each pixel point in the image frame to be processed based on the image frame to be processed and the motion factors of each sub image frame to be processed and the pixel value of each pixel point in the image frame to be processed;
and based on the target pixel value of each pixel point in the image frame to be processed, downwards adjusting the pixel value of each pixel point in the image frame to be processed.
2. The method of claim 1, wherein the determining the motion degree of the plurality of sub-image frames to be processed based on a plurality of reference sub-image frames contained in at least one reference image frame within a preset time period before the image frame to be processed comprises:
for each sub-image frame to be processed, respectively executing the following operations:
acquiring a first pixel value of a reference sub-image frame corresponding to the sub-image frame to be processed;
determining a second pixel value of the sub-image frame to be processed based on the pixel values of all pixel points in the sub-image frame to be processed;
determining a pixel value difference between the to-be-processed sub-image frame and the reference sub-image frame based on the first pixel value and the second pixel value;
and determining the motion degree of the sub-image frame to be processed based on the pixel value difference and at least one preset difference threshold value.
3. The method of claim 2, wherein the determining a second pixel value of the to-be-processed sub-image frame based on pixel values of all pixel points in the to-be-processed sub-image frame comprises:
obtaining an average pixel value of the pixels in the sub-image frame to be processed based on the pixel values of all the pixels in the sub-image frame to be processed and the number of the pixels in the sub-image frame to be processed;
and determining a second pixel value of the sub-image frame to be processed based on the pixel value of the pixel point with the maximum pixel value in the sub-image frame to be processed, the pixel value of the pixel point with the minimum pixel value and the average pixel value of the pixel points in the sub-image frame to be processed.
4. The method of claim 2, wherein the determining the degree of motion of the image frame to be processed comprises:
determining the number of sub-image frames to be processed whose pixel values have changed, based on the pixel value differences between the plurality of sub-image frames to be processed and the plurality of reference sub-image frames;
determining the proportion of sub-image frames to be processed whose pixel values have changed, based on that number and the number of the plurality of sub-image frames to be processed;
and determining the motion degree of the image frame to be processed based on the proportion of sub-image frames to be processed whose pixel values have changed and the motion degree of each sub-image frame to be processed.
5. The method according to claim 4, wherein the determining the motion factor of the image frame to be processed based on the degree of motion of the image frame to be processed; and determining a motion factor of each sub-image frame to be processed based on the motion degrees of the plurality of sub-image frames to be processed, including:
determining a motion factor corresponding to the image frame to be processed based on a preset corresponding relation between the motion degree of the image frame to be processed and the motion factor of the image frame to be processed;
and determining a motion factor corresponding to each sub image frame to be processed based on the corresponding relation between the pre-configured motion degree of the sub image frame to be processed and the motion factor of the sub image frame to be processed, and recording the motion factor as a first motion factor.
6. The method of claim 5, further comprising:
for each sub-image frame to be processed, respectively executing the following operations:
acquiring reference motion factors of reference sub-image frames corresponding to the sub-image frames to be processed in each reference sub-image frame of a previous image frame of the image frames to be processed;
and performing temporal filtering processing on the sub-image frame to be processed based on the first motion factor corresponding to the sub-image frame to be processed and the reference motion factor, and determining a second motion factor corresponding to the sub-image frame to be processed.
7. The method according to claim 6, wherein the determining a target pixel value of each pixel point in the image frame to be processed based on the motion factors of the image frame to be processed and the sub-image frames to be processed and the pixel value of each pixel point in the image frame to be processed comprises:
respectively multiplying the motion factor of the image frame to be processed with the second motion factor of each sub-image frame to be processed, and then performing normalization processing to obtain a target motion factor of each sub-image frame to be processed;
carrying out interpolation amplification processing on the target motion factor of each sub-image frame to be processed to obtain the target motion factor of each pixel point in the image frame to be processed;
and determining a target pixel value of each pixel point in the image frame to be processed based on the target motion factor of each pixel point in the image frame to be processed and the pixel value of each pixel point in the image frame to be processed.
8. The method according to claim 7, wherein the interpolating and amplifying the target motion factor of each sub-image frame to be processed to obtain the target motion factor of each pixel point in the image frame to be processed comprises:
determining a step length in a first direction and a step length in a second direction required for amplifying each sub-image frame to be processed to the image frame to be processed based on the size of the image frame to be processed and the number of the sub-image frames to be processed; wherein the first direction and the second direction are perpendicular;
respectively executing the following operations for each pixel point in the image frame to be processed:
determining position information of a sampling point corresponding to the pixel point and a target motion factor corresponding to the sampling point based on the position information of the pixel point, the step length in the first direction and the step length in the second direction; the sampling points are a plurality of pixel points in the neighborhood of the pixel points;
determining weight coefficients respectively corresponding to the pixel points and the sampling points based on the position information of the sampling points corresponding to the pixel points;
and determining the target motion factor of the pixel point based on the weight coefficient respectively corresponding to the pixel point and the sampling point and the target motion factor corresponding to the sampling point.
9. A display device, characterized in that the display device comprises:
a display interface, a processor and a memory;
the display interface is used for displaying the image frame to be processed and a plurality of sub-image frames to be processed;
the memory to store the processor-executable instructions;
the processor configured to execute the instructions to implement the burn-in prevention method of any one of claims 1-8.
10. A burn-in prevention apparatus, the apparatus comprising:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a current image frame to be processed and dividing the image frame to be processed into a plurality of sub image frames to be processed;
the motion degree determining module is used for determining the motion degrees of the image frame to be processed and the plurality of sub-image frames to be processed based on a plurality of reference sub-image frames contained in at least one reference image frame in a preset time length before the image frame to be processed;
the motion factor determination module is used for determining the motion factor of the image frame to be processed based on the motion degree of the image frame to be processed; determining a motion factor of each sub-image frame to be processed based on the motion degrees of the plurality of sub-image frames to be processed;
the pixel value adjusting module is used for obtaining a target pixel value of each pixel point in the image frame to be processed based on the image frame to be processed, the motion factors of each sub-image frame to be processed and the pixel value of each pixel point in the image frame to be processed; and based on the target pixel value of each pixel point in the image frame to be processed, downwards adjusting the pixel value of each pixel point in the image frame to be processed.
CN202211658636.6A 2022-12-22 2022-12-22 Screen burn-in prevention method, device and equipment Pending CN115829841A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211658636.6A CN115829841A (en) 2022-12-22 2022-12-22 Screen burn-in prevention method, device and equipment

Publications (1)

Publication Number Publication Date
CN115829841A true CN115829841A (en) 2023-03-21

Family

ID=85517802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211658636.6A Pending CN115829841A (en) 2022-12-22 2022-12-22 Screen burn-in prevention method, device and equipment

Country Status (1)

Country Link
CN (1) CN115829841A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination