CN114217691B - Display driving method and device, electronic equipment and intelligent display system - Google Patents


Info

Publication number
CN114217691B
CN114217691B (Application CN202111521857.4A)
Authority
CN
China
Prior art keywords
pixel
information
display
determining
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111521857.4A
Other languages
Chinese (zh)
Other versions
CN114217691A (en)
Inventor
孙高明
朱文涛
毕育欣
段欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202111521857.4A priority Critical patent/CN114217691B/en
Publication of CN114217691A publication Critical patent/CN114217691A/en
Application granted granted Critical
Publication of CN114217691B publication Critical patent/CN114217691B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/2007Display of intermediate tones
    • G09G3/2074Display of intermediate tones using sub-pixels

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)

Abstract

The embodiment of the invention provides a display driving method and device, electronic equipment and an intelligent display system. The method comprises the following steps: determining a field-of-view area of a user on a display as a gaze area; scanning each pixel row in the gaze area of the display in turn; and, for each pixel group of the display that is not in the gaze area, scanning all pixel rows in the pixel group simultaneously, wherein each pixel group comprises N adjacent pixel rows and N is a positive integer greater than 1. The pixels in the gaze area and the non-gaze area can thus be scanned in different modes, so that the image quality of the gaze area is preserved while the image data of the non-gaze area is compressed; that is, the definition of different areas on the display is adjusted according to the field of view of the user.

Description

Display driving method and device, electronic equipment and intelligent display system
Technical Field
The present invention relates to the field of display technologies, and in particular, to a display driving method, a device, an electronic apparatus, and an intelligent display system.
Background
In some application scenarios, such as light field display, it is desirable that images in the user's field of view be displayed with relatively high definition while images outside the field of view are displayed with relatively low definition, i.e. the definition of different areas on the display needs to be adjusted according to the user's field of view. However, in the prior art, the definition of every area of the display is always consistent, so the definition of different areas on the display cannot be adjusted according to the user's field of view.
Therefore, how to adjust the definition of different areas on the display according to the field of view of the user has become a technical problem to be solved urgently.
Disclosure of Invention
The embodiment of the invention aims to provide a display driving method, a display driving device, electronic equipment and an intelligent display system, so as to realize the adjustment of the definition of different areas on a display according to the field of view of a user. The specific technical scheme is as follows:
in a first aspect of an embodiment of the present invention, there is provided a display driving method, the method including:
determining a field-of-view area of a user on a display as a gaze area;
scanning each pixel row in the gaze area of the display in turn;
and, for each pixel group of the display that is not in the gaze area, scanning all pixel rows in the pixel group simultaneously, wherein each pixel group comprises N adjacent pixel rows and N is a positive integer greater than 1.
In one possible embodiment, the determining the field-of-view area of the user on the display as the gaze area includes:
determining the codes corresponding to the pixel values in a preset information row of an image to be displayed according to a preset correspondence between codes and pixel values, to obtain encoded information, wherein the encoded information includes position sub-information, and the preset information row is added to the image to be displayed by an encoding device according to the encoded information and the correspondence;
and determining the area represented by the position sub-information, to obtain the gaze area.
In a possible embodiment, the position sub-information is obtained by:
determining the position and azimuth angle of the user's eyes relative to the display according to a captured human-eye image of the user;
determining the field-of-view area of the user on the display based on the position and the azimuth angle;
and generating position sub-information representing the field-of-view area.
In a possible embodiment, the encoded information further includes one or both of region adjustment sub-information and compression mode sub-information;
the method further comprises:
determining N according to the compression mode represented by the compression mode sub-information;
the determining the area represented by the position sub-information to obtain the gaze area comprises:
adjusting the area represented by the position sub-information according to the region adjustment sub-information, to obtain the gaze area.
In a possible embodiment, the scanning each pixel row in the gaze area of the display in turn includes:
determining a first scanning order in which each first pixel in the gaze area is scanned when the pixel rows in the gaze area are scanned in turn;
determining a first turn-on order of the first switches controlling the first pixels according to the pixel island to which each first pixel belongs and the first scanning order;
turning on the first switch in each pixel island in turn according to the first turn-on order;
the scanning all pixel rows in the pixel group simultaneously for each pixel group of the display that is not in the gaze area comprises:
determining a second scanning order of the second pixels not in the gaze area when, for each pixel group not in the gaze area, all pixel rows in the pixel group are scanned simultaneously;
determining a second turn-on order of the second switches controlling the second pixels according to the pixel island to which each second pixel belongs and the second scanning order;
and turning on the second switch in each pixel island in turn according to the second turn-on order.
In a second aspect of embodiments of the present invention, there is provided a display driving apparatus, the apparatus comprising:
a gaze area determining module, configured to determine a field-of-view area of a user on a display as a gaze area;
a first scanning module, configured to scan each pixel row in the gaze area of the display in turn;
and a second scanning module, configured to scan, for each pixel group of the display that is not in the gaze area, all pixel rows in the pixel group simultaneously, wherein each pixel group comprises N adjacent pixel rows and N is a positive integer greater than 1.
In one possible embodiment, the gaze area determining module determines the field-of-view area of the user on the display as the gaze area by:
determining the codes corresponding to the pixel values in a preset information row of an image to be displayed according to a preset correspondence between codes and pixel values, to obtain encoded information, wherein the encoded information includes position sub-information, and the preset information row is added to the image to be displayed by an encoding device according to the encoded information and the correspondence;
and determining the area represented by the position sub-information, to obtain the gaze area.
In a possible embodiment, the position sub-information is obtained by:
determining the position and azimuth angle of the user's eyes relative to the display according to a captured human-eye image of the user;
determining the field-of-view area of the user on the display based on the position and the azimuth angle;
and generating position sub-information representing the field-of-view area.
In a possible embodiment, the encoded information further includes one or both of region adjustment sub-information and compression mode sub-information;
the first scanning module is further configured to determine N according to the compression mode represented by the compression mode sub-information;
the gaze area determining module determines the area represented by the position sub-information to obtain the gaze area by:
adjusting the area represented by the position sub-information according to the region adjustment sub-information, to obtain the gaze area.
In one possible embodiment, the first scanning module scans each pixel row in the gaze area of the display in turn by:
determining a first scanning order in which each first pixel in the gaze area is scanned when the pixel rows in the gaze area are scanned in turn;
determining a first turn-on order of the first switches controlling the first pixels according to the pixel island to which each first pixel belongs and the first scanning order;
turning on the first switch in each pixel island in turn according to the first turn-on order;
the second scanning module scans, for each pixel group of the display that is not in the gaze area, all pixel rows in the pixel group simultaneously by:
determining a second scanning order of the second pixels not in the gaze area when, for each pixel group not in the gaze area, all pixel rows in the pixel group are scanned simultaneously;
determining a second turn-on order of the second switches controlling the second pixels according to the pixel island to which each second pixel belongs and the second scanning order;
and turning on the second switch in each pixel island in turn according to the second turn-on order.
In a third aspect of the embodiments of the present invention, there is provided an electronic device including a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the method steps of any of the above first aspect when executing the program stored in the memory.
In a fourth aspect of the embodiment of the present invention, there is provided an intelligent display system, including: a host and a display;
the host comprises image acquisition equipment and a processor, and the display comprises a panel and a Field Programmable Gate Array (FPGA);
the image acquisition device is used for capturing human-eye images of the user;
the processor is used for determining the field-of-view area of the user on the display according to the captured human-eye image;
the FPGA is used for acquiring the field-of-view area determined by the processor as the gaze area; scanning each pixel row in the gaze area of the panel in turn; and, for each pixel group of the panel that is not in the gaze area, scanning all pixel rows in the pixel group simultaneously, wherein each pixel group comprises N adjacent pixel rows and N is a positive integer greater than 1;
the panel is used for displaying the image to be displayed under the drive of the FPGA.
In a fifth aspect of the embodiments of the present invention, there is provided a computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements the method steps of any of the first aspect described above.
The embodiment of the invention has the beneficial effects that:
According to the display driving method and device, the electronic equipment and the intelligent display system provided by the embodiments of the invention, the scanning timing of each pixel row of the display can be adjusted according to the gaze area. Since each pixel row in the gaze area is scanned in turn, different pixel rows receive different image data, i.e. each pixel row in the gaze area displays a different image. For the pixel rows not in the gaze area, the N rows in each pixel group are scanned simultaneously, so the N pixel rows in the same pixel group receive the same image data, i.e. they display the same image. As a result, the non-gaze area displays only 1/N of the image data while the gaze area displays the complete image data; that is, the definition of the image displayed in the non-gaze area is lower and that in the gaze area is higher, thereby realizing the adjustment of the definition of different areas on the display according to the user's field of view.
Of course, it is not necessary for any one product or method of practicing the invention to achieve all of the advantages set forth above at the same time.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a display driving method according to an embodiment of the invention;
FIG. 3a is a schematic diagram of the scanning timing of pixel rows in the gaze area according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of the scanning timing of pixel rows not in the gaze area according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an implementation of S202 according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an implementation of S203 according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a pixel island structure according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a display driving device according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a structure of an intelligent display system according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a structure of an FPGA according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
In order to more clearly describe the display driving method provided by the embodiment of the present invention, its execution body is described first. The display driving method provided by the invention can be applied to any electronic device with display driving capability, and the electronic device may be integrated in a display or independent of it. For convenience of description, the electronic device is illustrated as being integrated in a display.
In one possible embodiment, the electronic device is an FPGA (Field-Programmable Gate Array) integrated inside the display which, as illustrated in fig. 1, is connected to the host via a DP interface and to a panel in the display.
The host is used for sending image data used for representing an image to be displayed to the FPGA, and the FPGA drives the panel to display the image data. In light field display, the definition of different areas on the display needs to be adjusted according to the field of view of the user, but in the related art, the overall definition of the display is often only adjustable.
Based on this, an embodiment of the present invention provides a display driving method, as shown in fig. 2, including:
S201, determining a field-of-view area of a user on a display as a gaze area.
S202, scanning each pixel row in the gaze area of the display in turn.
S203, for each pixel group of the display that is not in the gaze area, scanning all pixel rows in the pixel group simultaneously, wherein each pixel group comprises N adjacent pixel rows and N is a positive integer greater than 1.
In this embodiment, the scanning timing of each pixel row of the display can be adjusted according to the gaze area. Since each pixel row in the gaze area is scanned in turn, different pixel rows receive different image data, i.e. each pixel row in the gaze area displays a different image. For the pixel rows not in the gaze area, the N rows in each pixel group are scanned simultaneously, so the N pixel rows in the same pixel group receive the same image data, i.e. they display the same image. As a result, the non-gaze area of the display shows only 1/N of the image data while the gaze area shows the complete image data; that is, the definition of the image displayed in the non-gaze area is lower and that in the gaze area is higher, thereby realizing the adjustment of the definition of different areas on the display according to the user's field of view.
The foregoing S201 to S203 will be described below, respectively:
In S201, the position of the gaze area may be calculated by the execution body, or may be calculated by another device and then sent to the execution body. Taking the architecture shown in fig. 1 as an example, the host collects human-eye data of the user through a sensor connected to it, calculates the position of the gaze area from the collected data, and sends information representing that position to the FPGA, which determines the gaze area by parsing the information. Alternatively, the host may send the collected human-eye data to the FPGA so that the FPGA determines the gaze area itself. The sensor may be any sensor capable of acquiring the position or orientation of the human eye, such as an image acquisition device. In one possible embodiment, the host is externally connected to an image acquisition device that captures an image of the human eye and sends it to the host; the host determines the position and azimuth angle of the user's eyes relative to the display from the captured image, determines the user's field-of-view area on the display from that position and azimuth angle, and generates position sub-information representing the field-of-view area.
In S202, a pixel row is considered to be in the gaze area if at least one of its pixels is in the gaze area, and not in the gaze area if none of its pixels is. Scanning in turn means that, at any moment, at most one of the same-order pixels across the pixel rows is in the on state. For example, assume that four pixel rows are in the gaze area, denoted as the first, second, third and fourth pixel rows. If the first pixel of the first pixel row is on from t=0 to t=1, then the first pixels of the second, third and fourth pixel rows are off during that interval. The scanning timing of these four pixel rows is shown in fig. 3a.
In S203, simultaneous scanning means that the same-order pixels of the pixel rows in a group are in the on state during the same period. For example, assume that four pixel rows are not in the gaze area, denoted as the fifth, sixth, seventh and eighth pixel rows. If the first pixel of the fifth pixel row is on from t=0 to t=1, then the first pixels of the sixth, seventh and eighth pixel rows are also on during that interval. The scanning timing of these four pixel rows is shown in fig. 3b.
The value of N may differ according to the application scenario, e.g. 3, 4, 5, 8 or 16, which is not limited in this embodiment. It can be understood that, since the pixel rows in the same pixel group are scanned simultaneously, the same-order pixels of those rows are turned on simultaneously and therefore display the same image data. The image data displayed by the pixel rows of the same pixel group is thus identical, so the image displayed in the non-gaze area is compressed to 1/N in the row dimension and its definition is accordingly lower.
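The two scan modes described in S202 and S203 can be illustrated with a small sketch. This is not the patent's implementation, only a hypothetical model: rows inside the gaze area each get their own time slot, while rows outside it are grouped into pixel groups of N rows that share one slot (and hence display the same image data).

```python
def build_scan_schedule(total_rows, gaze_rows, n):
    """Return a list of row groups, one group per scan time slot.

    Rows in `gaze_rows` are scanned one per slot; other rows are
    collected into groups of `n` adjacent rows scanned simultaneously.
    """
    schedule = []
    group = []
    for row in range(total_rows):
        if row in gaze_rows:
            # Flush any pending non-gaze group before the gaze area starts.
            if group:
                schedule.append(group)
                group = []
            schedule.append([row])      # one gaze row per slot
        else:
            group.append(row)
            if len(group) == n:         # N non-gaze rows share one slot
                schedule.append(group)
                group = []
    if group:                           # trailing partial group
        schedule.append(group)
    return schedule

# 8 rows, rows 2-5 in the gaze area, N = 2:
schedule = build_scan_schedule(8, set(range(2, 6)), 2)
print(schedule)  # [[0, 1], [2], [3], [4], [5], [6, 7]]
```

Eight rows are covered in six time slots: the four gaze rows keep full resolution while the four non-gaze rows collapse into two shared slots, which is the 1/N row-dimension compression the text describes.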
As described above, in some embodiments the position of the gaze area is calculated by the host, and information representing that position (hereinafter referred to as position sub-information) is sent to the execution body. Depending on the connection between the host and the execution body, the position sub-information may take different forms, but it should be data that the connection can carry.
Illustratively, the host and the execution body are connected through a DP (DisplayPort) interface, which can transmit image data, so the position sub-information is represented in the form of an image. How information is transferred between the host and the execution body through image data is described below.
In one possible embodiment, the host adds a preset information row to the image to be displayed, where the preset information row includes one or more rows of pixels and is used to represent the encoded information. The pixel value of each pixel in the preset information row corresponds to M bits of the encoded information, where M is any positive integer: the x-th pixel in the preset information row corresponds to the ((x-1)×M+1)-th to the (x×M)-th bits of the encoded information, where x is any positive integer from 1 to L and L is the total number of pixels in the preset information row.
For convenience of description, M=2 is taken as an example; the principle is the same for M=1 and M>2 and is not repeated here. With M=2, each pixel corresponds to two bits of the encoded information. With the encoded information in binary form, a two-bit code has four possible values, "00", "01", "10" and "11", which are associated in advance with four colors. Assume the first color corresponds to "00", the second to "01", the third to "10" and the fourth to "11". If the code corresponding to a pixel in the preset information row is "00", the host sets that pixel's value to the first color; if the code is "01", the host sets it to the second color; and so on. For example, if the first and second bits of the encoded information are both 0, the code corresponding to the first pixel of the preset information row is "00", so the host sets the value of that pixel to the first color.
The first, second, third and fourth colors may be any four colors, but the color difference between any two of them should be as large as possible; illustratively, in one possible embodiment they are black, blue, red and white, respectively.
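The host-side encoding with M=2 can be sketched as follows. The RGB triples for the four colors are only the illustrative black/blue/red/white example from the text, not values fixed by the patent:

```python
# Two-bit code -> preset color (example palette from the text).
CODE_TO_COLOR = {
    "00": (0, 0, 0),        # black
    "01": (0, 0, 255),      # blue
    "10": (255, 0, 0),      # red
    "11": (255, 255, 255),  # white
}

def encode_info_row(bits, row_length):
    """Map a binary string (length a multiple of 2) onto a row of pixel colors."""
    assert len(bits) % 2 == 0
    pixels = [CODE_TO_COLOR[bits[i:i + 2]] for i in range(0, len(bits), 2)]
    # Pad the rest of the information row with the "00" color.
    pixels += [CODE_TO_COLOR["00"]] * (row_length - len(pixels))
    return pixels

row = encode_info_row("0001", 4)
print(row)  # [(0, 0, 0), (0, 0, 255), (0, 0, 0), (0, 0, 0)]
```

The first pixel carries bits "00" (black) and the second carries "01" (blue), matching the x-th-pixel-to-bits mapping given above.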
After receiving the image to be displayed with the preset information row added, the execution body determines the codes corresponding to the pixel values in the preset information row according to the preset correspondence between codes and pixel values, thereby obtaining the encoded information; in this way information is transferred between the host and the execution body through image data.
For example, if the first pixel in the preset information row is the first color, the execution body determines that the first and second bits of the encoded information are both "0"; if the second pixel is the second color, it determines that the third bit is "0" and the fourth bit is "1"; and so on, until the execution body has determined every bit, i.e. recovered the encoded information.
Depending on the length of the encoded information and the number of pixels in the preset information row, the execution body may determine the codes corresponding to all of the pixel values in the preset information row, or only to some of them. For example, if the preset information row contains 1920 pixels and the encoded information is 80 bits long, the execution body can recover the encoded information by determining only the codes corresponding to the first 40 pixels, without determining those of the remaining 1880 pixels.
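The display-side decoding is the inverse lookup, and the 1920-pixel/80-bit example above reduces to reading 80 / 2 = 40 pixels. A minimal sketch, again using the assumed example palette:

```python
# Preset color -> two-bit code (inverse of the example palette).
COLOR_TO_CODE = {
    (0, 0, 0): "00", (0, 0, 255): "01",
    (255, 0, 0): "10", (255, 255, 255): "11",
}

def decode_info_row(pixels, info_bits):
    """Read only as many pixels as needed to recover `info_bits` bits (M = 2)."""
    needed = info_bits // 2
    return "".join(COLOR_TO_CODE[p] for p in pixels[:needed])

# An 80-bit message in a 1920-pixel information row needs only the first 40 pixels:
pixels = [(0, 0, 0)] * 1920
pixels[0], pixels[1] = (0, 0, 255), (255, 0, 0)  # first four bits: 0110
bits = decode_info_row(pixels, 80)
print(len(bits), bits[:4])  # 80 0110
```

The remaining 1880 pixels are never inspected, which is exactly the shortcut the paragraph describes.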
The encoded information should include at least the position sub-information, so that the execution body can determine the gaze area from it. According to actual requirements, the encoded information may also include other information besides the position sub-information. Illustratively, it may further include one or more of the following:
region adjustment sub-information and compression mode sub-information.
The compression mode sub-information represents a compression mode, and the execution body determines the value of N according to it. As described above, since the non-gaze area displays only 1/N of the image data, the image data is in effect compressed to 1/N; therefore, if the compression mode represented by the compression mode sub-information is one-half compression, the execution body determines that N is 2; if it is one-quarter compression, N is 4; and so on.
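The compression-mode-to-N mapping can be sketched directly from the examples in the text (one-half compression gives N = 2, one-quarter gives N = 4); the 2-bit codes follow the [61:60] encoding listed in Table 2:

```python
# Compression-mode code -> N (rows per simultaneously scanned pixel group).
COMPRESSION_TO_N = {
    "00": 1,   # not compressed
    "01": 2,   # one-half compression
    "10": 4,   # one-quarter compression
    "11": 8,   # one-eighth compression
}

def rows_displayed(total_rows, compression_code):
    """Distinct image rows shown in a non-gaze region of `total_rows` rows."""
    n = COMPRESSION_TO_N[compression_code]
    return total_rows // n

print(rows_displayed(1080, "10"))  # 270: only 1/4 of the rows carry distinct data
```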
The region adjustment sub-information indicates an adjustment mode; when determining the gaze area, the execution body adjusts the area represented by the position sub-information according to that adjustment mode, and the adjusted area serves as the gaze area.
It can be understood that in some applications the field-of-view area changes slightly as the pose of the human eye is finely adjusted; for example, moving the eyes backward enlarges the field-of-view area by a certain amount. Although the position sub-information could be updated so that the area it represents coincides with the changed gaze area, re-determining the position sub-information occupies system resources. For example, if the position sub-information is expressed as the coordinates of two vertices on a diagonal of the gaze area, updating it requires determining the vertex coordinates of the changed gaze area. Instead, the host can determine how the gaze area changes from the adjustment of the eye pose and generate region adjustment sub-information representing that change, so that the execution body adjusts the original gaze area into the changed one according to the region adjustment sub-information. Since the position sub-information does not need to be re-determined, relatively few system resources are occupied.
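A hypothetical sketch of this adjustment: the position sub-information is taken as a rectangle given by two diagonal vertices, and the region adjustment sub-information is reduced to a single expansion delta instead of a full set of new coordinates (the delta-based interface is an assumption for illustration, not the patent's format):

```python
def adjust_gaze_area(top_left, bottom_right, delta):
    """Grow the gaze rectangle by `delta` pixels on every side, clamped at 0."""
    (x0, y0), (x1, y1) = top_left, bottom_right
    return (max(0, x0 - delta), max(0, y0 - delta)), (x1 + delta, y1 + delta)

# The eye moves back, so the field of view grows by 16 pixels on each side:
area = adjust_gaze_area((100, 200), (500, 600), 16)
print(area)  # ((84, 184), (516, 616))
```

Transmitting one small delta is cheaper than re-deriving and re-sending both vertex coordinates, which is the resource saving the paragraph argues for.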
In addition to the foregoing position sub-information, mode sub-information, region adjustment sub-information, and compression mode sub-information, the encoded information may include other information according to actual requirements, which is not limited in this embodiment. Illustratively, in one possible embodiment, the information represented by each bit of the encoded information is as shown in Table 1:
TABLE 1 Meaning of the bits of the encoded information
Taking the second row of Table 1 as an example, [79:64] indicates that bits 79 to 64 of the encoded information are encoded as one piece of sub-information, 16 indicates that this sub-information is 16 bits long, and "flag bit" indicates that the sub-information is used to represent flag bits; the third to ninth rows are read in the same way. [63:56] is encoded as the compression mode sub-information, [39:8] as the position sub-information, and [7:0] as the region adjustment sub-information.
The encoding of [63:56] is shown in Table 2:

Bit position | Encoding | Information represented
[63:62]      | 00       | Sequential scanning
[63:62]      | 01       | Non-sequential scanning
[61:60]      | 00       | No compression
[61:60]      | 01       | One-half compression
[61:60]      | 10       | One-quarter compression
[61:60]      | 11       | One-eighth compression
[59:56]      | 0000     | Reserved

TABLE 2 Meaning of the bits of the compression mode sub-information
The second row of Table 2 indicates that when [63:62] is encoded as "00" the scan order is sequential scanning; the third row indicates that when [63:62] is "01" the scan order is non-sequential scanning; the fourth row indicates that when [61:60] is "00" the image data is not compressed; the fifth row indicates that when [61:60] is "01" the compression mode is one-half compression; and so on.
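The decoding of the [63:56] byte per Table 2 can be sketched as follows. The bit layout follows the table; treating the eight bits as a single byte value is an assumption made for illustration.

```python
# Sketch of decoding the compression mode sub-information byte [63:56]
# according to Table 2. How the byte is fetched from the encoded
# information is an assumption; the bit meanings follow the table.

def decode_compression_byte(b):
    scan = (b >> 6) & 0b11            # bits [63:62]: scan order
    comp = (b >> 4) & 0b11            # bits [61:60]: compression mode
    scan_mode = {0b00: "sequential", 0b01: "non-sequential"}[scan]
    n = {0b00: 1, 0b01: 2, 0b10: 4, 0b11: 8}[comp]  # compression factor N
    return scan_mode, n

# "01" scan order + "10" compression: non-sequential scan, N = 4.
assert decode_compression_byte(0b0110_0000) == ("non-sequential", 4)
```

The returned N is exactly the value the execution body uses to size the pixel groups outside the gaze area.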
Sequential scanning means scanning the pixel rows in order of their row coordinates; non-sequential scanning means scanning the pixel rows in the gaze area first and then the pixel rows not in the gaze area. For example, assume there are four pixel rows, denoted the first, second, third, and fourth pixel rows, where the second and third pixel rows are in the gaze area, the first and fourth are not, and the order of the row coordinates is: first pixel row→second pixel row→third pixel row→fourth pixel row. With sequential scanning, the scan order is first pixel row, second pixel row, third pixel row, fourth pixel row; with non-sequential scanning, the scan order is: second pixel row→third pixel row→first pixel row→fourth pixel row.
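The non-sequential ordering in the example above amounts to a stable partition of the row indices. A minimal sketch (row-index representation is an assumption):

```python
# Sketch of non-sequential scanning: gaze-area rows first, then the
# remaining rows, each group kept in ascending row-coordinate order.

def non_sequential_order(num_rows, gaze_rows):
    gaze = [r for r in range(num_rows) if r in gaze_rows]
    rest = [r for r in range(num_rows) if r not in gaze_rows]
    return gaze + rest

# Four rows (0-based), the second and third rows in the gaze area:
assert non_sequential_order(4, {1, 2}) == [1, 2, 0, 3]
```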
In some application scenarios, to control each pixel in the display more conveniently, the pixels in the display are divided into a plurality of pixel islands, each pixel island including a number of pixels. In these scenarios, the foregoing S202 may be implemented as shown in fig. 4:
S2021, determining a first scan order for sequentially scanning each first pixel in the gaze area when each pixel row located in the gaze area of the display is scanned.
For example, assuming there are two pixel rows in total in the gaze area, denoted the first pixel row and the second pixel row respectively, the first scan order is: the first pixel of the first pixel row, the first pixel of the second pixel row, the second pixel of the first pixel row, the second pixel of the second pixel row, and so on.
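The interleaving in this example can be generated mechanically; a sketch under the assumption that pixels are addressed by (row, column) pairs:

```python
# Sketch of the first scan order: pixels of the gaze-area rows are
# interleaved column by column, i.e. the first pixel of every row,
# then the second pixel of every row, and so on.

def first_scan_order(gaze_rows, row_len):
    return [(row, col) for col in range(row_len) for row in gaze_rows]

order = first_scan_order(["row1", "row2"], 3)
assert order == [("row1", 0), ("row2", 0),
                 ("row1", 1), ("row2", 1),
                 ("row1", 2), ("row2", 2)]
```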
S2022, determining a first turn-on order of the first switches that control the first pixels, according to the pixel island to which each first pixel belongs and the first scan order.
The division of pixel islands may vary with the application scenario; illustratively, in one possible embodiment, every 11 pixels are divided into one pixel island. Each pixel is controlled by a switch, and different pixels may be controlled by the same switch, as illustrated in fig. 6. In fig. 6 there are two pixel islands, each including 11 pixels, where the 1st, 3rd, 5th, 7th, 9th, and 11th pixels of the first pixel island and the 2nd, 4th, 6th, 8th, and 10th pixels of the second pixel island are controlled by a MUX (multiplexer) 1 switch, while the 2nd, 4th, 6th, 8th, and 10th pixels of the first pixel island and the 1st, 3rd, 5th, 7th, 9th, and 11th pixels of the second pixel island are controlled by a MUX2 switch.
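The MUX assignment just described follows a simple parity rule, sketched below. The function name and 1-based pixel positions are illustrative conventions matching the text, not names from the patent.

```python
# Sketch of the MUX assignment for the two 11-pixel islands described
# above: odd-position pixels of island 1 and even-position pixels of
# island 2 go to MUX1; the complementary pixels go to MUX2.

def mux_for(island, pos):
    """island: 1 or 2; pos: 1-based pixel position within the island."""
    odd = pos % 2 == 1
    if island == 1:
        return "MUX1" if odd else "MUX2"
    return "MUX2" if odd else "MUX1"

assert mux_for(1, 1) == "MUX1" and mux_for(1, 2) == "MUX2"
assert mux_for(2, 1) == "MUX2" and mux_for(2, 2) == "MUX1"
```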
S2023, turning on the first switches of the pixel islands sequentially in the first turn-on order.
Because the first turn-on order is determined according to the first scan order, sequentially turning on the first switches of the pixel islands in the first turn-on order causes the first pixels to be turned on in the first scan order, so that each pixel row in the gaze area of the display is scanned in turn.
Like the pixel rows in the gaze area, the foregoing S203 can be implemented for the pixel rows not in the gaze area in the way shown in fig. 5:
S2031, determining a second scan order of the second pixels not in the gaze area when, for each pixel group in the display that is not in the gaze area, all pixel rows in the pixel group are scanned simultaneously in sequence.
S2032, determining a second turn-on order of the second switches that control the second pixels, according to the pixel island to which each second pixel belongs and the second scan order.
S2033, turning on the second switches in the pixel islands sequentially in the second turn-on order.
With this embodiment, the scanning of pixel rows can be converted, through pixel rearrangement, into the control of the individual pixels in the pixel islands, which effectively improves scanning accuracy.
Corresponding to the foregoing display driving method, an embodiment of the present invention further provides a display driving device, as shown in fig. 7, including:
a gaze area determining module 701, configured to determine a field of view area of a user on a display as a gaze area;
a first scanning module 702, configured to sequentially scan each pixel row in the gaze area of the display;
a second scanning module 703, configured to, for each pixel group in the display that is not in the gaze area, simultaneously scan all pixel rows in the pixel group in sequence, where each pixel group includes N adjacent pixel rows and N is a positive integer greater than 1.
In one possible embodiment, the gaze area determining module determining a field of view area of the user on the display as the gaze area includes:
determining the codes corresponding to the pixel values in a preset information row of an image to be displayed according to a preset correspondence between codes and pixel values, to obtain encoded information, where the encoded information includes position sub-information, and the preset information row is added to the image to be displayed by the host according to the encoded information and the correspondence;
and determining the region represented by the position sub-information to obtain the gaze area.
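Recovering the encoded information from the preset information row amounts to mapping each pixel value back to its code. The sketch below is hypothetical: the concrete correspondence (pixel value 0 encodes bit 0, 255 encodes bit 1) and the bit ordering are assumptions, since the patent only requires that some preset correspondence exist.

```python
# Hypothetical sketch of decoding the preset information row back into
# encoded information. CODE_FOR_PIXEL is an assumed correspondence
# between pixel values and codes; the real one is implementation-defined.

CODE_FOR_PIXEL = {0: 0, 255: 1}

def decode_info_row(pixel_row):
    bits = [CODE_FOR_PIXEL[p] for p in pixel_row]
    value = 0
    for b in bits:                 # most significant bit first (assumed)
        value = (value << 1) | b
    return value

# An 8-pixel stretch encoding the byte 0b10110010:
assert decode_info_row([255, 0, 255, 255, 0, 0, 255, 0]) == 0b10110010
```

The decoded value would then be split into the sub-information fields (position, compression mode, region adjustment) according to the bit layout of Table 1.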
In a possible embodiment, the position sub-information is obtained by:
determining the position and azimuth angle of the user's eyes relative to the display according to a captured human eye image of the user;
determining the field of view area of the user on the display based on the position and the azimuth angle;
and generating position sub-information representing the field of view area.
In a possible embodiment, the encoded information further includes one or both of region adjustment sub-information and compression mode sub-information;
the first scanning module is further configured to determine N according to the compression mode represented by the compression mode sub-information;
the gaze area determining module determining the region represented by the position sub-information to obtain the gaze area includes:
adjusting the region represented by the position sub-information according to the region adjustment sub-information to obtain the gaze area.
In one possible embodiment, the first scanning module sequentially scanning each pixel row in the gaze area of the display includes:
determining a first scan order for sequentially scanning each first pixel in the gaze area when each pixel row located in the gaze area of the display is scanned;
determining a first turn-on order of the first switches that control the first pixels, according to the pixel island to which each first pixel belongs and the first scan order;
sequentially turning on the first switches in the pixel islands in the first turn-on order;
and the second scanning module, for each pixel group in the display that is not in the gaze area, simultaneously scanning all pixel rows in the pixel group in sequence includes:
determining a second scan order of the second pixels not in the gaze area when, for each pixel group in the display that is not in the gaze area, all pixel rows in the pixel group are scanned simultaneously in sequence;
determining a second turn-on order of the second switches that control the second pixels, according to the pixel island to which each second pixel belongs and the second scan order;
and sequentially turning on the second switches in the pixel islands in the second turn-on order.
Referring to fig. 8, an embodiment of the present invention further provides an intelligent display system, including a host 810 and a display 820.
The host 810 includes an image capture device 811 and a processor 812, and the display 820 includes a panel 822 and a field programmable gate array (FPGA) 821;
the image capture device 811 is configured to capture human eye images of a user;
the processor 812 is configured to determine the field of view area of the user on the display according to the captured human eye image;
the FPGA 821 is configured to obtain the field of view area determined by the processor as the gaze area; sequentially scan each pixel row in the gaze area of the panel 822; and, for each pixel group in the panel 822 that is not in the gaze area, simultaneously scan all pixel rows in the pixel group in sequence, where each pixel group includes N adjacent pixel rows and N is a positive integer greater than 1;
the panel 822 is configured to display the image to be displayed under the driving of the FPGA 821.
The foregoing processor 812 may be a CPU and/or a GPU in the host 810. In one exemplary embodiment, the processor 812 includes a CPU and a GPU, where the CPU is configured to determine the field of view area of the user on the display according to the captured human eye image, and the GPU is configured to add, to the image to be displayed, a preset information row representing the encoded information, the encoded information including position sub-information representing the position of the field of view area. For the encoded information and the preset information row, reference may be made to the foregoing description, which is not repeated here.
The architecture of the FPGA 821 may include a mode control module 8211, a GOA (Gate driver On Array, array substrate row driving) timing module 8212, a MUX timing module 8213, an image compression module 8214, a data rearrangement module 8215, and a CEDS (Clock Embedded Differential Signaling) module 8216, as shown in fig. 9.
The mode control module 8211 is configured to switch the signal source, for example to a DP signal source or to a BIST signal source. The GOA timing module 8212 is configured to control the GOA timing of the panel 822, and the MUX timing module 8213 is configured to control the panel MUX timing; together they realize the scanning of each pixel on the panel. The image compression module 8214 is configured to compress image data, and the data rearrangement module 8215 is configured to implement the steps S2021-S2023 and S2031-S2033 described above. The CEDS module 8216 is configured to send the compressed image data to the panel 822, so that the panel 822 displays the image data under the driving of the FPGA 821.
An embodiment of the present invention further provides an electronic device, as shown in fig. 10, including a processor 1001, a communication interface 1002, a memory 1003, and a communication bus 1004, where the processor 1001, the communication interface 1002, and the memory 1003 communicate with each other through the communication bus 1004;
the memory 1003 is configured to store a computer program;
the processor 1001 is configured to execute the program stored in the memory 1003 to implement the following steps:
determining a field of view area of a user on a display as a gaze area;
sequentially scanning each pixel row in the gaze area of the display;
and, for each pixel group in the display that is not in the gaze area, simultaneously scanning all pixel rows in the pixel group in sequence, where each pixel group includes N adjacent pixel rows and N is a positive integer greater than 1.
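The three steps above can be sketched as a single scan schedule: rows in the gaze area are scanned one by one at full resolution, while rows outside it are grouped into pixel groups of N adjacent rows that are driven simultaneously. Row indices and the list-of-lists schedule format are illustrative conventions, not the patent's data structures.

```python
# High-level sketch of the driving scheme: each inner list is one scan
# step; a single-row list is a full-resolution gaze-area scan, a longer
# list is a pixel group of up to N rows driven simultaneously.

def scan_schedule(num_rows, gaze_rows, n):
    schedule = []
    row = 0
    while row < num_rows:
        if row in gaze_rows:
            schedule.append([row])            # gaze area: scan row alone
            row += 1
        else:
            group = [r for r in range(row, min(row + n, num_rows))
                     if r not in gaze_rows]
            schedule.append(group)            # non-gaze: N rows together
            row += len(group)
    return schedule

# 8 rows, rows 2-4 in the gaze area, N = 2:
assert scan_schedule(8, {2, 3, 4}, 2) == [[0, 1], [2], [3], [4], [5, 6], [7]]
```

Each step outside the gaze area emits one row's worth of data for N rows, which is why the image data is effectively compressed to 1/N there.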
The communication bus of the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The memory may include a random access memory (RAM) or a non-volatile memory (NVM), such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; but also digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
In yet another embodiment of the present invention, there is also provided a computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the steps of any of the display driving methods described above.
In yet another embodiment of the present invention, there is also provided a computer program product containing instructions that, when run on a computer, cause the computer to perform any of the display driving methods of the above embodiments.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., solid state disk (SSD)).
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the others. In particular, the descriptions of the apparatus, electronic device, system, computer-readable storage medium, and computer program product embodiments are relatively brief because they are substantially similar to the method embodiments; for relevant details, reference may be made to the partial description of the method embodiments.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (11)

1. A display driving method, the method comprising:
determining a field of view area of a user on a display as a gaze area;
determining a first scan order for sequentially scanning each first pixel in the gaze area when each pixel row located in the gaze area of the display is scanned;
determining a first turn-on order of first switches that control the first pixels, according to the pixel island to which each first pixel belongs and the first scan order;
sequentially turning on the first switches in the pixel islands in the first turn-on order;
determining a second scan order of second pixels not in the gaze area when, for each pixel group in the display that is not in the gaze area, all pixel rows in the pixel group are scanned simultaneously in sequence;
determining a second turn-on order of second switches that control the second pixels, according to the pixel island to which each second pixel belongs and the second scan order;
and sequentially turning on the second switches in the pixel islands in the second turn-on order; wherein each pixel group comprises N adjacent pixel rows, and N is a positive integer greater than 1.
2. The method of claim 1, wherein the determining a field of view area of the user on the display as a gaze area comprises:
determining the codes corresponding to the pixel values in a preset information row of an image to be displayed according to a preset correspondence between codes and pixel values, to obtain encoded information, wherein the encoded information comprises position sub-information, and the preset information row is added to the image to be displayed by the host according to the encoded information and the correspondence;
and determining the region represented by the position sub-information to obtain the gaze area.
3. The method of claim 2, wherein the position sub-information is obtained by:
determining the position and azimuth angle of the user's eyes relative to the display according to a captured human eye image of the user;
determining the field of view area of the user on the display based on the position and the azimuth angle;
and generating position sub-information representing the field of view area.
4. The method of claim 2, wherein the encoded information further comprises one or both of region adjustment sub-information and compression mode sub-information;
the method further comprises:
determining N according to the compression mode represented by the compression mode sub-information;
and the determining the region represented by the position sub-information to obtain the gaze area comprises:
adjusting the region represented by the position sub-information according to the region adjustment sub-information to obtain the gaze area.
5. A display driving device, the device comprising:
a gaze area determining module, configured to determine a field of view area of a user on a display as a gaze area;
a first scanning module, configured to determine a first scan order for sequentially scanning each first pixel in the gaze area when each pixel row located in the gaze area of the display is scanned; determine a first turn-on order of first switches that control the first pixels, according to the pixel island to which each first pixel belongs and the first scan order; and sequentially turn on the first switches in the pixel islands in the first turn-on order;
a second scanning module, configured to determine a second scan order of second pixels not in the gaze area when, for each pixel group in the display that is not in the gaze area, all pixel rows in the pixel group are scanned simultaneously in sequence; determine a second turn-on order of second switches that control the second pixels, according to the pixel island to which each second pixel belongs and the second scan order; and sequentially turn on the second switches in the pixel islands in the second turn-on order, wherein each pixel group comprises N adjacent pixel rows, and N is a positive integer greater than 1.
6. The device of claim 5, wherein the gaze area determining module determining a field of view area of a user on a display as a gaze area comprises:
determining the codes corresponding to the pixel values in a preset information row of an image to be displayed according to a preset correspondence between codes and pixel values, to obtain encoded information, wherein the encoded information comprises position sub-information, and the preset information row is added to the image to be displayed by the host according to the encoded information and the correspondence;
and determining the region represented by the position sub-information to obtain the gaze area.
7. The device of claim 6, wherein the position sub-information is obtained by:
determining the position and azimuth angle of the user's eyes relative to the display according to a captured human eye image of the user;
determining the field of view area of the user on the display based on the position and the azimuth angle;
and generating position sub-information representing the field of view area.
8. The device of claim 6, wherein the encoded information further comprises one or both of region adjustment sub-information and compression mode sub-information;
the first scanning module is further configured to determine N according to the compression mode represented by the compression mode sub-information;
and the gaze area determining module determining the region represented by the position sub-information to obtain the gaze area comprises:
adjusting the region represented by the position sub-information according to the region adjustment sub-information to obtain the gaze area.
9. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
and the processor is configured to implement the method steps of any one of claims 1-4 when executing the program stored on the memory.
10. An intelligent display system, comprising: a host and a display;
the host comprises an image capture device and a processor, and the display comprises a panel and a field programmable gate array (FPGA);
the image capture device is configured to capture human eye images of a user;
the processor is configured to determine a field of view area of the user on the display according to the captured human eye image;
the FPGA is configured to obtain the field of view area determined by the processor as a gaze area; determine a first scan order for sequentially scanning each first pixel in the gaze area when each pixel row located in the gaze area of the display is scanned; determine a first turn-on order of first switches that control the first pixels, according to the pixel island to which each first pixel belongs and the first scan order; sequentially turn on the first switches in the pixel islands in the first turn-on order; determine a second scan order of second pixels not in the gaze area when, for each pixel group in the display that is not in the gaze area, all pixel rows in the pixel group are scanned simultaneously in sequence; determine a second turn-on order of second switches that control the second pixels, according to the pixel island to which each second pixel belongs and the second scan order; and sequentially turn on the second switches in the pixel islands in the second turn-on order, wherein each pixel group comprises N adjacent pixel rows, and N is a positive integer greater than 1;
and the panel is configured to display an image to be displayed under the driving of the FPGA.
11. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein a computer program which, when executed by a processor, implements the method steps of any of claims 1-4.
CN202111521857.4A 2021-12-13 2021-12-13 Display driving method and device, electronic equipment and intelligent display system Active CN114217691B (en)

Publications (2)

Publication Number Publication Date
CN114217691A CN114217691A (en) 2022-03-22
CN114217691B true CN114217691B (en) 2023-12-26

Family

ID=80701603






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant