CN112102172A - Image processing method, image processing apparatus, display system, and storage medium - Google Patents
Image processing method, image processing apparatus, display system, and storage medium
- Publication number
- CN112102172A (Application number CN202010996736.4A)
- Authority
- CN
- China
- Prior art keywords
- image information
- image processing
- area
- image
- display panel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4092—Image resolution transcoding, e.g. by using client-server architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/19—Sensors therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Abstract
The application provides an image processing method, an image processing apparatus, a display system, and a storage medium. The image processing method includes: acquiring visual gaze area information of a user, the visual gaze area information including the area range of the user's visual gaze area on the display panel; determining the user's visual gaze area and non-gaze area on the display panel according to the visual gaze area information; compressing the resolution of the first image information corresponding to the non-gaze area; and sending the second image information corresponding to the visual gaze area, together with the compressed first image information, to a second image processing device. The method preserves the image quality of the visual gaze area while reducing the overall data volume of the image to be displayed, which lightens the computational load on the GPU and lets the user obtain a better image-quality experience on existing hardware.
Description
Technical Field
The present application relates to the field of display technologies, and in particular, to an image processing method, an image processing apparatus, a display system, and a storage medium.
Background
High resolution is the current development trend of display devices. Whether in 4K televisions (televisions with 4K resolution), 8K televisions (televisions with 8K resolution), or the near-eye displays popular in recent years, PPI (Pixels Per Inch) keeps increasing and has already surpassed the retina level.
E-sports display products have the same requirements: users expect better game image quality while also pursuing a high refresh rate. However, because of the limited computing capability of the GPU (Graphics Processing Unit), the data volume of a high-definition picture is too large for a high refresh rate and good picture quality to be guaranteed at the same time, so the user has to reduce the picture quality to keep the primary technical parameter, the refresh rate, within requirements. Raising the computing capability with a dual-GPU design, on the other hand, makes the product too expensive.
Disclosure of Invention
To address the shortcomings of existing approaches, the present application provides an image processing method, an image processing device, a display system, and a storage medium, aiming to solve the technical problem that the image quality of high-definition display products cannot be guaranteed in the prior art.
In a first aspect, an embodiment of the present application provides an image processing method applied to a first image processing device, the method including:
acquiring visual gaze area information of a user, the visual gaze area information including the area range of the user's visual gaze area on the display panel;
determining the user's visual gaze area and non-gaze area on the display panel according to the visual gaze area information;
compressing the resolution of the first image information corresponding to the non-gaze area;
and sending the second image information corresponding to the visual gaze area, together with the compressed first image information, to a second image processing device.
In a second aspect, an embodiment of the present application provides an image processing apparatus serving as the first image processing device, including:
a memory;
a processor electrically connected to the memory;
the memory storing a computer program that is executed by the processor to implement the image processing method provided by the first aspect of the embodiments of the present application.
In a third aspect, an embodiment of the present application provides an image processing method applied to a second image processing device, including:
receiving the first image information and the second image information sent by the first image processing device;
rendering the first image information and the second image information respectively;
synthesizing the rendered first image information and the rendered second image information;
and sending the synthesized image information to the display panel so that the display panel displays it.
In a fourth aspect, an embodiment of the present application provides an image processing apparatus serving as the second image processing device, including:
a memory;
a graphics processing unit (GPU) electrically connected to the memory;
the memory storing a computer program that is executed by the GPU to implement the image processing method provided by the third aspect of the embodiments of the present application.
In a fifth aspect, an embodiment of the present application provides a display system, including: a gaze area identification device, a display panel, the first image processing device provided by the second aspect of the embodiments of the present application, and the second image processing device provided by the fourth aspect;
the gaze area identification device is communicatively connected to the first image processing device, and the display panel is electrically connected to the second image processing device;
the gaze area identification device is configured to determine the area range of the user's visual gaze area on the display panel and to send visual gaze area information including that range to the first image processing device;
the display panel is configured to display the synthesized image information output by the second image processing device.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements: the image processing method provided by the first aspect of the embodiments of the present application, or the image processing method provided by the third aspect of the embodiments of the present application.
The technical solutions provided by the embodiments of the present application have at least the following beneficial effects:
The method and device realize gaze-aware display. Specifically, the user's visual gaze area and non-gaze area on the display panel are determined; the image information of the visual gaze area (the second image information) keeps its original resolution and is output to the visual gaze area, while the image information of the non-gaze area (the first image information) is compressed so that the non-gaze area is output at a lower resolution. The image quality of the visual gaze area is preserved while the overall data volume of the image to be displayed is reduced by shrinking the non-gaze data, which lightens the computational load on the GPU and lets the user obtain a better image-quality experience on existing hardware.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic structural frame diagram of a display system according to an embodiment of the present disclosure;
FIG. 2 is a schematic view of the partitions of the physiological structure of the human eye and the viewing-angle ranges of its different regions;
fig. 3 is a schematic diagram illustrating the principle of determining a visual fixation area based on eye-related parameters in the embodiment of the present application;
fig. 4 is a schematic area diagram of a visual fixation area and a non-fixation area determined on a screen (display panel) in the embodiment of the present application;
fig. 5 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a first area and a second area of a display panel according to an embodiment of the present disclosure;
FIG. 7 is a timing diagram of the scanning signals for the rows of B1-B3 according to an embodiment of the present application;
FIG. 8 is a timing diagram of the scanning signals for the rows of B6-B8 according to an embodiment of the present application;
FIG. 9 is a timing diagram of the scanning signal for the rows containing the gaze area according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar parts or parts having the same or similar functions throughout. In addition, if a detailed description of the known art is not necessary for illustrating the features of the present application, it is omitted. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments.
An embodiment of the present application provides a display system, as shown in FIG. 1, including: a gaze area identification device 101, a display panel 102, a first image processing device 103, and a second image processing device 104.
The gaze area identification device 101 is communicatively connected to the first image processing device 103, and the display panel 102 is electrically connected to the second image processing device 104.
The gaze area identification device 101 is configured to determine the area range of the user's visual gaze area on the display panel and to send visual gaze area information including that range to the first image processing device; the display panel is configured to display the synthesized image information output by the second image processing device.
In an alternative embodiment, the gaze area identification device 101 is specifically configured to: detect the user's eye-movement track; determine the user's visual focus point, viewing-angle (i.e., gaze-angle) range, viewing distance, and interpupillary distance from the eye-movement track; determine the width of the visual gaze area on the display panel from the viewing-angle range, viewing distance, and interpupillary distance; and determine the user's visual gaze area on the display panel from that width and the visual focus point, treating the rest of the display panel as the non-gaze area.
Optionally, when determining the visual gaze area from the width and the visual focus point, a square region centered on the display-panel position corresponding to the visual focus point, with side length equal to the width, may be taken as the visual gaze area; alternatively, a rectangular region centered on that position may be selected in combination with other width parameters.
The user's viewing angle can be determined with reference to the physiological structure of the human eye; FIGS. 2 and 3 show the partitions of that structure and the viewing-angle ranges corresponding to each region.
Referring to FIG. 2, from inside to outside the regions of the human eye are: the foveal (fovea, or central) region (diameter about 1.5 mm, where visual cells are concentrated), the parafoveal (parafovea, or paracentral) region (diameter about 2.5 mm, with a small number of rod cells), the perifoveal (perifovea) region (diameter about 5.5 mm), and the surrounding macular region (macula, diameter about 6-7 mm).
Referring to FIG. 2, the corresponding viewing-angle ranges are approximately: 5° (5 degrees) for the foveal region, 8° for the parafoveal region (specifically 8°20′ in FIG. 2, i.e., 8 degrees 20 minutes), and 18°20′ (18 degrees 20 minutes) for the perifoveal region.
The recognition capability of the human eye decreases gradually from inside to outside. When determining the user's viewing-angle range, the physiological region matching the actual requirement can be selected, and the viewing-angle range determined from that region.
In one example, the viewing-angle range may be set to 18° based on the perifoveal region. With a viewing distance of 40 cm and an interpupillary distance of 6 cm, the width of the visual gaze area works out to 6.332 cm in the manner shown in FIG. 3. A main gaze region with side length 6.332 cm, centered on the display-panel position corresponding to the visual focus point, is then marked off on the display panel as the visual gaze area (see the rectangular box in the middle of FIG. 4), and the remaining panel area is the non-gaze area.
The division into gaze and non-gaze areas is shown in FIG. 4: the rectangular frame in the middle of the display panel on the left is the visual gaze area fixated by the eyes shown on the right, the area outside the frame is the non-gaze area, H denotes the horizontal direction, and V the vertical direction.
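The exact geometry of FIG. 3 is not reproduced in the text, but the quoted numbers (18° viewing angle, 40 cm viewing distance, 6.332 cm width) are consistent with taking the gaze-area width as the viewing distance times the tangent of half the viewing angle; the interpupillary distance presumably enters the full construction of FIG. 3 but is not needed to reproduce the quoted width. The sketch below uses that assumed formula, and the function name and signature are illustrative only:

```python
import math

def gaze_region_width(viewing_distance_cm: float, view_angle_deg: float) -> float:
    """Width of the visual gaze area on the panel.

    Assumption: FIG. 3 is read as width ~= viewing_distance * tan(view_angle / 2).
    This reproduces the ~6.33 cm figure given in the text for 18 degrees at 40 cm.
    """
    return viewing_distance_cm * math.tan(math.radians(view_angle_deg) / 2)

# Patent example: 18 degree viewing angle at 40 cm -> ~6.33 cm square gaze area
width = gaze_region_width(40.0, 18.0)
```

With the patent's numbers this yields about 6.335 cm, matching the 6.332 cm in the text up to rounding.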
The gaze area identification device 101 may be any device with eye-tracking capability, such as a camera-based device or an eye tracker.
Optionally, the display panel 102 includes a plurality of pixels in a display area and a driving circuit; the driving circuit is connected to each pixel in the display area through scanning signal lines. The driving circuit is configured to scan the pixels of any region of the display area (for example, the first and second regions described below) row by row or group by group, where each group scanned during group-by-group scanning includes at least two rows of pixels.
Some optional functions of the display panel 102 in the embodiments of the present application will be described in the following embodiments.
An embodiment of the present application provides an image processing apparatus, as a first image processing apparatus, including: a memory and a processor, the memory being electrically connected to the processor, for example by a bus.
The memory stores a computer program executed by the processor to implement the image processing method provided by the embodiment of the present application (see details described later).
An embodiment of the present application provides an image processing apparatus, as a second image processing apparatus, including: a memory and a GPU, the memory being electrically connected to the GPU, for example by a bus.
The memory stores a computer program executed by the GPU to implement the image processing method provided by the embodiments of the present application (see details described later).
Optionally, in this embodiment of the present application, the first image processing device and the second image processing device may be merged into the same image processing device, a processor and a GPU in the same image processing device share a memory, the processor executes an image processing method of the first image processing device, and the GPU executes an image processing method of the second image processing device.
Those skilled in the art will appreciate that the image processing device provided in the embodiments of the present application may be specially designed and manufactured for the required purposes, or may include known general-purpose computing devices. These devices store computer programs that are selectively activated or reconfigured; such a computer program may be stored in a device-readable (e.g., computer-readable) medium, or in any medium suitable for storing electronic instructions and coupled to a bus.
The memory in the embodiments of the present application may be, but is not limited to: ROM (Read-Only Memory) or another type of static storage device that can store static information and instructions; RAM (Random Access Memory) or another type of dynamic storage device that can store information and instructions; EEPROM (Electrically Erasable Programmable Read-Only Memory); CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.); magnetic disk storage media or other magnetic storage devices; or any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The Processor in the embodiments of the present Application may be a CPU (Central Processing Unit), a general-purpose Processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array), or other Programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. A processor may also be a combination of computing functions, e.g., comprising one or more microprocessors, a DSP and a microprocessor, or the like.
The bus in the embodiments of the present application may include a path that carries information between the aforementioned components. The bus may be a PCI (Peripheral Component Interconnect) bus or an EISA (Extended Industry Standard Architecture) bus. The bus may be divided into an address bus, a data bus, a control bus, etc.
Optionally, the image processing apparatus provided in the embodiment of the present application may further include a transceiver. The transceiver may be used for reception and transmission of signals. The transceiver may allow the image processing device to communicate wirelessly or wiredly with other devices to exchange data. It should be noted that the number of the transceivers in practical application is not limited to one.
Optionally, the image processing apparatus provided in the embodiment of the present application may further include an input unit. The input unit may be used to receive input numeric, character, image and/or sound information or to generate key signal inputs related to user settings and function control of the image processing apparatus. The input unit may include, but is not limited to, one or more of a touch screen, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, a camera, a microphone, and the like.
Optionally, the image processing apparatus provided in the embodiment of the present application may further include an output unit. The output unit may be used to output or present information processed by the processor. The output unit may include, but is not limited to, one or more of a display device, a speaker, a vibration device, and the like.
The embodiments of the present application do not require that the image processing apparatus implement or be provided with all of the illustrated devices, and may instead implement or be provided with more or fewer devices.
Based on the same inventive concept, an embodiment of the present application provides an image processing method, as shown in fig. 5, the method includes:
s501, the first image processing device acquires visual attention area information of a user.
The visual attention area information includes an area range of a visual attention area of the user on the display panel, such as a coordinate range of a rectangular frame area shown in fig. 4, which can be determined by the attention area identifying device, and the specific principle of determining the range may refer to the foregoing contents, which are not described herein again.
S502, the first image processing device determines a visual watching area and a non-watching area of the user on the display panel according to the visual watching area information.
S503, the first image processing apparatus compresses the resolution of the first image information corresponding to the non-gazing area.
In an alternative embodiment, the first image information of the non-gaze area is cut out of the complete image information of the image to be displayed, and the resolution of this first image information alone is compressed. Because only the first image information is compressed and no other image information needs to be processed, the compression workload is reduced.
In another alternative embodiment, the resolution of the complete image information of the image to be displayed, which includes the first image information, is compressed as a whole. Compressing the first image information by compressing the whole frame avoids cutting out the non-gaze area first, which reduces the computation spent on cropping.
Taking a 4K picture as an example, the first image processing device may compress the first image information of the non-gaze area by a factor of 4, so that the compressed resolution is 25% of the original 4K resolution (the value shown in FIG. 4), i.e., Full High Definition (FHD) resolution; for the second image information corresponding to the visual gaze area, the original 4K resolution may be retained, i.e., 100% of the original 4K resolution (the value shown in FIG. 4).
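The 4K-to-FHD arithmetic can be sketched as follows, assuming the standard 3840 x 2160 definition of 4K UHD (the helper name is hypothetical): a 4x reduction in pixel count corresponds to halving each dimension.

```python
import math

def compress_resolution(width: int, height: int, factor: int = 4) -> tuple:
    """Reduce the total pixel count by `factor` by scaling each dimension
    down by sqrt(factor); assumes `factor` is a perfect square."""
    scale = math.isqrt(factor)
    return width // scale, height // scale

# 4K UHD (3840 x 2160) compressed 4x -> FHD (1920 x 1080), i.e. 25% of the pixels
fhd = compress_resolution(3840, 2160, 4)
```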
S504, the first image processing device sends the second image information corresponding to the visual gaze area, together with the compressed first image information, to the second image processing device.
In an alternative embodiment, when sending the compressed first image information to the second image processing device, the first image processing device may send it directly, without sending any other image information.
In another alternative embodiment, sending the compressed first image information to the second image processing device includes sending the compressed complete image information, i.e., transmitting the complete image information that contains the first image information.
Based on the foregoing compression process, the amount of data transfer between the first image processing apparatus and the second image processing apparatus can be greatly reduced.
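Taken together, steps S501-S504 can be sketched as follows under an assumed data layout (a frame represented as a list of pixel rows). This illustrates the "compress the complete image" variant of S503; the function name and layout are assumptions, not the patent's actual implementation.

```python
def split_and_compress(frame, gaze_rect, factor=2):
    """Sketch of S501-S504: keep the gaze rectangle at original resolution
    (second image information) and downsample the complete frame by `factor`
    in each dimension (compressed first image information).

    `frame` is a list of pixel rows; `gaze_rect` is (x0, y0, x1, y1).
    """
    x0, y0, x1, y1 = gaze_rect
    second_info = [row[x0:x1] for row in frame[y0:y1]]       # gaze area, full res
    first_info = [row[::factor] for row in frame[::factor]]  # whole frame, downsampled
    return first_info, second_info

# An 8x8 toy frame with a 2x2 gaze rectangle at (2, 2)-(4, 4)
frame = [[r * 10 + c for c in range(8)] for r in range(8)]
first, second = split_and_compress(frame, (2, 2, 4, 4))
```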
And S505, the second image processing device receives the first image information and the second image information sent by the first image processing device.
Alternatively, when receiving the first image information, compressed first image information directly transmitted by the first image processing apparatus may be received, or complete image information including the first image information transmitted by the first image processing apparatus may be received.
S506, the second image processing device renders the first image information and the second image information respectively.
In an alternative embodiment, when rendering the first image information, the first image information may be rendered directly, without rendering any other image information.
In another alternative embodiment, rendering the first image information includes rendering the complete image information that contains it.
And S507, the second image processing device synthesizes the rendered first image information and the rendered second image information.
In an optional implementation manner, the rendered first image information and the rendered second image information may be directly synthesized to obtain a complete image.
In another alternative embodiment, the rendered second image information is combined with the rendered complete image information. The second image information overlaps the gaze-area portion of the complete image information, so the result is still equivalent to combining the second image information with the non-overlapping first image information.
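This second synthesis variant, pasting the full-resolution gaze area back over the complete frame, can be sketched as follows. The data layout (a frame as a list of pixel rows) and function name are assumptions, and `full_rendered` is assumed to have already been upscaled back to panel resolution:

```python
def composite(full_rendered, second_rendered, gaze_rect):
    """Sketch of S507 (second variant): overwrite the gaze rectangle of the
    complete frame with the full-resolution second image information."""
    x0, y0, x1, y1 = gaze_rect
    out = [row[:] for row in full_rendered]  # copy so the input is untouched
    for dy, row in enumerate(second_rendered):
        out[y0 + dy][x0:x0 + len(row)] = row
    return out

# Paste a 2x2 full-resolution patch into a 4x4 background frame
merged = composite([[0] * 4 for _ in range(4)], [[1, 1], [1, 1]], (1, 1, 3, 3))
```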
In one example, assume the display device is a 15.6-inch product with 4K resolution. With a viewing angle of 18°, the visual gaze area occupies about 6% of the screen, and the ratio of the data volume of the synthesized image information to that of the original 4K picture is:
(6%*4+1)/4=31%
that is, the data amount of the synthesized image information is reduced to 31% of the data amount of the original 4K screen.
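The arithmetic generalizes to any gaze-area fraction and compression factor: the gaze fraction is kept at full resolution while the whole frame contributes 1/factor of its original data. A minimal sketch (helper name hypothetical):

```python
def synthesized_data_ratio(gaze_area_fraction: float, compression_factor: int = 4) -> float:
    """Data volume of the synthesized image relative to the original frame:
    (gaze_fraction * factor + 1) / factor = gaze_fraction + 1 / factor."""
    return gaze_area_fraction + 1.0 / compression_factor

# Patent example: 6% gaze area, 4x compression -> 0.06 + 0.25 = 0.31 (31%)
ratio = synthesized_data_ratio(0.06, 4)
```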
S508, the second image processing apparatus transmits the synthesized image information to the display panel, and causes the display panel to display the synthesized image information.
Optionally, the second image processing device sends the synthesized image information to a driving circuit in the display panel, causing the driving circuit to scan the pixels of a first region of the display panel with a scan signal of a first frequency and input data signals of the first image information to the pixels of the first region, and to scan the pixels of a second region of the display panel with a scan signal of a second frequency and input data signals of the second image information to the pixels of the second region. The first frequency is lower than the second frequency; the second region includes the visual gaze region, and the first region and the second region do not overlap.
In one example, as shown in FIG. 6, A is the visual gaze region and B1-B8 are all non-gaze regions; G1, Gx and Gx+n are the scan signal lines of the 1st, x-th and (x+n)-th rows, respectively; Dx and Dx+m are the data signal lines of the x-th and (x+m)-th columns, respectively; and the specific values of x, m and n are determined as the case may be.
Among the regions shown in FIG. 6, B1, B2 and B3 span the same pixel rows, namely rows 1 to x-1; B4, A and B5 span rows x to x+n; and B6, B7 and B8 span rows x+n+1 to 2160 (assuming 2160 rows in total).
Among the regions shown in FIG. 6, the first region comprises B1-B3 and B6-B8, and the second region comprises B4, A and B5. The driving circuit can output scan signals of the first frequency, as shown in FIG. 7, through scan signal lines G1 to Gx-1 to scan the rows of pixels in B1-B3, and output scan signals of the first frequency, as shown in FIG. 8, through scan signal lines Gx+n+1 to G2160 to scan the rows of pixels in B6-B8; it can output scan signals of the second frequency, as shown in FIG. 9, through scan signal lines Gx to Gx+n to scan the rows of pixels in the second region B4, A and B5.
Because the scan signal frequency of the first region (the first frequency) is lower than that of the second region (the second frequency), the data writing speed of the first region is lower than that of the second region. This guarantees the refresh rate of the visual gaze region within the second region and thus meets the user's needs, while the reduced refresh rate of the first region lightens the computational load on the GPU and improves overall performance.
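The frequency assignment can be sketched as a simple row-to-frequency mapping. The 30 Hz and 60 Hz values and all names here are illustrative assumptions, not values from the patent; only the relation "first frequency lower than second frequency" comes from the text.

```python
def scan_frequency(row, gaze_start, gaze_end, f_first=30, f_second=60):
    """Frequency (Hz) used to scan a given pixel row.

    Rows inside the gaze band [gaze_start, gaze_end] (second region) are
    scanned at the higher second frequency; all other rows (first region)
    at the lower first frequency.
    """
    return f_second if gaze_start <= row <= gaze_end else f_first

# With a 2160-row panel and the gaze band at rows 800..1000:
freqs = [scan_frequency(r, 800, 1000) for r in range(1, 2161)]
```

Here only 201 of the 2160 rows are driven at the high rate, which is what shrinks the data-write burden.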
Optionally, causing the driving circuit to scan the pixels of the first area in the display panel according to the scan signal of the first frequency includes:
enabling the driving circuit to carry out group-by-group scanning on a plurality of groups of pixels in the first area according to a scanning signal of a first frequency; each group of pixels comprises at least two rows of pixels.
Referring to the examples shown in fig. 7 and 8, when scanning the pixels of the first area according to the scan signal of the first frequency, the driving circuit may output the scan signal to two rows of pixels in the same time period, then output the scan signal to two rows of pixels in the next time period, and so on until all the rows of pixels in the first area are scanned.
Under the current GPU computing capability, this approach can raise the refresh rate of the second region to a certain extent and thereby improve the overall refresh rate of the picture.
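The group-by-group schedule described above can be modeled as follows. This is a simplified sketch under our own assumptions (a single contiguous gaze band, groups of two rows, and all function names are ours):

```python
def scan_schedule(total_rows, gaze_start, gaze_end, group_size=2):
    """Rows addressed per time slot, scanning non-gaze rows in groups.

    Rows outside the gaze band share one time slot per group of group_size
    rows (driven together at the lower first frequency); each gaze-band row
    gets its own slot. Returns a list of row groups, one per time slot.
    """
    slots, row = [], 1
    while row <= total_rows:
        if gaze_start <= row <= gaze_end:
            slots.append([row])          # gaze band: one row per time slot
            row += 1
        else:
            # non-gaze rows share a slot, without crossing into the gaze band
            end = min(row + group_size - 1, total_rows)
            if row < gaze_start:
                end = min(end, gaze_start - 1)
            slots.append(list(range(row, end + 1)))
            row = end + 1
    return slots

slots = scan_schedule(total_rows=12, gaze_start=5, gaze_end=8)
# 8 slots instead of 12: [[1,2], [3,4], [5], [6], [7], [8], [9,10], [11,12]]
```

Fewer time slots per frame is what frees scan time for the gaze band, consistent with the refresh-rate benefit stated above.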
Based on the same inventive concept, embodiments of the present application provide a computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, implements any one of the image processing methods provided by the embodiments of the present application.
The computer readable medium includes, but is not limited to, any type of disk including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks, ROMs, RAMs, EPROMs (Erasable Programmable Read-Only Memory), EEPROMs, flash Memory, magnetic cards, or fiber optic cards. That is, a readable medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer).
The computer-readable storage medium provided by the embodiments of the present application is applicable to any one of the image processing methods above; details are not repeated here.
By applying the technical scheme of the embodiment of the application, at least the following beneficial effects can be realized:
1) The embodiments of the application can realize a smart-view function. Specifically, the visual gaze area and the non-gaze area of the user on the display panel are determined; the image information of the visual gaze area (that is, the second image information) keeps its original resolution and is output to the visual gaze area unchanged, while the image information of the non-gaze area (that is, the first image information) is compressed to reduce its data amount, so that the non-gaze area is output at a low resolution. The image quality of the visual gaze area is thus guaranteed while the overall data amount of the image to be displayed is reduced by cutting the data amount of the non-gaze area, which lightens the computational load on the GPU and lets the user obtain a better image-quality experience under existing hardware configurations.
2) When compressing and rendering the first image information, the embodiments of the application can compress and render the complete image information, which makes it convenient and fast to compress and render the first image information. Likewise, synthesizing the complete image information with the second image information makes it convenient and fast to combine the first and second image information and re-synthesize the complete image for output and display.
3) The embodiments of the application can scan the pixels of the visual gaze area with the higher-frequency scan signal (the second frequency) to guarantee the data writing speed, and hence the refresh rate, of the visual gaze area. For the non-gaze area, most of its pixels can be scanned with the lower-frequency scan signal (the first frequency), appropriately reducing the refresh rate of the non-gaze area without affecting the user's browsing experience, which also reduces the data transmission burden on the existing GPU.
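As a minimal illustration of the resolution compression applied to the non-gaze (first) image information, the sketch below simply subsamples every second pixel. The approach and names are our assumptions, not the patent's; a real implementation would filter before decimating to avoid aliasing.

```python
def compress_resolution(img, factor=2):
    """Reduce a 2D image's resolution by keeping every factor-th pixel
    in each dimension (naive subsampling, for illustration only)."""
    return [row[::factor] for row in img[::factor]]

img = [[r * 10 + c for c in range(4)] for r in range(4)]
small = compress_resolution(img, 2)   # 2x2: [[0, 2], [20, 22]]
```

The compressed image carries 1/factor² of the original pixel count, which is the data-amount reduction the benefit above relies on.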
Those skilled in the art will appreciate that the various operations, methods, and steps in the processes, actions, or solutions discussed in this application may be interchanged, modified, combined, or deleted. Other steps, measures, or schemes in the various operations, methods, or flows discussed in this application may also be alternated, altered, rearranged, decomposed, combined, or deleted. Further, steps, measures, or schemes in the prior art that correspond to the various operations, methods, or flows disclosed in this application may likewise be alternated, altered, rearranged, decomposed, combined, or deleted.
In the description of the present application, it is to be understood that the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance, or as implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless otherwise specified.
It should be understood that, although the steps in the flowcharts of the figures are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, their execution is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may comprise multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their execution order is likewise not necessarily sequential, and they may be performed in turns or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also fall within the protection scope of the present application.
Claims (10)
1. An image processing method applied to a first image processing apparatus, the method comprising:
acquiring visual gaze area information of a user, the visual gaze area information comprising an area range of the user's visual gaze area on a display panel;
determining the visual gaze area and a non-gaze area of the user on the display panel according to the visual gaze area information;
compressing the resolution of first image information corresponding to the non-gaze area; and
sending second image information corresponding to the visual gaze area and the compressed first image information to a second image processing device.
2. The image processing method of claim 1, wherein the compressing the resolution of the first image information corresponding to the non-gaze area comprises:
compressing the resolution of the complete image information of the image to be displayed; the full image information comprises the first image information;
and sending the compressed first image information to a second image processing device, including:
and sending the compressed complete image information to the second image processing device.
3. An image processing apparatus, as a first image processing apparatus, characterized by comprising:
a memory;
a processor electrically connected with the memory;
the memory stores a computer program for execution by the processor to implement the image processing method of any of claims 1-2.
4. An image processing method applied to a second image processing apparatus, the method comprising:
receiving first image information and second image information sent by first image processing equipment;
rendering the first image information and the second image information respectively;
synthesizing the rendered first image information and the rendered second image information;
and sending the synthesized image information to a display panel to enable the display panel to display the synthesized image information.
5. The image processing method of claim 4, wherein the receiving the first image information sent by the first image processing device comprises:
receiving complete image information of an image to be displayed, sent by the first image processing device, the complete image information comprising the first image information;
and rendering the first image information, including:
rendering the complete image information including the first image information;
and synthesizing the rendered first image information and the rendered second image information, including:
and synthesizing the rendered second image information and the rendered complete image information.
6. The image processing method according to claim 4 or 5, wherein the sending the synthesized image information to a display panel to cause the display panel to display the synthesized image information comprises:
sending the synthesized image information to a driving circuit in the display panel, enabling the driving circuit to scan pixels in a first area in the display panel according to a scanning signal of a first frequency, inputting data signals of the first image information to the pixels in the first area, scanning pixels in a second area in the display panel according to a scanning signal of a second frequency, and inputting data signals of the second image information to the pixels in the second area;
the first frequency is less than the second frequency;
the second region comprises a visual gaze region, the first region and the second region not overlapping.
7. The method according to claim 6, wherein the causing the driving circuit to scan the pixels of the first area in the display panel according to the scan signal of the first frequency comprises:
enabling the driving circuit to carry out group-by-group scanning on the plurality of groups of pixels of the first area according to the scanning signals of the first frequency; each group of pixels comprises at least two rows of pixels.
8. An image processing apparatus, as a second image processing apparatus, characterized by comprising:
a memory;
a graphics operation unit electrically connected with the memory;
the memory stores a computer program executed by the graphics operation unit to implement the image processing method of any one of claims 4-7.
9. A display system, characterized by comprising: a gaze area recognition device, a display panel, the first image processing apparatus of claim 3, and the second image processing apparatus of claim 8;
the gaze area recognition device is in communication connection with the first image processing apparatus, and the display panel is electrically connected with the second image processing apparatus;
the gaze area recognition device is configured to: determine an area range of a visual gaze area of a user on the display panel, and transmit visual gaze area information including the area range to the first image processing apparatus;
the display panel is used for displaying the synthesized image information output by the second image processing equipment.
10. A computer-readable storage medium, in which a computer program is stored, which computer program, when executed by a processor, implements: the image processing method of any one of claims 1-2, or the image processing method of any one of claims 4-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010996736.4A CN112102172A (en) | 2020-09-21 | 2020-09-21 | Image processing method, image processing apparatus, display system, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010996736.4A CN112102172A (en) | 2020-09-21 | 2020-09-21 | Image processing method, image processing apparatus, display system, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112102172A true CN112102172A (en) | 2020-12-18 |
Family
ID=73754689
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010996736.4A Withdrawn CN112102172A (en) | 2020-09-21 | 2020-09-21 | Image processing method, image processing apparatus, display system, and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112102172A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113242386A (en) * | 2021-05-26 | 2021-08-10 | 北京京东方光电科技有限公司 | Image transmission control device, display device, and image transmission control method |
CN114217691A (en) * | 2021-12-13 | 2022-03-22 | 京东方科技集团股份有限公司 | Display driving method and device, electronic equipment and intelligent display system |
CN114217691B (en) * | 2021-12-13 | 2023-12-26 | 京东方科技集团股份有限公司 | Display driving method and device, electronic equipment and intelligent display system |
WO2023179510A1 (en) * | 2022-03-25 | 2023-09-28 | 北京字跳网络技术有限公司 | Image compression and transmission method and apparatus, electronic device, and storage medium |
WO2024066661A1 (en) * | 2022-09-27 | 2024-04-04 | 万有引力(宁波)电子科技有限公司 | Image processor, image processing method, storage medium and extended reality display apparatus |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112102172A (en) | Image processing method, image processing apparatus, display system, and storage medium | |
US10417832B2 (en) | Display device supporting configurable resolution regions | |
US20180018931A1 (en) | Splicing display system and display method thereof | |
EP3804347B1 (en) | A method for processing image data with reduced transmission bandwidth for display | |
US11232767B2 (en) | Image display method, display system and computer-readable storage medium | |
WO2018176917A1 (en) | Pixel charging method, circuit, display device and computer storage medium | |
JP7278766B2 (en) | Image processing device, image processing method and program | |
US10621902B2 (en) | Driving circuit for display screen with adjustable color depth bit value, display method and display device | |
CN115842907A (en) | Rendering method, computer product and display device | |
US20200265587A1 (en) | Image processing method, image processing apparatus and display device | |
CN109791431A (en) | Viewpoint rendering | |
US20230267578A1 (en) | Virtual reality display method, device and storage medium | |
JP2011123084A (en) | Image display system and image display program | |
US11917167B2 (en) | Image compression method and apparatus, image display method and apparatus, and medium | |
KR102028997B1 (en) | Head mount display device | |
US11250819B2 (en) | Foveated imaging system | |
CN112887646A (en) | Image processing method and device, augmented reality system, computer device and medium | |
US11983842B2 (en) | Method and system for displaying an ultrasound image in response to screen size | |
CN114217691B (en) | Display driving method and device, electronic equipment and intelligent display system | |
JP6965374B2 (en) | Image processing device and display image generation method | |
CN108881877B (en) | Display processing device, display processing method thereof and display device | |
US10440345B2 (en) | Display control methods and apparatuses | |
CN112150345A (en) | Image processing method and device, video processing method and sending card | |
DE112011105951T5 (en) | Resolution loss reduction for 3D ads | |
CN118037557A (en) | Image data processing method and related equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20201218 |