WO2024066659A1 - Image processor, processing method, storage medium and augmented reality display apparatus - Google Patents


Info

Publication number
WO2024066659A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
display
processing
software
compression
Application number
PCT/CN2023/106699
Other languages
French (fr)
Chinese (zh)
Inventor
贾韬
王超昊
王爽
张睿
Original Assignee
万有引力(宁波)电子科技有限公司
Application filed by 万有引力(宁波)电子科技有限公司
Publication of WO2024066659A1 publication Critical patent/WO2024066659A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/122Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues

Definitions

  • the present invention relates to extended reality display technology, and in particular to an image processor, an image processing method, a computer-readable storage medium, and an extended reality display device.
  • Extended Reality (XR) display technology refers to technology that combines the real and the virtual through computers to create a virtual environment capable of human-machine interaction, including but not limited to Augmented Reality (AR) display technology, Virtual Reality (VR) display technology, and Mixed Reality (MR) display technology.
  • Extended Reality display technology can bring the experiencer an immersive feeling of seamless transition between the virtual world and the real world.
  • in response to the high-resolution display requirements in the XR field, the display resolution of existing XR devices generally cannot reach the resolution required for 4K display. When the resolution within the human eye's gaze range is less than 40 pixels per degree, users will clearly see individual pixels and their boundaries, which degrades the display effect.
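The 40-pixels-per-degree threshold can be checked with simple arithmetic: angular resolution is roughly the panel's horizontal pixel count divided by the horizontal field of view. A minimal sketch, assuming a hypothetical 3840-pixel-wide per-eye panel and a 100-degree field of view (both values are illustrative, not taken from the patent):

```python
def pixels_per_degree(horizontal_pixels: int, horizontal_fov_deg: float) -> float:
    """Approximate angular resolution, assuming pixels spread evenly over the FOV."""
    return horizontal_pixels / horizontal_fov_deg

# Hypothetical headset: a 4K-class panel (3840 px wide) over a 100-degree FOV
# still falls short of the ~40 ppd threshold cited above (3840 / 100 = 38.4).
assert pixels_per_degree(3840, 100.0) < 40
```

By the same arithmetic, covering 100 degrees at 40 ppd would require roughly 4000 horizontal pixels per eye, which is why the text treats 4K-equivalent resolution as the target.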
  • the art is in urgent need of an image processing technology that can meet the high-resolution display requirements of XR display devices based on limited software and hardware resources.
  • the present invention provides an image processor, an image processing method, a computer-readable storage medium, and an extended reality display device, which can provide a compression processing function for high-resolution display of local areas according to the user's eye movement signal, and improve the storage, processing and transmission efficiency of data by setting and reusing image processing hardening units, thereby meeting the high-resolution display requirements of XR display devices based on limited software and hardware resources.
  • the image processor provided according to the first aspect of the present invention includes a display pipeline.
  • the display pipeline integrates a software processing unit and at least one image processing hardening unit, and is configured to: obtain the user's eye movement signal and the first image to be processed; use the first software configured in the software processing unit and the at least one image processing hardening unit to perform distortion correction processing on the first image; according to the eye movement signal, use the second software configured in the software processing unit and the at least one image processing hardening unit to compress the first image; and transmit the second image obtained after the distortion correction processing and the compression processing to the display terminal for image display.
  • the at least one image processing hardening unit is selected from at least one of a cache memory, a weighted summation circuit, a mean value calculation circuit, a filtering circuit, and a pixel position relationship mapping circuit.
  • the display pipeline is connected to a main processor of the augmented reality display device.
  • the step of obtaining the user's eye movement signal and the first image to be processed includes: obtaining a real scene image and/or a virtual image compressed by gaze point rendering via the main processor, wherein the gaze point rendering compression is implemented based on the eye movement signal.
  • the image processor also includes display driver software and/or a firmware computing platform.
  • the display driver software and/or the firmware computing platform are respectively connected to the eye tracker and the display pipeline.
  • the step of obtaining the user's eye movement signal and the first image to be processed includes: obtaining the eye movement signal from the eye tracker via the display driver software and/or the firmware computing platform, and performing eye tracking calculations to determine the gaze point position; updating the gaze point information according to the gaze point position to construct an updated compression model; and transmitting the compression parameters of the updated compression model to the display pipeline.
  • the step of compressing the first image according to the eye movement signal using the second software configured in the software processing unit and the at least one image processing hardening unit includes: determining the coordinates of multiple partitions about the gaze point position and the downsampling ratio of each of the partitions according to the compression parameters of the updated compression model; and performing the compression processing on the first image according to the coordinates and the downsampling ratio of each of the partitions using the second software configured in the software processing unit in cooperation with the at least one image processing hardening unit.
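The partition-determination step above can be sketched as follows. The two-level layout (one full-resolution focus box plus four peripheral bands), the `fovea_radius` default, and the fixed 4x peripheral ratio are illustrative assumptions, not values fixed by the claims:

```python
def build_partitions(gaze_x, gaze_y, width, height, fovea_radius=256):
    """Split a frame into a focus partition around the gaze point and
    peripheral partitions, each tagged with a downsampling ratio.

    Returns a list of (x0, y0, x1, y1, ratio) tuples; ratio 1 means
    full resolution. Sizes and ratios here are illustrative only."""
    x0 = max(0, int(gaze_x) - fovea_radius)
    y0 = max(0, int(gaze_y) - fovea_radius)
    x1 = min(width, int(gaze_x) + fovea_radius)
    y1 = min(height, int(gaze_y) + fovea_radius)
    partitions = [(x0, y0, x1, y1, 1)]  # focus partition: no downsampling
    # Carve the remainder of the frame into four non-overlapping bands,
    # each downsampled 4x in this sketch.
    if y0 > 0:
        partitions.append((0, 0, width, y0, 4))       # top band
    if y1 < height:
        partitions.append((0, y1, width, height, 4))  # bottom band
    if x0 > 0:
        partitions.append((0, y0, x0, y1, 4))         # left band
    if x1 < width:
        partitions.append((x1, y0, width, y1, 4))     # right band
    return partitions

parts = build_partitions(1920, 1080, 3840, 2160)
```

With the gaze at frame centre this yields one 512x512 focus box and four peripheral bands, matching the claim's structure of per-partition coordinates plus per-partition downsampling ratios.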
  • the software processing unit is further configured with a third software
  • the display pipeline is further configured to: determine the coordinates of a plurality of partitions about the gaze point position and the upsampling ratio of each of the partitions according to the compression parameters of the updated compression model; and perform super-resolution processing on the first image by using the third software configured in the software processing unit in cooperation with the at least one image processing hardening unit according to the coordinates and the upsampling ratio of each of the partitions.
  • the plurality of partitions have the same and/or different downsampling ratios, and/or the plurality of partitions have the same and/or different upsampling ratios.
  • the display terminal is configured with a decompression module.
  • the display pipeline and the display driver software and/or the firmware computing power platform are respectively connected to the decompression module.
  • the step of transmitting the second image obtained after the distortion correction processing and the compression processing to the display terminal for image display includes: transmitting the second image after the distortion correction processing and the compression processing to the decompression module via the display pipeline; transmitting the eye movement signal, the gaze point position and/or the compression parameters of the updated compression model to the decompression module via the display driver software and/or the firmware computing power platform; using the decompression module to decompress the second image after the distortion correction processing and the compression processing according to the eye movement signal, the gaze point position and/or the compression parameters of the updated compression model, to obtain a third image; and displaying the image on the display terminal according to the decompressed third image.
  • a decompression module is also integrated in the display pipeline.
  • the display driver software and/or the firmware computing power platform are connected to the decompression module.
  • the software processing unit and the at least one image processing hardening unit are connected to the display terminal via the decompression module.
  • the step of transmitting the second image obtained after the distortion correction processing and the compression processing to the display terminal for image display includes: transmitting the second image after the distortion correction processing and the compression processing to the decompression module via the software processing unit and the at least one image processing hardening unit; transmitting the compression parameters of the updated compression model to the decompression module via the display driver software and/or the firmware computing power platform; using the decompression module, according to the compression parameters of the updated compression model, decompressing the second image after the distortion correction processing and the compression processing to obtain a third image; and transmitting the third image after the decompression processing to the display terminal for image display.
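A minimal sketch of the decompression side of these embodiments, assuming the periphery was downsampled at a known ratio. Nearest-neighbour replication stands in for whatever reconstruction filter the patent's decompression module actually applies:

```python
def decompress_row(compressed_row, ratio):
    """Re-expand one downsampled scanline segment back to display resolution
    by pixel replication (nearest neighbour). A stand-in for the decompression
    module's real filter; `ratio` comes from the compression parameters that
    the display pipeline or display driver software forwards alongside the image."""
    out = []
    for px in compressed_row:
        out.extend([px] * ratio)
    return out

# A 4x-compressed peripheral segment of 3 stored pixels expands to 12 display pixels.
expanded = decompress_row([10, 20, 30], 4)
assert len(expanded) == 12
```

The focus partition (ratio 1) passes through unchanged; only peripheral partitions need re-expansion before the pixel array is driven.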
  • the image processing method provided according to the second aspect of the present invention comprises the following steps: acquiring the user's eye movement signal and a first image to be processed; using a first software configured in a software processing unit integrated in a display pipeline, and at least one image processing hardening unit integrated therein, to perform distortion correction processing on the first image; based on the eye movement signal, using a second software configured in the software processing unit, and the at least one image processing hardening unit, to compress the first image; and transmitting the second image that has undergone the distortion correction processing and the compression processing to a display terminal for image display.
  • the computer-readable storage medium provided in the third aspect of the present invention stores computer instructions, which, when executed by a processor, implement the image processing method provided in the second aspect of the present invention.
  • the above-mentioned extended reality display device includes an eye tracker, a main processor, a coprocessor and a display terminal.
  • the eye tracker is used to collect the user's eye movement signals.
  • the main processor outputs a first image that has been or has not been compressed by gaze point rendering, wherein the gaze point rendering compression is implemented based on the eye movement signal.
  • the coprocessor can use the above-mentioned image processor provided by the first aspect of the present invention, wherein the image processor is respectively connected to the eye tracker and the main processor to obtain the first image and the eye movement signal.
  • the display terminal is connected to the image processor to obtain and display the second image that has been processed by the image processor for distortion correction and compression.
  • FIG. 1 shows a schematic diagram of an extended reality display device according to some embodiments of the present invention.
  • FIG. 2 is a schematic flow chart showing an image processing method according to some embodiments of the present invention.
  • FIG3 is a schematic diagram showing a process of recompressing an image according to some embodiments of the present invention.
  • FIG. 4 shows a schematic diagram of local image compression processing provided according to some embodiments of the present invention.
  • the terms “installed”, “connected to”, and “connected” should be understood in a broad sense: for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; and it may be a direct connection, an indirect connection through an intermediate medium, or internal communication between two components.
  • although the terms “first”, “second”, “third”, etc. may be used herein to describe various components, regions, layers and/or parts, these components, regions, layers and/or parts should not be limited by these terms; the terms are only used to distinguish different components, regions, layers and/or parts. Therefore, a first component, region, layer and/or part discussed below may also be referred to as a second component, region, layer and/or part without departing from some embodiments of the present invention.
  • the display resolution of existing XR devices generally cannot reach the resolution required for 4K display.
  • when the resolution within the human eye's gaze range is less than 40 pixels per degree, the user will clearly see individual pixels and their boundaries, which degrades the display effect.
  • some improved technologies for multi-screen superposition or multi-image nested output have been proposed in the field.
  • the solution of using mechanical structures to manipulate multiple screens to achieve high-definition gaze point rendering has the defects of high hardware complexity and large physical volume.
  • the solution of using multi-image nested transmission for gaze point compression will produce obvious jagged and flickering phenomena at the boundary between high-definition images and low-definition images, which will also affect the display effect.
  • the present invention provides an image processor, an image processing method, a computer-readable storage medium, and an extended reality display device, which can provide compression processing functions for high-resolution display of local areas according to the user's eye movement signals, and improve the storage, processing and transmission efficiency of data by setting and reusing image processing hardening units, thereby meeting the high-resolution display requirements of XR display devices based on limited software and hardware resources.
  • the image processing method provided in the second aspect of the present invention can be implemented via the image processor provided in the first aspect of the present invention.
  • the image processor can be independently configured in the extended reality display device provided in the fourth aspect of the present invention in the form of a coprocessor chip, or can be integrated into the main processors such as the central processing unit (CPU) and the graphics processing unit (GPU) of the extended reality display device provided in the fourth aspect of the present invention in the form of software programs and hardware units.
  • the image processor provided in the first aspect of the present invention may be configured with or connected to a processing unit and a storage unit of a software program.
  • the storage unit includes but is not limited to the computer-readable storage medium provided in the third aspect of the present invention, on which computer instructions are stored.
  • the processing unit is connected to the storage unit and is configured to execute the computer instructions stored in the storage unit to implement the image processing method provided in the second aspect of the present invention.
  • the extended reality display device can adopt a system architecture of a coprocessor.
  • these embodiments of the image processing method are only some non-limiting implementation methods provided by the present invention, which are intended to clearly demonstrate the main concept of the present invention and provide some specific solutions that are convenient for the public to implement, rather than to limit all functions or all working modes of the image processor and the extended reality display device.
  • the image processor and the extended reality display device are only a non-limiting implementation method provided by the present invention, and do not constitute a limitation on the execution subject of each step in these image processing methods.
  • Figure 1 shows a schematic diagram of an extended reality display device provided according to some embodiments of the present invention.
  • Figure 2 shows a flow chart of an image processing method provided according to some embodiments of the present invention.
  • the extended reality display device may be configured with a coprocessor 10, an eye tracker 20, a main processor 30, a display driver chip 40, and a display terminal 50.
  • the coprocessor 10 may select the above-mentioned image processor provided in the first aspect of the present invention, which is configured with a display pipeline.
  • the display pipeline integrates a software processing unit and at least one image processing hardening unit, and is configured to use different software programs and the same image processing hardening unit to perform image processing processes such as image distortion correction and image compression, thereby improving the storage, processing and transmission efficiency of data, and meeting the high-resolution display requirements of the XR display device based on limited software and hardware resources.
  • the eye tracker 20 is used to collect eye movement signals such as the gaze position and gaze direction of the user.
  • the main processor 30 may be selected from common processors such as a central processing unit (CPU) and a graphics processing unit (GPU), and is used to output a first image to be processed by the coprocessor 10.
  • the display driver chip 40 can be integrated in the display terminal 50, and is used to obtain the second image processed by the coprocessor 10, and display the image via the pixel array circuit and OLED/LED pixel array configured in the display terminal 50.
  • the software processing unit configured in the display pipeline of the coprocessor 10 includes but is not limited to a distortion correction unit and/or an image compression unit.
  • the image processing hardening unit configured in the display pipeline can be selected from at least one of a transistor-level cache memory, a weighted summation circuit, a mean value calculation circuit, a filtering circuit, and a pixel position relationship mapping circuit, and is used to cache multiple pixel data in the first image of the current frame and/or several previous historical frames, and/or perform weighted summation, mean value calculation, filtering, and/or pixel position relationship mapping hardening calculations on these pixel data.
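Functionally, the hardening units form a small fixed repertoire of operations that the different software stages (distortion correction, compression, super-resolution) reuse with different parameters. A sketch of three of them as plain functions; the real units are, of course, transistor-level circuits, and the dict-based lookup table here is only an illustrative stand-in for the pixel position relationship mapping circuit:

```python
def weighted_sum(pixels, weights):
    """Weighted summation circuit: one multiply-accumulate pass over cached pixels."""
    return sum(p * w for p, w in zip(pixels, weights))

def mean_value(pixels):
    """Mean value calculation circuit: average of a group of cached pixels."""
    return sum(pixels) / len(pixels)

def map_position(x, y, lut):
    """Pixel position relationship mapping circuit: maps an output coordinate
    to its source coordinate(s); here a plain dict plays the role of the LUT."""
    return lut[(x, y)]

# The same units serve different stages with different parameters:
assert weighted_sum([10, 20], [0.25, 0.75]) == 17.5   # e.g. interpolation in correction
assert mean_value([10, 20, 30, 40]) == 25.0           # e.g. block averaging in compression
```

Reusing these few primitives across stages is what the text credits with raising the reuse rate of pixel cache data and lowering the software load.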
  • the display pipeline may first obtain the user's eye movement signal and the first image to be processed.
  • the eye movement signal includes but is not limited to data such as the user's eye deflection angle, gaze position, and gaze direction.
  • the display pipeline may be directly connected to the eye tracker 20 and directly obtain the eye movement signal from the eye tracker 20, or it may be indirectly connected to the eye tracker 20 via an image processing unit such as the GPU of the main processor 30, and the user's eye movement signal may be indirectly obtained synchronously via the main processor 30.
  • the first image may be an original virtual rendering image generated by an image processing unit such as the GPU of the main processor 30.
  • the main processor 30 may also be connected to the eye tracker 20, and be configured to obtain the user's eye movement signal from the eye tracker 20, and first perform gaze point rendering compression on the generated original virtual rendering image according to the eye movement signal, and then send the compressed image after gaze point rendering compression to the coprocessor 10, so as to reduce the data transmission load and data processing load of the entire architecture.
  • the display pipeline can use the image distortion correction unit (i.e., the first software) configured therein and at least one image processing hardening unit integrated therein to perform distortion correction processing on the acquired first image.
  • the image distortion correction unit may first determine the pixel data and processing parameters required for the image distortion correction process.
  • the pixel data may preferably be determined based on the user's eye movement signal.
  • the image distortion correction unit (i.e., the first software) may obtain the processing parameters from each corresponding memory, and obtain the pixel cache data required for the image distortion correction processing from each corresponding cache memory. The processing parameters and pixel cache data are then sequentially input into one or more of the weighted summation circuit, the mean value calculation circuit, the filtering circuit, and the pixel position relationship mapping circuit, and the pixel cache data are subjected to weighted summation, mean value calculation, filtering, and/or pixel position relationship mapping hardening calculations to obtain the corresponding hardening calculation results.
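The cooperation described above, where the pixel position relationship mapping yields (possibly fractional) source coordinates and the weighted summation circuit blends cached neighbouring pixels, amounts to an image remap. A sketch with bilinear weights, assuming the coordinate mapping itself is supplied by lens calibration (not shown):

```python
def remap_pixel(image, src_x, src_y):
    """Distortion-correct one output pixel: the position-mapping step has
    produced fractional source coordinates (src_x, src_y); the weighted-sum
    step blends the four cached neighbours with bilinear weights.
    `image` is a list of rows; coordinates must stay inside the last row/column."""
    x0, y0 = int(src_x), int(src_y)
    fx, fy = src_x - x0, src_y - y0
    p00 = image[y0][x0]
    p01 = image[y0][x0 + 1]
    p10 = image[y0 + 1][x0]
    p11 = image[y0 + 1][x0 + 1]
    return (p00 * (1 - fx) * (1 - fy) + p01 * fx * (1 - fy)
            + p10 * (1 - fx) * fy + p11 * fx * fy)

img = [[0, 100],
       [100, 200]]
# Sampling at the exact centre of the 2x2 block averages all four neighbours.
assert remap_pixel(img, 0.5, 0.5) == 100.0
```

The four neighbour reads correspond to the cache-memory accesses, and the final expression is a single weighted-summation pass, which is why one hardened circuit can serve every output pixel.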
  • the coprocessor 10 may also be preferably configured with a display driver software and/or firmware computing power platform.
  • the display driver software and/or firmware computing power platform are respectively connected to the eye tracker 20 and the display pipeline.
  • the coprocessor 10 may first obtain the user's eye movement signal from the eye tracker 20 via the display driver software and/or firmware computing power platform, and perform eye tracking calculations to determine the gaze point position.
  • the display driver software and/or firmware computing power platform may update the gaze point information within the system according to the gaze point position to construct an updated compression model, and transmit the compression parameters of the updated compression model to the display pipeline for it to compress the first image.
  • in response to acquiring the first image and the eye movement signal (or the compression parameters associated with the user's eye movement signal), the display pipeline can compress the first image according to the eye movement signal using the image compression unit configured therein (i.e., the second software) and at least one image processing hardening unit integrated therein.
  • in response to obtaining the compression parameters associated with the user's eye movement signal and the first image to be processed, the image compression unit (i.e., the second software) can first divide the first image into multiple partitions based on the user's gaze point position according to the compression parameters of the updated compression model, and respectively determine the coordinate range of each partition and the downsampling magnification of each non-focus partition away from the gaze point position. Afterwards, the image compression unit (i.e., the second software) can obtain the processing parameters of the downsampling operation from each corresponding memory according to the coordinates of each pixel in each partition, and obtain the pixel cache data required for the downsampling processing from each corresponding cache memory.
  • the image compression unit can sequentially input the obtained processing parameters and the pixel cache data of each pixel into one or more of the weighted summation circuit, the mean calculation circuit, the filtering circuit, and the pixel position relationship mapping circuit, and perform hardening calculations of weighted summation, mean calculation, filtering, and/or pixel position relationship mapping on these pixel cache data to obtain the first hardening calculation result based on the downsampling process.
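For non-focus partitions, the mean value calculation circuit can realize the downsampling described above as block averaging. A sketch, assuming the downsampling ratio divides the partition dimensions evenly:

```python
def downsample(block_rows, ratio):
    """Average each ratio x ratio tile of a partition (block_rows is a list
    of equal-length pixel rows); this is the mean-value hardening path of the
    compression step. Assumes ratio divides both dimensions."""
    h, w = len(block_rows), len(block_rows[0])
    out = []
    for by in range(0, h, ratio):
        row = []
        for bx in range(0, w, ratio):
            tile = [block_rows[y][x]
                    for y in range(by, by + ratio)
                    for x in range(bx, bx + ratio)]
            row.append(sum(tile) / len(tile))  # one mean-value circuit pass per tile
        out.append(row)
    return out

# A 2x downsample of a 2x4 peripheral partition keeps one mean per 2x2 tile.
assert downsample([[1, 3, 5, 7],
                   [1, 3, 5, 7]], 2) == [[2.0, 6.0]]
```

Each tile needs only one pass through the hardened mean-value path, so the per-pixel software work reduces to gathering the cached tile data.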
  • the display pipeline of the coprocessor 10 may preferably be configured with a super-resolution unit (i.e., the third software), so as to perform re-compression processing on the first image together with the image compression unit (i.e., the second software).
  • in response to obtaining the compression parameters associated with the user's eye movement signal and the first image to be processed, the super-resolution unit (i.e., the third software) can first divide the first image into multiple partitions based on the user's gaze point position according to the compression parameters of the updated compression model, and respectively determine the coordinate range of each partition, as well as the upsampling magnification of each focus partition containing or adjacent to the gaze point position. Afterwards, the super-resolution unit (i.e., the third software) can obtain the processing parameters of the upsampling operation from each corresponding memory according to the coordinates of each pixel in each partition, and obtain the pixel cache data required for super-resolution processing from each corresponding cache memory.
  • the pixel cache data includes but is not limited to the cache data of at least one nearby pixel of the current frame, and the cache data of these nearby pixels in at least one historical frame.
  • the super-resolution unit can sequentially input the acquired processing parameters and the pixel cache data of each pixel into one or more of the weighted summation circuit, the mean value calculation circuit, the filtering circuit, and the pixel position relationship mapping circuit, and perform the weighted summation, mean value calculation, filtering, and/or pixel position relationship mapping hardening calculations on the pixel cache data to obtain a second hardening calculation result based on the super-resolution processing.
  • At least one focus partition in the first image, containing or close to the gaze point position, is super-resolved based on the same and/or different upsampling ratios, thereby achieving a 4K-equivalent display resolution of 40 pixels per degree in these focus partitions.
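A crude stand-in for this super-resolution path: upsample a focus-partition scanline, then blend each output pixel with the cached value from a historical frame, mirroring the current-frame plus historical-frame cache data described above. The replication upsampling and the 50/50 blend weight are assumptions; the patent's filtering circuit would apply its own kernel:

```python
def upscale_with_history(row, history_row, ratio, blend=0.5):
    """Upsample one focus-partition scanline by replication, then blend each
    output pixel with the cached value from the previous (already upscaled)
    frame. A simple temporal super-resolution stand-in: `blend` weights the
    current frame against the historical cache."""
    upscaled = []
    for px in row:
        upscaled.extend([px] * ratio)  # spatial upsampling step
    return [blend * cur + (1 - blend) * hist  # weighted-summation step
            for cur, hist in zip(upscaled, history_row)]

cur = [10, 30]
hist = [12, 8, 28, 32]  # previous frame's pixels, already at output resolution
out = upscale_with_history(cur, hist, 2)
assert out == [11.0, 9.0, 29.0, 31.0]
```

The historical-frame contribution is what lets the focus partition recover detail beyond the stored resolution, which is the point of caching pixels from several previous frames.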
  • at least one non-focus partition in the first image far from the gaze point is compressed based on the same and/or different downsampling ratios, thereby reducing the data processing load and/or data transmission load of these non-focus partitions.
  • compared with a pure software solution for distortion correction and image compression, which must calculate and cache the pixel data of each relevant pixel frame by frame and partition by partition, the present invention effectively improves the reuse rate of pixel cache data for each frame and each partition by designing and reusing the cache memory, weighted summation circuit, mean value calculation circuit, filtering circuit, and pixel position relationship mapping circuit, and effectively reduces the data processing load of the software unit, thereby reducing the overall data processing and transmission load of the coprocessor 10, so that the coprocessor 10 can meet the high-resolution display requirements of XR display devices based on limited software and hardware resources.
  • the coprocessor 10 may transmit the second image to the display terminal 50 for high-resolution display of the augmented reality image.
  • a decompression module may be preferably integrated in the display pipeline of the coprocessor 10.
  • the above-mentioned display driver software and/or firmware computing power platform is connected to the decompression module.
  • the software processing unit and at least one image processing hardening unit configured in the display pipeline are also connected to the display terminal 50 via the decompression module.
  • the decompression module configured in the coprocessor 10 can first obtain the second image after distortion correction processing and compression processing through the software processing unit and at least one image processing hardening unit integrated in the display pipeline, and then obtain the compression parameters of the updated compression model through the above-mentioned display driver software and/or firmware computing power platform, so as to decompress the second image after distortion correction processing and compression processing according to the compression parameters of the updated compression model to obtain the third image.
  • the coprocessor 10 can transmit the decompressed third image to the display terminal 50 for high-resolution display of the extended reality image.
  • the above-mentioned decompression module can also be preferably configured on the display terminal 50 or its display driver chip 40.
  • the decompression module can be connected to the display pipeline, display driver software and/or firmware computing platform of the coprocessor 10 respectively.
  • the decompression module configured on the display driver chip 40 can first obtain the second image after distortion correction processing and compression processing through the software processing unit and at least one image processing hardening unit integrated in the display pipeline, and then obtain the user's eye movement signal, gaze point position and/or updated compression model compression parameters through the above-mentioned display driver software and/or firmware computing platform, so as to decompress the second image after distortion correction processing and compression processing according to the eye movement signal, gaze point position and/or updated compression model compression parameters to obtain the third image.
  • the display driver chip 40 can directly drive the pixel array circuit of the display terminal 50 such as the OLED display screen/LED display screen according to the decompressed third image, so as to display the extended reality image with high resolution on the display terminal 50 .
  • the decompression module can be configured in a pixel array circuit of the display terminal 50.
  • in this way, the image data transmitted in the entire architecture of the extended reality display device remains compressed by gaze point rendering, which can further reduce the data transmission load and data processing load of the entire architecture.
  • the working principle of the decompression module configured in the pixel array circuit of the display terminal 50 is similar to that of the embodiment in which it is configured in the display driver chip 40, and will not be repeated here.
  • the above-mentioned image processor provided in the first aspect of the present invention can also be integrated into the main processor units such as the central processing unit (CPU) and graphics processing unit (GPU) of the above-mentioned extended reality display device provided in the fourth aspect of the present invention in the form of software programs and hardware units to achieve the same technical effects, which will not be repeated here.
  • the various illustrative logic modules and circuits described in conjunction with the embodiments disclosed herein may be implemented using a general purpose processor, an NPU (AI network model computing acceleration processor), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate logic, or discrete hardware components.
  • the functions described herein may also be implemented or performed by any combination of the above devices designed to perform those functions.
  • a general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
  • the processor may also be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors cooperating with a DSP core, or any other such configuration.
  • the steps of the method or algorithm described in conjunction with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two.
  • the software module may reside in a RAM memory, a flash memory, a ROM memory, an EPROM memory, an EEPROM memory, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to a processor so that the processor can read and write information from/to the storage medium.
  • a storage medium may be integrated into a processor.
  • the processor and the storage medium may reside in an ASIC.
  • the ASIC may reside in a user terminal.
  • the processor and the storage medium may reside in a user terminal as discrete components.
  • the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software as a computer program product, each function may be stored on or transmitted over a computer-readable medium as one or more instructions or code.
  • Computer-readable media include both computer storage media and communication media, including any media that facilitates the transfer of a computer program from one place to another. Storage media may be any available media that can be accessed by a computer. As an example and not limitation, such a computer-readable medium may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, disk storage or other magnetic storage device, or any other medium that can be used to carry or store the desired program code in the form of an instruction or data structure and can be accessed by a computer.
  • any connection is also properly referred to as a computer-readable medium.
  • for example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
  • Disk and disc as used herein include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, wherein disk often reproduces data magnetically, while disc reproduces data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.


Abstract

Provided in the present invention are an image processor, an image processing method, a storage medium and an augmented reality display apparatus. The image processor comprises a display pipeline. A software processing unit and at least one image processing hardening unit are integrated in the display pipeline, which is configured to: acquire an eye movement signal of a user and a first image to be processed; by using a first software configured in the software processing unit, and the at least one image processing hardening unit, perform distortion correction processing on the first image; according to the eye movement signal, perform compression processing on the first image by using a second software configured in the software processing unit, and the at least one image processing hardening unit; and transmit to a display terminal a second image obtained by means of the distortion correction processing and the compression processing, so as to perform image display.

Description

Image processor, processing method, storage medium and extended reality display device
This application claims priority to the Chinese patent application with filing date September 27, 2022, application number 202211184446.5, and titled "Image Processor, Processing Method, Storage Medium and Extended Reality Display Device".
Technical Field
The present invention relates to extended reality display technology, and in particular to an image processor, an image processing method, a computer-readable storage medium, and an extended reality display device.
Background
Extended Reality (XR) display technology refers to technology that combines the real and the virtual through computers to create a virtual environment supporting human-computer interaction, including but not limited to Augmented Reality (AR) display technology, Virtual Reality (VR) display technology, and Mixed Reality (MR) display technology. By integrating these three visual interaction technologies, XR display technology can give the user an immersive sense of seamless transition between the virtual world and the real world.
In view of the high-resolution display requirements of the XR field, the display resolution of existing XR devices generally cannot reach the resolution required for 4K display. When the resolution within the human eye's gaze range is less than 40 pixels per degree, the user can clearly see individual pixels and their boundaries, which degrades the display effect.
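The 40 pixels-per-degree figure above can be made concrete with a small back-of-the-envelope calculation. The field-of-view value below is an assumption for illustration, not a figure from this application:

```python
# Illustrative arithmetic: pixels needed across the field of view
# at a given angular pixel density (pixels per degree, ppd).

def required_horizontal_pixels(fov_degrees: float, ppd: float) -> int:
    """Horizontal pixel count required for a field of view at a given ppd."""
    return round(fov_degrees * ppd)

# Hypothetical headset with a 100-degree horizontal field of view,
# held at the 40 ppd threshold below which pixel boundaries become visible.
print(required_horizontal_pixels(100.0, 40.0))  # 4000
```

At roughly 4000 horizontal pixels per eye, the panel is already in 4K-class territory, which is why uniform full-resolution rendering quickly exceeds the capability of existing XR devices.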
In order to overcome the above defects of the prior art, the art urgently needs an image processing technology that can meet the high-resolution display requirements of XR display devices based on limited software and hardware resources.
Summary of the Invention
A brief summary of one or more aspects is given below to provide a basic understanding of these aspects. This summary is not an exhaustive overview of all contemplated aspects, and is neither intended to identify key or critical elements of all aspects nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description given later.
In order to overcome the above defects of the prior art, the present invention provides an image processor, an image processing method, a computer-readable storage medium, and an extended reality display device, which can provide a compression processing function for high-resolution display of a local area according to the user's eye movement signal, and improve the efficiency of data storage, processing and transmission by configuring and reusing image processing hardening units, thereby meeting the high-resolution display requirements of XR display devices based on limited software and hardware resources.
Specifically, the image processor provided according to the first aspect of the present invention includes a display pipeline. A software processing unit and at least one image processing hardening unit are integrated in the display pipeline, which is configured to: obtain the user's eye movement signal and a first image to be processed; perform distortion correction processing on the first image using first software configured in the software processing unit and the at least one image processing hardening unit; compress the first image according to the eye movement signal, using second software configured in the software processing unit and the at least one image processing hardening unit; and transmit the second image obtained through the distortion correction processing and the compression processing to a display terminal for image display.
Furthermore, in some embodiments of the present invention, the at least one image processing hardening unit is selected from at least one of a cache memory, a weighted summation circuit, a mean value calculation circuit, a filtering circuit, and a pixel position relationship mapping circuit.
Furthermore, in some embodiments of the present invention, the display pipeline is connected to a main processor of the extended reality display device. The step of obtaining the user's eye movement signal and the first image to be processed includes: obtaining, via the main processor, a real scene image and/or a virtual image compressed by gaze point rendering, wherein the gaze point rendering compression is implemented based on the eye movement signal.
Furthermore, in some embodiments of the present invention, the image processor further includes display driver software and/or a firmware computing platform. The display driver software and/or the firmware computing platform are respectively connected to an eye tracker and the display pipeline. The step of obtaining the user's eye movement signal and the first image to be processed includes: obtaining the eye movement signal from the eye tracker via the display driver software and/or the firmware computing platform, and performing eye tracking calculation to determine a gaze point position; updating gaze point information according to the gaze point position to construct an updated compression model; and transmitting the compression parameters of the updated compression model to the display pipeline.
Furthermore, in some embodiments of the present invention, the step of compressing the first image according to the eye movement signal using the second software configured in the software processing unit and the at least one image processing hardening unit includes: determining, according to the compression parameters of the updated compression model, the coordinates of a plurality of partitions relative to the gaze point position and the downsampling ratio of each partition; and performing the compression processing on the first image according to the coordinates and downsampling ratio of each partition, using the second software configured in the software processing unit in cooperation with the at least one image processing hardening unit.
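A minimal software sketch of this partitioned compression step is given below. It is an illustration under assumed conventions (the partition tuples, function names, and block-averaging choice are not from this application; in the described embodiments the averaging would be performed by the hardened mean-calculation circuit rather than in software):

```python
# Partitioned foveated compression sketch: the partition containing the
# gaze point keeps full resolution, while peripheral partitions are
# downsampled by block averaging at their own ratios.
import numpy as np

def downsample_block_mean(region: np.ndarray, factor: int) -> np.ndarray:
    """Average non-overlapping factor x factor blocks (mean-circuit analogue)."""
    h, w = region.shape[:2]
    h, w = h - h % factor, w - w % factor   # crop to a multiple of factor
    r = region[:h, :w]
    return r.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def compress_partitions(image, partitions):
    """partitions: list of (row0, row1, col0, col1, factor) tuples."""
    tiles = []
    for r0, r1, c0, c1, factor in partitions:
        tile = image[r0:r1, c0:c1]
        tiles.append(tile if factor == 1 else downsample_block_mean(tile, factor))
    return tiles

# Hypothetical 8x8 grayscale frame: one full-resolution foveal partition
# and one peripheral partition downsampled 2x.
frame = np.arange(64, dtype=float).reshape(8, 8)
fovea, periphery = compress_partitions(frame, [(0, 4, 0, 4, 1), (4, 8, 0, 8, 2)])
print(fovea.shape, periphery.shape)  # (4, 4) (2, 4)
```

The per-partition `factor` plays the role of the downsampling ratio carried in the compression parameters, so different peripheral regions can be compressed more or less aggressively.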
Furthermore, in some embodiments of the present invention, third software is also configured in the software processing unit. The display pipeline is further configured to: determine, according to the compression parameters of the updated compression model, the coordinates of a plurality of partitions relative to the gaze point position and the upsampling ratio of each partition; and perform super-resolution processing on the first image according to the coordinates and upsampling ratio of each partition, using the third software configured in the software processing unit in cooperation with the at least one image processing hardening unit.
Furthermore, in some embodiments of the present invention, the plurality of partitions have the same and/or different downsampling ratios, and/or the plurality of partitions have the same and/or different upsampling ratios.
Furthermore, in some embodiments of the present invention, the display terminal is configured with a decompression module. The display pipeline and the display driver software and/or the firmware computing platform are respectively connected to the decompression module. The step of transmitting the second image obtained through the distortion correction processing and the compression processing to the display terminal for image display includes: transmitting the second image that has undergone the distortion correction processing and the compression processing to the decompression module via the display pipeline; transmitting the eye movement signal, the gaze point position and/or the compression parameters of the updated compression model to the decompression module via the display driver software and/or the firmware computing platform; using the decompression module to decompress the second image that has undergone the distortion correction processing and the compression processing according to the eye movement signal, the gaze point position and/or the compression parameters of the updated compression model, so as to obtain a third image; and displaying the image on the display terminal according to the third image obtained through the decompression processing.
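The decompression side can be sketched in the same spirit. This is a hedged illustration, not the module's actual design: the partition parameters mirror the assumed `(row0, row1, col0, col1, factor)` convention, and nearest-neighbour repetition stands in for whatever interpolation or super-resolution the decompression module actually applies:

```python
# Decompression sketch: each compressed tile is upsampled back to its
# original footprint using the partition coordinates and ratios carried
# in the compression parameters.
import numpy as np

def upsample_nearest(tile: np.ndarray, factor: int) -> np.ndarray:
    """Repeat each pixel factor x factor times (nearest-neighbour)."""
    return np.repeat(np.repeat(tile, factor, axis=0), factor, axis=1)

def decompress_partitions(tiles, partitions, full_shape):
    """Reassemble a full-resolution frame from per-partition tiles."""
    frame = np.zeros(full_shape)
    for tile, (r0, r1, c0, c1, factor) in zip(tiles, partitions):
        frame[r0:r1, c0:c1] = upsample_nearest(tile, factor) if factor > 1 else tile
    return frame

# Hypothetical parameters: a full-resolution foveal tile and a
# 2x-compressed peripheral tile restored into an 8x8 frame.
params = [(0, 4, 0, 4, 1), (4, 8, 0, 8, 2)]
tiles = [np.ones((4, 4)), np.full((2, 4), 5.0)]
restored = decompress_partitions(tiles, params, (8, 8))
print(restored.shape)  # (8, 8)
```

Because the same partition parameters drive both directions, the decompression module only needs the updated compression model's parameters (plus, in some embodiments, the eye movement signal or gaze point position) to reconstruct the third image.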
Furthermore, in some embodiments of the present invention, a decompression module is also integrated in the display pipeline. The display driver software and/or the firmware computing platform are connected to the decompression module. The software processing unit and the at least one image processing hardening unit are connected to the display terminal via the decompression module. The step of transmitting the second image obtained through the distortion correction processing and the compression processing to the display terminal for image display includes: transmitting the second image that has undergone the distortion correction processing and the compression processing to the decompression module via the software processing unit and the at least one image processing hardening unit; transmitting the compression parameters of the updated compression model to the decompression module via the display driver software and/or the firmware computing platform; using the decompression module to decompress the second image that has undergone the distortion correction processing and the compression processing according to the compression parameters of the updated compression model, so as to obtain a third image; and transmitting the third image obtained through the decompression processing to the display terminal for image display.
In addition, the image processing method provided according to the second aspect of the present invention includes the following steps: obtaining an eye movement signal and a first image to be processed; performing distortion correction processing on the first image using first software configured in a software processing unit integrated in a display pipeline, and at least one image processing hardening unit integrated therein; compressing the first image according to the eye movement signal, using second software configured in the software processing unit and the at least one image processing hardening unit; and transmitting the second image that has undergone the distortion correction processing and the compression processing to a display terminal for image display.
In addition, the computer-readable storage medium provided according to the third aspect of the present invention stores computer instructions thereon. When the computer instructions are executed by a processor, the image processing method provided in the second aspect of the present invention is implemented.
In addition, the extended reality display device provided according to the fourth aspect of the present invention includes an eye tracker, a main processor, a coprocessor, and a display terminal. The eye tracker is used to collect the user's eye movement signal. The main processor outputs a first image that has or has not undergone gaze point rendering compression, wherein the gaze point rendering compression is implemented based on the eye movement signal. The coprocessor may be the image processor provided in the first aspect of the present invention, wherein the image processor is respectively connected to the eye tracker and the main processor to obtain the first image and the eye movement signal. The display terminal is connected to the image processor to obtain and display a second image that has undergone the distortion correction processing and compression processing of the image processor.
Brief Description of the Drawings
The above features and advantages of the present invention can be better understood after reading the detailed description of the embodiments of the present disclosure in conjunction with the following drawings. In the drawings, the components are not necessarily drawn to scale, and components with similar related characteristics or features may have the same or similar reference numerals.
FIG. 1 shows a schematic diagram of an extended reality display device provided according to some embodiments of the present invention.
FIG. 2 shows a schematic flow chart of an image processing method provided according to some embodiments of the present invention.
FIG. 3 shows a schematic flow chart of image recompression provided according to some embodiments of the present invention.
FIG. 4 shows a schematic diagram of local image compression processing provided according to some embodiments of the present invention.
Detailed Description
The following specific embodiments illustrate the implementation of the present invention, and those skilled in the art can easily understand other advantages and effects of the present invention from the contents disclosed in this specification. Although the description of the present invention will be presented in conjunction with preferred embodiments, this does not mean that the features of the invention are limited to those embodiments. On the contrary, the purpose of presenting the invention in conjunction with the embodiments is to cover other alternatives or modifications that may be derived from the claims of the present invention. In order to provide a thorough understanding of the present invention, the following description will include many specific details. The present invention may also be implemented without these details. In addition, in order to avoid confusing or obscuring the focus of the present invention, some specific details will be omitted from the description.
In the description of the present invention, it should be noted that, unless otherwise clearly specified and limited, the terms "installed", "connected" and "coupled" should be understood in a broad sense: for example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical; it may be direct, indirect via an intermediate medium, or an internal communication between two components. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
In addition, the terms "upper", "lower", "left", "right", "top", "bottom", "horizontal" and "vertical" used in the following description should be understood as the orientations depicted in the corresponding passage and the related drawings. These relative terms are only for convenience of description and do not mean that the device described must be manufactured or operated in a specific orientation, and therefore should not be understood as limiting the present invention.
It will be understood that although the terms "first", "second", "third", etc. may be used herein to describe various components, regions, layers and/or sections, these components, regions, layers and/or sections should not be limited by these terms, which are only used to distinguish different components, regions, layers and/or sections. Therefore, a first component, region, layer and/or section discussed below may be referred to as a second component, region, layer and/or section without departing from some embodiments of the present invention.
As mentioned above, in view of the high-resolution display requirements of the Extended Reality (XR) field, the display resolution of existing XR devices generally cannot reach the resolution required for 4K display. When the resolution within the human eye's gaze range is less than 40 pixels per degree, the user can clearly see individual pixels and their boundaries, which degrades the display effect. To address this need for higher display resolution, some improved techniques based on multi-screen superposition or multi-image nested output have been proposed in the art. However, among these improved techniques, the solution that uses mechanical structures to manipulate multiple screens to achieve high-definition gaze point rendering suffers from excessive hardware complexity and occupies a large volume. The solution that uses multi-image nested transmission for gaze point compression produces obvious jagged edges and flickering at the boundary between the high-definition image and the low-definition image, which also degrades the display effect.
In order to overcome the above defects of the prior art, the present invention provides an image processor, an image processing method, a computer-readable storage medium, and an extended reality display device, which can provide a compression processing function for high-resolution display of a local area according to the user's eye movement signal, and improve the efficiency of data storage, processing and transmission by configuring and reusing image processing hardening units, thereby meeting the high-resolution display requirements of XR display devices based on limited software and hardware resources.
In some non-limiting embodiments, the image processing method provided in the second aspect of the present invention can be implemented via the image processor provided in the first aspect of the present invention. Specifically, the image processor can be independently configured, in the form of a coprocessor chip, in the extended reality display device provided in the fourth aspect of the present invention, or can be integrated, in the form of software programs and hardware units, into the main processors of that device, such as the central processing unit (CPU) and the graphics processing unit (GPU).
Furthermore, the image processor provided in the first aspect of the present invention may be configured with, or connected to, a processing unit and a storage unit for software programs. The storage unit includes but is not limited to the computer-readable storage medium provided in the third aspect of the present invention, on which computer instructions are stored. The processing unit is connected to the storage unit and is configured to execute the computer instructions stored on the storage unit, so as to implement the image processing method provided in the second aspect of the present invention.
The working principles of the above image processor and extended reality display device will be described below in conjunction with some embodiments of the image processing method. In some non-limiting embodiments, the extended reality display device can adopt a coprocessor-based system architecture. Those skilled in the art will understand that these embodiments of the image processing method are only some non-limiting implementations provided by the present invention, intended to clearly demonstrate the main concept of the present invention and to provide some specific solutions convenient for the public to implement, rather than to limit all functions or all working modes of the image processor and the extended reality display device. Similarly, the image processor and the extended reality display device are only one non-limiting implementation provided by the present invention, and do not limit the subjects that execute the steps of these image processing methods.
Please refer to FIG. 1 and FIG. 2 together. FIG. 1 shows a schematic diagram of an extended reality display device provided according to some embodiments of the present invention. FIG. 2 shows a schematic flow chart of an image processing method provided according to some embodiments of the present invention.
As shown in FIG. 1, in some embodiments of the present invention, the extended reality display device may be configured with a coprocessor 10, an eye tracker 20, a main processor 30, a display driver chip 40, and a display terminal 50. Furthermore, the coprocessor 10 may be the image processor provided in the first aspect of the present invention, which is configured with a display pipeline. A software processing unit and at least one image processing hardening unit are integrated in the display pipeline, which is configured to use different software programs together with the same image processing hardening units to perform image processing flows such as image distortion correction and image compression, thereby improving the efficiency of data storage, processing and transmission, and meeting the high-resolution display requirements of XR display devices based on limited software and hardware resources. The eye tracker 20 is used to collect the user's eye movement signals, such as gaze position and gaze direction. The main processor 30 is selected from common image processors such as a central processing unit (CPU) and a graphics processing unit (GPU), and is used to output a first image to be processed by the coprocessor 10. The display driver chip 40 may be integrated in the display terminal 50, and is used to obtain the second image processed by the coprocessor 10 and display it via the pixel array circuit and the OLED/LED pixel array configured in the display terminal 50.
Specifically, the software processing unit configured in the display pipeline of the coprocessor 10 includes, but is not limited to, a distortion correction unit and/or an image compression unit. The hardened image-processing unit configured in the display pipeline may be selected from at least one of a transistor-level cache memory, a weighted-summation circuit, a mean-calculation circuit, a filtering circuit, and a pixel-position mapping circuit, and is used to cache pixel data of the first image for the current frame and/or several previous frames, and/or to perform hardened computations of weighted summation, mean calculation, filtering, and/or pixel-position mapping on that pixel data.
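The division of labor described here — software orchestration over a small set of fixed-function hardened compute primitives — can be sketched in Python. The function names and interfaces below are illustrative assumptions (the patent describes transistor-level circuits, not an API), and the affine form of the position map is only a stand-in:

```python
# Hypothetical models of the hardened compute primitives. In the patent these
# are circuits; here each is a plain function so the software/hardware split
# can be illustrated.

def weighted_sum(pixels, weights):
    """Weighted-summation circuit: sum(p * w) over cached pixel data."""
    return sum(p * w for p, w in zip(pixels, weights))

def mean_value(pixels):
    """Mean-calculation circuit: average of cached pixel data."""
    return sum(pixels) / len(pixels)

def position_map(x, y, coeffs):
    """Pixel-position mapping circuit: maps an output coordinate to a source
    coordinate (a simple affine map as a stand-in for the real mapping)."""
    ax, bx, cx, ay, by, cy = coeffs
    return (ax * x + bx * y + cx, ay * x + by * y + cy)

# The software unit merely sequences these primitives over cached data:
cached = [10, 20, 30, 40]
print(weighted_sum(cached, [0.25, 0.25, 0.25, 0.25]))  # 25.0
print(mean_value(cached))                              # 25.0
```

Because the primitives are fixed, the first, second, and third software can all reuse the same units with different parameter streams, which is the resource-sharing point the paragraph makes.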
As shown in FIG. 2, during image processing the display pipeline may first obtain the user's eye-movement signal and the first image to be processed. Here, the eye-movement signal includes, but is not limited to, data such as the user's eye deflection angle, gaze position, and gaze direction. The display pipeline may be connected directly to the eye tracker 20 and obtain the eye-movement signal from it directly, or it may be connected to the eye tracker 20 indirectly via an image processing unit such as the GPU of the main processor 30 and obtain the user's eye-movement signal synchronously via the main processor 30. In addition, for embodiments of mixed reality (MR) display, the first image may be an original virtual rendered image generated by an image processing unit such as the GPU of the main processor 30.
Further, in some preferred embodiments, the main processor 30 may also be connected to the eye tracker 20 and configured to obtain the user's eye-movement signal from it, first perform foveated (gaze-point) rendering compression on the generated original virtual rendered image according to that signal, and then send the compressed image to the coprocessor 10, so as to reduce the data-transmission and data-processing load of the entire architecture.
As shown in FIGS. 1 and 2, after obtaining the first image and the user's eye-movement signal, the display pipeline may use the image distortion correction unit configured in it (i.e., the first software), together with at least one hardened image-processing unit integrated in it, to perform distortion correction on the acquired first image.
Specifically, in response to acquiring the first image to be processed, the image distortion correction unit (i.e., the first software) may first determine the pixel data and processing parameters needed for distortion correction. In some embodiments, this pixel data may preferably be determined according to the user's eye-movement signal. The distortion correction unit may then obtain the processing parameters from the corresponding memories and the required pixel cache data from the corresponding cache memories, and feed the parameters and cached pixel data, in turn, into one or more of the weighted-summation circuit, mean-calculation circuit, filtering circuit, and pixel-position mapping circuit, so that hardened computations of weighted summation, averaging, filtering, and/or pixel-position mapping are performed on the cached pixel data to obtain the corresponding hardened computation results. The distortion correction unit can then obtain the distortion-corrected image through software operations such as data arrangement and value assignment.
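As a concrete illustration of this flow — a position-mapping step followed by a weighted sum over cached neighbor pixels — here is a minimal bilinear inverse-warp. The simple radial model and the coefficient `k` are illustrative assumptions; the patent does not fix a particular correction formula:

```python
import math

def undistort(img, k=0.1):
    """Correct simple radial distortion on a grayscale image (list of rows).

    For each output pixel, a position-mapping step computes the source
    coordinate, then a weighted-sum step blends the four cached neighbor
    pixels (bilinear interpolation) -- the two hardened computations that
    the pipeline offloads to circuits.
    """
    h, w = len(img), len(img[0])
    cx, cy = (w - 1) / 2, (h - 1) / 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Position mapping: output pixel -> distorted source coordinate.
            dx, dy = x - cx, y - cy
            r2 = (dx * dx + dy * dy) / (cx * cx + cy * cy)
            sx = cx + dx * (1 + k * r2)
            sy = cy + dy * (1 + k * r2)
            # Weighted sum over the four cached neighbors (bilinear weights).
            x0, y0 = math.floor(sx), math.floor(sy)
            fx, fy = sx - x0, sy - y0
            acc = 0.0
            for px, py, wgt in ((x0, y0, (1 - fx) * (1 - fy)),
                                (x0 + 1, y0, fx * (1 - fy)),
                                (x0, y0 + 1, (1 - fx) * fy),
                                (x0 + 1, y0 + 1, fx * fy)):
                if 0 <= px < w and 0 <= py < h:
                    acc += img[py][px] * wgt
            out[y][x] = acc
    return out

# Horizontal gradient test image; pixels near the center should barely move.
grad = [[float(x) for x in range(8)] for _ in range(8)]
corrected = undistort(grad)
```

In the patent's scheme only the per-pixel orchestration (the loops, data arrangement, assignment) stays in software; the coordinate mapping and the four-tap weighted sum are what the hardened circuits compute.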
Further, in some embodiments as shown in FIG. 1, the coprocessor 10 may preferably also be configured with display driver software and/or a firmware computing platform, each connected to the eye tracker 20 and to the display pipeline. When compressing the first image, the coprocessor 10 may first obtain the user's eye-movement signal from the eye tracker 20 via the display driver software and/or firmware computing platform and perform eye-tracking computation to determine the gaze-point position. The display driver software and/or firmware computing platform can then update the system's internal gaze-point information according to that position to construct an updated compression model, and transmit the compression parameters of the updated model to the display pipeline for compressing the first image.
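A minimal sketch of such a compression-model update — mapping a computed gaze point to per-partition compression parameters that the pipeline can consume — might look like this. The rectangular-tile layout, the ring thresholds, and the factors 1/2/4 are illustrative assumptions; the patent leaves the model's exact form open:

```python
def build_compression_model(gaze_x, gaze_y, width, height, fovea_radius=200):
    """Rebuild per-partition compression parameters around the gaze point.

    Partitions near the gaze keep full detail; partitions farther away get
    progressively larger downsampling factors. Tiles at the right/bottom
    edges may extend past the image -- a simplification of this sketch.
    """
    model = []
    tile = fovea_radius
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            cx, cy = tx + tile / 2, ty + tile / 2
            dist = ((cx - gaze_x) ** 2 + (cy - gaze_y) ** 2) ** 0.5
            if dist < fovea_radius:
                factor = 1          # focus partition: no downsampling
            elif dist < 2 * fovea_radius:
                factor = 2          # near periphery: 2x downsampling
            else:
                factor = 4          # far periphery: 4x downsampling
            model.append({"rect": (tx, ty, tile, tile), "downsample": factor})
    return model

# Gaze at the center of a 1080p frame.
model = build_compression_model(960, 540, 1920, 1080)
```

Each new eye-tracking sample would regenerate this parameter list, which is then pushed to the display pipeline exactly as the paragraph describes for the compression parameters of the updated model.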
As shown in FIG. 2, in response to obtaining the first image and the eye-movement signal (or compression parameters associated with the user's eye-movement signal), the display pipeline can compress the first image according to that signal, using the image compression unit configured in it (i.e., the second software) together with at least one hardened image-processing unit integrated in it.
Specifically, in response to obtaining the compression parameters associated with the user's eye-movement signal and the first image to be processed, the image compression unit (i.e., the second software) may first divide the first image into a plurality of partitions around the user's gaze-point position according to the compression parameters of the updated compression model, and determine the coordinate range of each partition and the downsampling factor of each non-focus partition far from the gaze-point position. It may then obtain, according to the coordinates of each pixel in each partition, the processing parameters of the downsampling operation from the corresponding memories, and the pixel cache data required for downsampling from the corresponding cache memories. Next, the image compression unit feeds the obtained processing parameters and the cached pixel data, in turn, into one or more of the weighted-summation circuit, mean-calculation circuit, filtering circuit, and pixel-position mapping circuit, so that hardened computations of weighted summation, averaging, filtering, and/or pixel-position mapping are performed on the cached pixel data, yielding a first hardened computation result based on the downsampling. Finally, the image compression unit can obtain the downsampled compressed image through software operations such as data arrangement and value assignment.
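The downsampling path above — per-partition block averaging, with the averaging itself being the mean-calculation circuit's job — can be sketched as follows. The partition rectangle format and the block-mean kernel are illustrative; the patent leaves both to the compression model:

```python
def downsample_block(img, x, y, factor):
    """Mean-calculation step: average one factor x factor block of cached pixels."""
    vals = [img[y + j][x + i] for j in range(factor) for i in range(factor)]
    return sum(vals) / len(vals)

def compress_partition(img, rect, factor):
    """Downsample one non-focus partition of a grayscale image (list of rows).

    Software walks the partition block by block; each block mean is the kind
    of computation the patent offloads to the hardened mean circuit.
    """
    x0, y0, w, h = rect
    return [[downsample_block(img, x0 + bx * factor, y0 + by * factor, factor)
             for bx in range(w // factor)]
            for by in range(h // factor)]

# An 8x8 diagonal gradient: a partition downsampled 2x shrinks to 4x4 means.
img = [[float(x + y) for x in range(8)] for y in range(8)]
small = compress_partition(img, (0, 0, 8, 8), 2)
print(len(small), len(small[0]))  # 4 4
```

Focus partitions (factor 1) would simply pass through unchanged, matching the claim that only non-focus partitions are downsampled.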
In addition, in some embodiments of the present invention, the display pipeline of the coprocessor 10 may preferably also be configured with a super-resolution unit (i.e., third software), which performs recompression of the first image together with the image compression unit (i.e., the second software).
Specifically, during recompression of the first image, in response to obtaining the compression parameters associated with the user's eye-movement signal and the first image to be processed, the super-resolution unit (i.e., the third software) may first divide the first image into a plurality of partitions around the user's gaze-point position according to the compression parameters of the updated compression model, and determine the coordinate range of each partition as well as the upsampling factor of each focus partition near the gaze-point position. The super-resolution unit may then obtain, according to the coordinates of each pixel in each partition, the processing parameters of the upsampling operation from the corresponding memories, and the pixel cache data required for super-resolution from the corresponding cache memories. Here the pixel cache data includes, but is not limited to, cached data of at least one nearby pixel in the current frame, and cached data of those nearby pixels in at least one previous frame. The super-resolution unit may then feed the obtained processing parameters and the cached pixel data, in turn, into one or more of the weighted-summation circuit, mean-calculation circuit, filtering circuit, and pixel-position mapping circuit, so that hardened computations of weighted summation, averaging, filtering, and/or pixel-position mapping are performed on the cached pixel data, yielding a second hardened computation result based on the super-resolution processing. Finally, the super-resolution unit can combine this second hardened computation result with the first hardened computation result described above and, through software operations such as data arrangement and value assignment, obtain the recompressed image shown in FIG. 4.
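The super-resolution step combines cached neighbor pixels of the current frame with the same pixels' history from a previous frame via the weighted-summation circuit. A heavily simplified 2x sketch follows; the spatial kernel weights and the temporal blend weight are illustrative assumptions, not values from the patent:

```python
def upsample_2x(curr, prev, history_weight=0.25):
    """2x super-resolution of a grayscale partition (list of rows).

    Each output pixel is a weighted sum over cached current-frame neighbors,
    then blended with the co-located pixel of the previous upsampled frame --
    the weighted-summation hardened computation applied twice.
    """
    h, w = len(curr), len(curr[0])
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for y in range(2 * h):
        for x in range(2 * w):
            # Nearest cached neighbors in the current low-res frame.
            sx, sy = min(x // 2, w - 1), min(y // 2, h - 1)
            nx, ny = min(sx + 1, w - 1), min(sy + 1, h - 1)
            spatial = (0.5 * curr[sy][sx]
                       + 0.25 * curr[sy][nx]
                       + 0.25 * curr[ny][sx])
            # Temporal reuse of the cached history frame, when available.
            temporal = prev[y][x] if prev is not None else spatial
            out[y][x] = (1 - history_weight) * spatial + history_weight * temporal
    return out

lowres = [[0.0, 4.0], [4.0, 8.0]]
hires = upsample_2x(lowres, None)
```

In the patent's terms, the `spatial` and temporal blends are the hardened weighted sums producing the second hardened computation result; only the loop bookkeeping remains in software.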
As shown in FIG. 4, after this recompression based on eye-movement information, at least one focus partition of the first image near the gaze-point position has been super-resolved at identical and/or different upsampling factors, reaching in those partitions an equivalent resolution of a 4K display at 40 pixels per degree. Meanwhile, at least one non-focus partition far from the gaze-point position has been compressed at identical and/or different downsampling factors, reducing the data-processing and/or data-transmission load of those partitions.
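The "40 pixels per degree" figure can be related to panel resolution and field of view by ppd = horizontal pixels / horizontal FOV in degrees. As a rough consistency check (the 96-degree FOV below is an assumption for illustration; the patent does not state one):

```python
def pixels_per_degree(horizontal_pixels, horizontal_fov_deg):
    """Angular resolution of a display spanning a given field of view."""
    return horizontal_pixels / horizontal_fov_deg

# A 4K-wide panel (3840 px) spread across a 96-degree horizontal FOV:
print(pixels_per_degree(3840, 96))  # 40.0
```

This is why 40 ppd in the focus partitions is described as "4K-equivalent": the fovea sees the same angular pixel density that a 3840-pixel-wide image would give over such a field of view.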
Compared with a pure-software approach to distortion correction and image compression, which must compute and cache the pixel data of every relevant pixel frame by frame and partition by partition, the present invention, by designing and reusing the cache memory, weighted-summation circuit, mean-calculation circuit, filtering circuit, and pixel-position mapping circuit, effectively raises the reuse rate of cached pixel data across frames and partitions and reduces the data-processing load on the software units, thereby lowering the overall data-processing and transmission load of the coprocessor 10 and helping the coprocessor 10 meet the high-resolution display requirements of XR display devices with limited software and hardware resources.
As shown in FIG. 2, after the second image has been obtained through the distortion correction and compression performed by the coprocessor 10, the coprocessor 10 can transmit it to the display terminal 50 for high-resolution display of the extended reality image.
Specifically, for embodiments in which the main processor 30 sends the coprocessor 10 a first image that has already undergone foveated rendering compression, a decompression module may preferably also be integrated in the display pipeline of the coprocessor 10. The display driver software and/or firmware computing platform described above is connected to this decompression module, and the software processing unit and the at least one hardened image-processing unit configured in the display pipeline are likewise connected to the display terminal 50 via it. When transmitting the second image to the display terminal 50, the decompression module in the coprocessor 10 first obtains, via the software processing unit and the at least one hardened image-processing unit integrated in the display pipeline, the second image that has undergone distortion correction and compression; it then obtains the compression parameters of the updated compression model via the display driver software and/or firmware computing platform, and decompresses the second image according to those parameters to obtain a third image. The coprocessor 10 can then transmit the decompressed third image to the display terminal 50 for high-resolution display of the extended reality image.
Further, for embodiments in which the main processor 30 sends the coprocessor 10 a first image that has undergone foveated rendering compression, the decompression module may instead preferably be configured on the display terminal 50 or on its display driver chip 40. Taking the case where it resides on the display driver chip 40 as an example, the decompression module may be connected to the display pipeline of the coprocessor 10 and to the display driver software and/or firmware computing platform respectively. When the second image is transmitted to the display terminal 50, the decompression module on the display driver chip 40 first obtains, via the software processing unit and the at least one hardened image-processing unit integrated in the display pipeline, the second image that has undergone distortion correction and compression; it then obtains the user's eye-movement signal, the gaze-point position, and/or the compression parameters of the updated compression model via the display driver software and/or firmware computing platform, and decompresses the second image accordingly to obtain a third image. The display driver chip 40 can then directly drive the pixel array circuit of the display terminal 50, such as an OLED or LED display, according to the decompressed third image, so as to present the extended reality image at high resolution on the display terminal 50.
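On the receiving side, decompression inverts the foveated compression by re-expanding each downsampled partition to its original pixel grid using the same per-partition parameters. The nearest-neighbor expansion below is an illustrative choice; the patent only requires that decompression use the compression model's parameters:

```python
def decompress_partition(small, factor):
    """Re-expand one downsampled partition (list of rows) to its original
    size by replicating each stored value over its factor x factor block."""
    out = []
    for row in small:
        expanded = [v for v in row for _ in range(factor)]
        out.extend([list(expanded) for _ in range(factor)])
    return out

# A 2x-downsampled 2x2 partition decompresses back to a 4x4 block.
small = [[1.0, 2.0], [3.0, 4.0]]
full = decompress_partition(small, 2)
print(full[0])  # [1.0, 1.0, 2.0, 2.0]
```

Because the expansion is driven only by the per-partition factor, the same routine works whether the module sits in the coprocessor's pipeline, on the display driver chip, or in the pixel array circuit; only the point at which the parameters are delivered changes.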
Still further, in some embodiments the decompression module may more advantageously be configured in the pixel array circuit of the display terminal 50 itself. In that case, all image data passed through the architecture of the extended reality display apparatus remains foveated-rendering-compressed, further reducing the data-transmission and data-processing load of the whole architecture. The working principle of a decompression module configured in the pixel array circuit of the display terminal 50 is similar to that of the embodiment configured on the display driver chip 40 and is not repeated here.
Finally, those skilled in the art will understand that the above system architecture of the extended reality display apparatus employing the coprocessor 10 is only a non-limiting embodiment provided by the present invention, intended to clearly present its main concept and to offer a concrete scheme the public can readily implement, rather than to limit the scope of protection of the present invention.
Optionally, in other embodiments, the image processor provided in the first aspect of the present invention may also be integrated, in the form of software programs and hardware units, into a main processor unit such as the central processing unit (CPU) or graphics processing unit (GPU) of the extended reality display apparatus provided in the fourth aspect of the present invention, achieving the same technical effects, which are not repeated here.
Although the methods above are illustrated and described as a series of acts for simplicity of explanation, it is to be understood and appreciated that these methods are not limited by the order of the acts, since according to one or more embodiments some acts may occur in different orders and/or concurrently with other acts that are illustrated and described herein, or that are not illustrated and described herein but would be understood by those skilled in the art.
Those skilled in the art will appreciate that information, signals, and data may be represented using any of a variety of different technologies and techniques. For example, the data, instructions, commands, information, signals, bits, symbols, and chips referenced throughout the description above may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Those skilled in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The various illustrative logical modules and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general-purpose processor, an NPU (AI network-model computing acceleration processor), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software as a computer program product, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Any connection is also properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber-optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

  1. An image processor, characterized by comprising a display pipeline, wherein the display pipeline integrates a software processing unit and at least one hardened image-processing unit, and is configured to:
    obtain a user's eye-movement signal and a first image to be processed;
    perform distortion correction on the first image using first software configured in the software processing unit together with the at least one hardened image-processing unit;
    compress the first image according to the eye-movement signal, using second software configured in the software processing unit together with the at least one hardened image-processing unit; and
    transmit a second image obtained through the distortion correction and the compression to a display terminal for image display.
  2. The image processor of claim 1, wherein the at least one hardened image-processing unit is selected from at least one of a cache memory, a weighted-summation circuit, a mean-calculation circuit, a filtering circuit, and a pixel-position mapping circuit.
  3. The image processor of claim 1, wherein the display pipeline is connected to a main processor of an extended reality display apparatus, and the step of obtaining the user's eye-movement signal and the first image to be processed comprises:
    obtaining, via the main processor, a real-scene image and/or a virtual image that has undergone foveated (gaze-point) rendering compression, wherein the foveated rendering compression is performed based on the eye-movement signal.
  4. The image processor of claim 1, further comprising display driver software and/or a firmware computing platform, the display driver software and/or the firmware computing platform being respectively connected to an eye tracker and to the display pipeline, wherein the step of obtaining the user's eye-movement signal and the first image to be processed comprises:
    obtaining the eye-movement signal from the eye tracker via the display driver software and/or the firmware computing platform, and performing eye-tracking computation to determine a gaze-point position;
    updating gaze-point information according to the gaze-point position to construct an updated compression model; and
    transmitting compression parameters of the updated compression model to the display pipeline.
  5. The image processor of claim 4, wherein the step of compressing the first image according to the eye-movement signal, using the second software configured in the software processing unit and the at least one hardened image-processing unit, comprises:
    determining, according to the compression parameters of the updated compression model, coordinates of a plurality of partitions relative to the gaze-point position and a downsampling factor of each of the partitions; and
    performing the compression on the first image according to the coordinates and downsampling factor of each of the partitions, using the second software configured in the software processing unit in cooperation with the at least one hardened image-processing unit.
  6. The image processor of claim 5, wherein the software processing unit is further configured with third software, and the display pipeline is further configured to:
    determine, according to the compression parameters of the updated compression model, coordinates of a plurality of partitions relative to the gaze-point position and an upsampling factor of each of the partitions; and
    perform super-resolution processing on the first image according to the coordinates and upsampling factor of each of the partitions, using the third software configured in the software processing unit in cooperation with the at least one hardened image-processing unit.
  7. The image processor according to claim 5 or 6, wherein the plurality of partitions have the same and/or different downsampling ratios, and/or the plurality of partitions have the same and/or different upsampling ratios.
  8. The image processor according to claim 4 or 5, wherein the display terminal is configured with a decompression module, the display pipeline as well as the display driver software and/or the firmware computing power platform are each connected to the decompression module, and the step of transmitting the second image obtained through the distortion correction processing and the compression processing to the display terminal for image display comprises:
    transmitting, via the display pipeline, the second image that has undergone the distortion correction processing and the compression processing to the decompression module;
    transmitting, via the display driver software and/or the firmware computing power platform, the eye movement signal, the gaze point position and/or the compression parameters of the updated compression model to the decompression module;
    decompressing, by the decompression module, the second image that has undergone the distortion correction processing and the compression processing, according to the eye movement signal, the gaze point position and/or the compression parameters of the updated compression model, to obtain a third image; and
    displaying an image on the display terminal according to the decompressed third image.
  9. The image processor according to claim 4 or 5, wherein a decompression module is further integrated in the display pipeline, the display driver software and/or the firmware computing power platform is connected to the decompression module, the software processing unit and the at least one image processing hardening unit are connected to the display terminal via the decompression module, and the step of transmitting the second image obtained through the distortion correction processing and the compression processing to the display terminal for image display comprises:
    transmitting, via the software processing unit and the at least one image processing hardening unit, the second image that has undergone the distortion correction processing and the compression processing to the decompression module;
    transmitting, via the display driver software and/or the firmware computing power platform, the compression parameters of the updated compression model to the decompression module;
    decompressing, by the decompression module, the second image that has undergone the distortion correction processing and the compression processing, according to the compression parameters of the updated compression model, to obtain a third image; and
    transmitting the decompressed third image to the display terminal for the image display.
  10. An image processing method, comprising the following steps:
    acquiring an eye movement signal and a first image to be processed;
    performing distortion correction processing on the first image using first software configured in a software processing unit integrated in a display pipeline and at least one image processing hardening unit integrated therein;
    compressing the first image according to the eye movement signal, using second software configured in the software processing unit and the at least one image processing hardening unit; and
    transmitting a second image that has undergone the distortion correction processing and the compression processing to a display terminal for image display.
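The overall flow of claims 8 through 10 — compress on the processor side, transmit the compression parameters alongside the compressed second image, and decompress at the display to recover a third image — can be illustrated with a minimal round trip. Everything here is a hypothetical software stand-in: single global ratio, decimation for compression, replication for decompression.

```python
def compress_block(image, ratio):
    """Decimate rows and columns: the 'second image' sent down the pipeline."""
    return [row[::ratio] for row in image[::ratio]]

def decompress_block(block, ratio):
    """Replicate pixels back to full size: the 'third image' at the display."""
    out = []
    for row in block:
        wide = [px for px in row for _ in range(ratio)]
        out.extend([wide[:] for _ in range(ratio)])
    return out

def round_trip(image, ratio):
    """Compress, carry the model parameters with the payload, decompress."""
    params = {"ratio": ratio}            # stands in for the updated compression model
    payload = compress_block(image, ratio)
    return decompress_block(payload, params["ratio"])
```

The key design point the claims encode is that the decompression module never re-derives the partition layout: it receives the same compression parameters the pipeline used, so both ends stay consistent frame by frame.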
  11. A computer-readable storage medium having computer instructions stored thereon, wherein the computer instructions, when executed by a processor, implement the image processing method according to claim 10.
  12. An extended reality display device, comprising:
    an eye tracker, configured to collect a user's eye movement signal;
    a main processor, configured to output a first image with or without gaze point rendering compression, wherein the gaze point rendering compression is implemented based on the eye movement signal;
    the image processor according to any one of claims 1 to 9, wherein the image processor is connected to the eye tracker and the main processor respectively, to acquire the first image and the eye movement signal; and
    a display terminal, connected to the image processor, to acquire and display a second image that has undergone the distortion correction processing and the compression processing of the image processor.
PCT/CN2023/106699 2022-09-27 2023-07-11 Image processor, processing method, storage medium and augmented reality display apparatus WO2024066659A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211184446.5 2022-09-27
CN202211184446.5A CN117834829A (en) 2022-09-27 2022-09-27 Image processor, processing method, storage medium, and augmented reality display device

Publications (1)

Publication Number Publication Date
WO2024066659A1 true WO2024066659A1 (en) 2024-04-04

Family

ID=90475897

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/106699 WO2024066659A1 (en) 2022-09-27 2023-07-11 Image processor, processing method, storage medium and augmented reality display apparatus

Country Status (2)

Country Link
CN (1) CN117834829A (en)
WO (1) WO2024066659A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160238852A1 (en) * 2015-02-13 2016-08-18 Castar, Inc. Head mounted display performing post render processing
US20160364904A1 (en) * 2015-06-12 2016-12-15 Google Inc. Electronic display stabilization for head mounted display
WO2018092147A1 (en) * 2016-11-21 2018-05-24 Karma Touch 2016 Ltd. Compressing and decompressing an image based on gaze point
KR20180078610A (en) * 2016-12-30 2018-07-10 주식회사 펀진 Information providing apparatus having a viewpoint tracking function
EP3521899A1 (en) * 2018-02-03 2019-08-07 Facebook Technologies, LLC Apparatus, system, and method for achieving intraframe image processing in head-mounted displays
CN110192391A (en) * 2017-01-19 2019-08-30 华为技术有限公司 A kind of method and apparatus of processing


Also Published As

Publication number Publication date
CN117834829A (en) 2024-04-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23869906

Country of ref document: EP

Kind code of ref document: A1