CN110322424B - High-resolution image processing method and device, VR image display method and VR equipment - Google Patents


Info

Publication number
CN110322424B
CN110322424B · CN201910621305.7A · CN201910621305A
Authority
CN
China
Prior art keywords
image
resolution
texture
sub
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910621305.7A
Other languages
Chinese (zh)
Other versions
CN110322424A (en)
Inventor
郭子兴
高岩
陈玉来
胡海宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Aiguan Vision Technology Co ltd
Original Assignee
Anhui Aiguan Vision Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Aiguan Vision Technology Co ltd filed Critical Anhui Aiguan Vision Technology Co ltd
Priority to CN201910621305.7A priority Critical patent/CN110322424B/en
Publication of CN110322424A publication Critical patent/CN110322424A/en
Application granted granted Critical
Publication of CN110322424B publication Critical patent/CN110322424B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • G06T7/49Analysis of texture based on structural texture description, e.g. using primitives or placement rules

Abstract

The invention relates to a high-resolution image processing method and device, a VR image display method, and VR equipment. The high-resolution image processing method segments, splices, and renders a high-resolution original image to obtain a readable image corresponding to the original image; the readable image so obtained is convenient for an image display device or image processing software to read and display from the frame buffer space. The high-resolution image processing method and device feature adaptive parameter setting, which helps to fully exploit the performance of the image display device. Accordingly, to display high-resolution VR image material, the VR image display method and VR equipment adopt the high-resolution image processing method to obtain a readable image that is stored in a cache, making it convenient to read and update in real time, which helps to improve the VR display effect.

Description

High-resolution image processing method and device, VR image display method and VR equipment
Technical Field
The present invention relates to the field of image display technologies, and in particular, to a high resolution image processing method, a high resolution image processing apparatus, a VR image display method, and a VR device.
Background
The resolution (Image Resolution) of an image represents the amount of information stored in the image and can be measured by the number of pixels per inch (PPI or DPI). The resolution of an image determines the quality of the image output: for the same picture, a higher resolution yields a sharper image with clearer details. For example, high-resolution image display may be used in situations where images with fine details (such as cultural relics) need to be examined closely, or where both a convenient overall view and very-high-resolution local details are required, and it can provide a better viewing experience for viewers.
However, it has been found that once the resolution of an image rises beyond a certain point (for example, to 20k × 50k), problems arise in image processing and display. For example, image processing software typically obtains image data from memory for processing and forms image frames that are stored in a frame buffer space before output. If the resolution is too high, the image data becomes very large, and the image display device or image processing software will often stall or even crash when processing it directly; that is, it is difficult to directly process and display high-resolution image data. In addition, limited by the manufacturing process, the screen resolution of a display is bounded. At present, ultra-high-resolution images are usually displayed by tiling multiple graphics cards and multiple display screens; however, such a system has a complex structure, occupies a large physical space, has high hardware cost and poor adjustability, and, constrained by transmission bandwidth and processing performance, the resolution of the displayed image is still limited.
Thus, there remains a need in the art for processing and displaying high resolution image data.
Disclosure of Invention
The invention provides a high-resolution image processing method and a high-resolution image processing device for processing high-resolution image data to obtain a readable image with high resolution. In addition, the invention also provides a VR image display method and VR equipment, which can output images with high resolution.
According to a first aspect of the present invention, there is provided a high resolution image processing method comprising the steps of:
providing an image material, wherein the image material comprises at least one frame of high-resolution original image;
dividing a frame of original image to be processed into a plurality of sub-images arranged according to numbers;
creating a plurality of texture units, wherein each texture unit corresponds to each sub-image one by one and has the same resolution;
setting the size of an output frame, and creating plane units which correspond to the texture units one to one, wherein the sum of the areas of all the plane units is equal to the area of the output frame, and the area of each plane unit is in direct proportion to the number of pixels of the corresponding texture unit;
setting the center of the output frame, and obtaining the position of each plane unit in the output frame according to the position of the sub-image corresponding to each texture unit in the original image; and
drawing, on each plane unit, the texture unit corresponding to that plane unit, and rendering to obtain a readable image corresponding to the original image.
Optionally, the resolution of each frame of the original image is W × H, W is a transverse resolution, H is a longitudinal resolution, and at least one of W and H is greater than 8000.
Optionally, the method for dividing the original image of a frame to be processed into a plurality of sub-images arranged according to numbers includes:
setting a standard texture scale, wherein the resolution of the standard texture scale is w x h, w is the transverse resolution, h is the longitudinal resolution, and the standard texture scale does not exceed the maximum size of a single texture supported by image rendering; and
respectively calculating H/h and W/w, taking the integer part of each result and adding 1 to the integer part when there is a fractional part, thereby dividing the original image into a plurality of sub-images distributed in m rows and n columns, wherein the resolution of at least one sub-image is w × h, and m and n are integers.
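The integer-part-plus-one rule above is ceiling division. A minimal sketch (Python is used here only for illustration; the patent prescribes no language, and the function name is hypothetical):

```python
import math

def grid_counts(W, H, w, h):
    """Row and column counts when splitting a W x H image into tiles of
    at most w x h pixels: take the integer part of the quotient and add 1
    whenever there is a remainder, i.e. ceiling division."""
    m = math.ceil(H / h)  # rows
    n = math.ceil(W / w)  # columns
    return m, n
```

For example, a 100 × 100 image split with a 9 × 8 standard texture scale yields m = 13 rows and n = 12 columns, matching the worked example later in the description.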
Optionally, the size of the output frame, the size of each plane unit, and the resolution of the texture unit corresponding to that plane unit satisfy the following two relations:

size_w(i, j) / size_W = f(i, j)_u / W

and

size_h(i, j) / size_H = f(i, j)_v / H

wherein size_W is the width of the output frame, size_H is the height of the output frame, f(i, j)_u is the transverse resolution of the texture unit corresponding to the sub-image in row i and column j, f(i, j)_v is the longitudinal resolution of the texture unit corresponding to the sub-image in row i and column j, size_w(i, j) is the width of the plane unit corresponding to the sub-image in row i and column j, size_h(i, j) is the height of the plane unit corresponding to the sub-image in row i and column j, W and H are the transverse and longitudinal resolutions of the original image, i ∈ {1, 2, …, m}, and j ∈ {1, 2, …, n}.
Optionally, the method for obtaining the position of each plane unit in the output frame according to the position of the sub-image corresponding to each texture unit in the original image includes:
setting an image origin pixel of the original image, wherein the image origin pixel is located at the top-left corner vertex of the sub-image in the first row and first column;

calculating the offset of the center point pixel of the sub-image corresponding to each texture unit from the image origin pixel, and obtaining therefrom the offset of the center point pixel of the sub-image corresponding to each texture unit from the center point pixel of the original image; and

keeping the aspect ratio of the original image, so that, in each group of corresponding plane unit and texture unit, the offset of the plane unit relative to the center of the output frame is in the same proportion as the offset of the center point pixel of the sub-image corresponding to the texture unit, thereby obtaining the position of each plane unit from its offset relative to the center of the output frame.
Optionally, the size of the output frame is adjustable, and the resolution of the standard texture scale is adjustable.
According to a second aspect of the present invention, there is provided a high resolution image processing apparatus, the apparatus comprising a processor and a memory configured to store executable instructions of the processor, which when executed by the processor, perform the above-described high resolution image processing method.
According to a third aspect of the present invention, there is provided a high resolution image processing apparatus, the apparatus comprising:
an internal memory, in which image material is stored, the image material comprising at least one frame of a high-resolution original image;
the segmentation module is configured to segment a frame of original image to be processed into a plurality of sub-images arranged according to numbers;
a rendering platform configured to create and render a plurality of texture units, each of the texture units created by the rendering platform corresponding to each of the sub-images one-to-one and having the same resolution;
the matching module is configured to set the size of an output frame and create plane units corresponding to the texture units one by one, the sum of the areas of all the plane units is equal to the area of the output frame, and the area of each plane unit is in direct proportion to the number of pixels of the corresponding texture unit;
the splicing module is configured to set the center of the output frame and obtain the position of each plane unit in the output frame according to the position of the sub-image corresponding to each texture unit in the original image; and
the video memory, configured to store the texture units and a readable image corresponding to the original image, wherein the readable image is obtained by drawing, on each plane unit, the texture unit corresponding to that plane unit, and rendering.
Optionally, the segmentation module includes:
a first analysis submodule configured to set a standard texture scale having a resolution w x h, a lateral resolution w, and a longitudinal resolution h, the standard texture scale not exceeding a maximum scale of a single texture supported by image rendering; and
the first calculation submodule, configured to calculate H/h and W/w respectively and round up when there is a remainder, the original image thereby being divided into a plurality of sub-images distributed in m rows and n columns, wherein the resolution of at least one sub-image is w × h, W and H are respectively the transverse and longitudinal resolutions of the original image, and m and n are integers.
Optionally, the splicing module includes:
the second analysis submodule is configured to set the center of the output frame and set an image origin of the original image, wherein the image origin is located at the top left corner vertex position of the sub-images in the first row and the first column; and
the second calculation submodule, configured to calculate the offset of the center point pixel of the sub-image corresponding to each texture unit from the image origin, obtain the offset of the center point pixel of the sub-image corresponding to each texture unit from the center point pixel of the original image, and keep the aspect ratio of the original image, so that, in each group of corresponding plane unit and texture unit, the offset of the plane unit relative to the center of the output frame is in the same proportion as the offset of the center point pixel of the sub-image corresponding to the texture unit, the position of each plane unit thereby being obtained from its offset relative to the center of the output frame.
Optionally, the apparatus includes at least one display module interface configured to connect to a display module configured to display each frame of the readable image in the output frame.
Optionally, the display module is a tiled screen, a VR display assembly, or a projection display assembly.
According to the high-resolution image processing method and device provided by the invention, a readable image corresponding to the original image is obtained by segmenting, splicing, and rendering the high-resolution original image, and the readable image so obtained is convenient for an image display device or image processing software to read and display from the frame buffer space. The method and device feature adaptive parameter setting, which helps to fully exploit the performance of the image display device. Meanwhile, the resolution of the loadable original image is not limited, and the resulting high-resolution image ensures both the observability of the whole image and the clear display of local details. The size and position of the readable image can be adjusted according to the characteristics of the display module connected to the display module interface, and its value provides guidance when deploying the actual display module.
According to a fourth aspect of the present invention, there is provided a VR image display method, wherein each frame of original image of a VR image material is processed by the high resolution image processing method, and the obtained VR readable image is stored in a buffer and displayed by a VR terminal worn by a user.
Optionally, before processing each frame of original image of the VR image material by using the above-mentioned high resolution image processing method, the VR image display method further includes:
setting a real physical space for observing the VR image; and
setting the distance between the VR image and the user, and the size of the VR image, according to the size of the real physical space and the required image definition.
Optionally, the DPI value of the VR image is not less than the DPI value of a display in the VR terminal.
Optionally, the resolution of the original image is set as W × H, W being the transverse resolution and H the longitudinal resolution, so that the width size_W and the height size_H of the VR image satisfy the relationship size_W / size_H = W / H, and

W / size_W ≥ D_head,

wherein D_head is the DPI value of the display in the VR device, so that the width of the VR image is at most W / D_head. When the original width of the original image is greater than or equal to W / D_head, the width of the VR image is set to W / D_head; when the original width of the original image is less than W / D_head, the original width is magnified to obtain the width of the VR image. The height of the VR image is adjusted adaptively according to the width.
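Under the assumed reading that the VR image DPI (W / size_W) must not fall below the headset display DPI D_head, the sizing bound can be sketched as follows (function and parameter names are illustrative, not from the patent):

```python
def vr_image_size(W, H, d_head):
    """Sketch under an assumption: maximum physical VR-image width so
    that the image DPI (W / size_W) is not less than the headset display
    DPI d_head, with the height following the aspect ratio W : H."""
    size_W = W / d_head       # width bound implied by W / size_W >= d_head
    size_H = size_W * H / W   # keep size_W / size_H == W / H
    return size_W, size_H
```

For instance, an 8000 × 4000 original viewed on a 400-DPI headset display would cap the VR image at 20 × 10 inches under this reading.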
Optionally, in the real physical space, the distance h between the VR image and the user satisfies the relationship

h_h = (size_W / 2) / tan(FOV_h / 2), h_v = (size_H / 2) / tan(FOV_v / 2),

wherein FOV_h is the horizontal field angle of the VR terminal worn by the user, FOV_v is the vertical field angle of the VR terminal worn by the user, and h_h and h_v are the viewing distances corresponding to the boundaries of the horizontal and vertical field angles, respectively.
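Assuming the fill-the-field-of-view reading of this relation (the original equation is an unrecoverable image placeholder, so h = (extent / 2) / tan(FOV / 2) is a reconstruction), a sketch:

```python
import math

def viewing_distances(size_W, size_H, fov_h_deg, fov_v_deg):
    """Assumed reconstruction: the distance at which an image of width
    size_W (height size_H) exactly fills the horizontal (vertical)
    field of view, h = (extent / 2) / tan(FOV / 2)."""
    h_h = (size_W / 2.0) / math.tan(math.radians(fov_h_deg) / 2.0)
    h_v = (size_H / 2.0) / math.tan(math.radians(fov_v_deg) / 2.0)
    return h_h, h_v
```

With a 2 × 2 (physical-unit) image and 90° fields of view in both directions, both distances come out to the half-extent over tan 45°, i.e. 1 unit.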
Optionally, when the depth of the real physical space is greater than or equal to h_v and h_h, the VR image is displayed within the real physical space, and the width of the real physical space is not less than the width size_W of the VR image; when the depth of the real physical space is less than h_v and h_h, the VR image is displayed outside the real physical space.
According to a fifth aspect of the present invention, there is provided a VR device comprising a VR terminal for wearing by a user, a VR image processor, and a memory configured to store executable instructions of the VR image processor, the executable instructions when executed by the VR image processor performing the VR image display method described above.
According to the VR image display method and VR equipment, the high-resolution image processing method described above is used to display high-resolution VR image material: the original image is segmented, spliced, and rendered, and the resulting readable image is stored in a cache, facilitating reading and real-time updating and improving the image quality of VR display. In addition, because the high-resolution image is segmented, spliced, and rendered on the rendering platform and displayed through the VR terminal, compared with a system that tiles multiple graphics cards and displays, the system structure is simplified, hardware cost is reduced, the display frame rate is increased, and the display effect is improved.
Drawings
Fig. 1 is a flowchart illustrating a high resolution image processing method according to an embodiment of the invention.
Fig. 2 is a schematic diagram of an original image segmented by the high resolution image processing method according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a high resolution image processing apparatus according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a high resolution image processing apparatus according to another embodiment of the present invention.
Fig. 5 is a schematic diagram of a real physical space and a VR image in a VR image display method according to an embodiment of the present invention.
Fig. 6 is a flowchart illustrating a VR image display method according to an embodiment of the present invention.
Description of reference numerals:
100. 200-high resolution image processing means; 110-a processor; 120-a memory; 130-a display card; 210-a memory; 220-a segmentation module; 230-a rendering platform; 240-matching module; 250-a splicing module; 260-video memory.
Detailed Description
The high resolution image processing method and apparatus, VR image display method, and VR device of the present invention are further described in detail with reference to the accompanying drawings and the embodiments. The advantages and features of the present invention will become more apparent from the following description. It is to be noted that the drawings are in a very simplified form and are not to precise scale, which is provided for the purpose of facilitating and clearly illustrating embodiments of the present invention. The terms "first," "second," and the like in the description are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in other sequences than described or illustrated herein. Similarly, if the method described herein comprises a series of steps, the order in which these steps are presented herein is not necessarily the only order in which these steps may be performed, and some of the described steps may be omitted and/or some other steps not described herein may be added to the method.
Fig. 1 is a flowchart illustrating a high resolution image processing method according to an embodiment of the invention. A high resolution image processing method in one embodiment of the present invention is described below with reference to fig. 1.
The high resolution image processing method includes a first step S1: image material is provided, the image material comprising at least one frame of a high resolution original image.
Specifically, as described in the background, when the resolution of an image is increased beyond a certain degree, problems arise in image processing and display; "high resolution" in the embodiments therefore mainly refers to image resolutions beyond the reading capability of general image processing software or display devices. Such resolutions are higher than those of common full-high-definition (1920 × 1080), ultra-high-definition (3840 × 2160), and 4K (4096 × 2160) images, and the total pixel count can reach hundreds of millions of pixels or more, with no fixed upper limit. As an example, in one embodiment, at least one of the transverse and longitudinal resolutions of each frame of the high-resolution original image is 8000 or more. In another embodiment, the resolution of each frame of the original image is W × H, where W is the transverse resolution and H is the longitudinal resolution, W and H are both integers greater than or equal to 10000, i.e., the total number of pixels W × H of the image is ≥ 10^8.
The high resolution image processing method includes a second step S2: dividing an original image of a frame to be processed into a plurality of sub-images arranged according to numbers.
In an embodiment, the sub-images obtained after segmentation may be imported into a rendering platform to create and draw texture resources, so as to facilitate processing by the rendering platform; accordingly, in step S2, the resolution of each segmented sub-image does not exceed the maximum size of a single texture supported by image rendering. Specifically, the resolution width of the maximum single-texture size supported by a rendering platform may be denoted w_max and the resolution height h_max; the values of w_max and h_max may be obtained from the software parameters of the rendering platform, and on one rendering platform both w_max and h_max are 8192. Within the maximum texture size limit, the texture size actually adopted when segmenting the high-resolution original image (defined as the standard texture scale) is dynamically adjustable; for example, the resolution width of the standard texture scale is denoted w and the resolution height h, where w ∈ [0, w_max] and h ∈ [0, h_max].
Specifically, the step of dividing one frame of original image into a plurality of sub-images arranged by number may include: a first substep of setting a standard texture scale, wherein the resolution of the standard texture scale is w × h, w being the transverse resolution and h the longitudinal resolution, and the standard texture scale does not exceed the maximum size of a single texture supported by image rendering; and a second substep of calculating H/h and W/w respectively, taking the integer part of each result and adding 1 to it when there is a fractional part, thereby dividing the original image into a plurality of sub-images distributed in m rows and n columns. Thus, among the plurality of sub-images, the resolution of at least one sub-image is w × h, W and H are respectively the transverse and longitudinal resolutions of the original image, and m and n are integers, at least one of which is greater than 1.
Fig. 2 is a schematic diagram of an original image segmented by the high resolution image processing method according to an embodiment of the present invention. Referring to fig. 2, when one frame of original image is divided into sub-images of m rows and n columns, m and n satisfy m = H/h + 1 and n = W/w + 1, where H/h and W/w denote integer division and the added 1 accounts for the remainder after division forming one extra row (or column). The divided sub-images can be numbered in row-column order from top to bottom and left to right; the number of the sub-image in row i and column j is denoted (i, j), and its resolution is denoted f(i, j), where i ∈ {1, 2, …, m} and j ∈ {1, 2, …, n}.
By using this method, sub-images of at most four different resolutions are obtained by segmentation: the first sub-image has resolution w × h, numbered in the range i ∈ {1, 2, …, m−1}, j ∈ {1, 2, …, n−1}; the second sub-image has resolution (W − (n−1)·w) × h, numbered i ∈ {1, 2, …, m−1}, j = n; the third sub-image has resolution w × (H − (m−1)·h), numbered i = m, j ∈ {1, 2, …, n−1}; the fourth sub-image has resolution (W − (n−1)·w) × (H − (m−1)·h), numbered i = m, j = n. As an example, taking W = 100, H = 100, w = 9, h = 8, then m = 100/8 + 1 = 13 and n = 100/9 + 1 = 12; after segmentation, this yields 12 × 11 sub-images with resolution 9 × 8, 12 sub-images with resolution (100 − 11 × 9) × 8 = 1 × 8, 11 sub-images with resolution 9 × (100 − 12 × 8) = 9 × 4, and 1 sub-image with resolution 1 × 4.
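The worked example above can be checked with a short sketch that enumerates every tile's resolution (helper name is illustrative; the patent prescribes no code):

```python
from collections import Counter

def tile_sizes(W, H, w, h):
    """Resolution of every sub-image (i, j) produced by the split:
    interior tiles are w x h; edge tiles take the remainders
    W - (n-1)*w and H - (m-1)*h."""
    m = -(-H // h)  # rows (ceiling division on integers)
    n = -(-W // w)  # columns
    sizes = Counter()
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            tw = w if j < n else W - (n - 1) * w
            th = h if i < m else H - (m - 1) * h
            sizes[(tw, th)] += 1
    return sizes

# The worked example from the text: W = H = 100, w = 9, h = 8
sizes = tile_sizes(100, 100, 9, 8)
```

The tile counts reproduce the four resolutions listed above, and the tile areas sum back to the 100 × 100 = 10000 pixels of the original image.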
The high resolution image processing method includes a third step S3: and creating a plurality of texture units, wherein each texture unit corresponds to each sub-image in a one-to-one mode and has the same resolution.
In an embodiment, m × n texture units (i.e. texture resources) may be created by the rendering platform, the number of texture units corresponding to the number of sub-images, i.e. sub-image (i, j) corresponds to texture unit (i, j). The resolution of each texture unit is set equal to the resolution f (i, j) of the same numbered sub-image. The created texture unit can be loaded into a video memory for storage.
The high resolution image processing method includes a fourth step S4: setting the size of an output frame, and creating plane units corresponding to the texture units one by one, wherein the sum of the areas of all the plane units is equal to the area of the output frame, and the area of each plane unit is in direct proportion to the number of pixels of the corresponding texture unit.
The width of the image to be output, i.e., the output frame, is denoted size_W, and its height size_H; size_w(i, j) is the width of the plane unit corresponding to the texture unit in row i and column j, and size_h(i, j) is the height of the plane unit corresponding to the texture unit in row i and column j (width and height here do not refer to resolution width and height, but to physically measured width and height). The size of each plane unit can then be calculated from the ratio of the transverse and longitudinal resolutions of the corresponding texture unit (or sub-image) to the resolution of the original image, together with the size of the output frame. In a specific embodiment, the size of the output frame, the size of the plane unit, and the resolution of the texture unit corresponding to the plane unit satisfy the following two relations:

size_w(i, j) / size_W = f(i, j)_u / W

and

size_h(i, j) / size_H = f(i, j)_v / H

wherein f(i, j)_u is the transverse resolution of the texture unit corresponding to the sub-image in row i and column j, and f(i, j)_v is the longitudinal resolution of the texture unit corresponding to the sub-image in row i and column j; the size of each plane unit is thus obtained when the resolution of the sub-images and the size of the output frame are known.
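These two proportionality relations can be sketched directly (function name illustrative): each plane unit spans the same fraction of the output frame as its tile spans of the original image, which also guarantees that the plane-unit areas sum to the frame area.

```python
def plane_size(size_W, size_H, f_u, f_v, W, H):
    """Physical width and height of the plane unit for a tile of
    resolution f_u x f_v within a W x H original image, rendered into
    an output frame of physical size size_W x size_H."""
    return size_W * f_u / W, size_H * f_v / H
```

For a 9 × 8 tile of a 100 × 100 image in a 200 × 100 output frame, this gives a plane unit of 18 × 8 physical units.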
The position where the image is actually displayed and the characteristics of the display device are factors to be considered when setting the physical size of the output frame. For example, if a high resolution image is to be displayed by multiple tiled screens, the size of the output frame is related to the overall size of the tiled screen. For another example, if a high resolution image is to be displayed by the VR terminal, the size of the output frame may need to be adjusted according to the sharpness requirements of the VR image and the range of the physical space in which it is viewed. In addition, when the pixels of the divided sub-images change, the size of the corresponding plane unit should be updated accordingly.
The high resolution image processing method includes a fifth step S5: and setting the center of the output frame, and obtaining the position of each plane unit in the output frame according to the position of the sub-image corresponding to each texture unit in the original image.
Specifically, step S5 may include the following steps:
first, the center position of the output frame is set and recorded as O (x) 0 ,y 0 ,z 0 ). For convenience, the center of each planar unit may be assumed firstThe position of the original image coincides with the center position of the output frame, and is marked as P (i, j) = O, and an image origin pixel of the original image is set and is located at the top left corner vertex position of the sub-image in the first row and the first column;
then, the offset of the center-point pixel of the sub-image corresponding to each texture unit from the image origin is calculated. For example, the offset of the center-point pixel of sub-image (i,j) from the image origin pixel of the original image is denoted g(i,j). Referring to figs. 1 and 2, g(i,j) satisfies the following relation (the case of sub-images of four resolutions is shown, where H%h and W%w denote the remainders of H divided by h and of W divided by w):
g(i,j) = ((j-1)w + w/2, (i-1)h + h/2),          i < m, j < n
g(i,n) = ((n-1)w + (W%w)/2, (i-1)h + h/2),      i < m
g(m,j) = ((j-1)w + w/2, (m-1)h + (H%h)/2),      j < n
g(m,n) = ((n-1)w + (W%w)/2, (m-1)h + (H%h)/2)
therefore, the offset of the center-point pixel of the sub-image corresponding to each texture unit from the center-point pixel of the original image is g(i,j) − (W/2, H/2);
then, the aspect ratio of the original image is maintained so that, in each group of corresponding plane unit and texture unit, the offset proportion of the plane unit relative to the center of the output frame equals the offset proportion of the center-point pixel of the corresponding sub-image, and the position of each plane unit is obtained from its offset relative to the center of the output frame.
In order to maintain the aspect ratio of the original image, the pixel offset ratio should equal the physical offset ratio, and thus:
Offset(i,j)_x / size_W = (g(i,j)_u − W/2) / W

Offset(i,j)_y / size_H = (g(i,j)_v − H/2) / H
where Offset(i,j) represents the offset of the plane unit corresponding to texture unit (i,j) relative to the center of the output frame; subscripts x and y denote the width-direction and height-direction components in the coordinate system of the physical size of the image, and subscripts u and v denote the width-direction and height-direction components in the pixel coordinate system.
After Offset(i,j) is obtained, the position of the plane unit corresponding to texture unit (i,j) may be expressed as P(i,j) = O − Offset(i,j). Thus, by calculating the offset of each plane unit's center position, the plane unit positions are updated to reproduce the relative positional relationship of the sub-images in the original image.
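The offset calculation of step S5 — the center-pixel offset g(i,j), the physical offset ratio, and the final position P(i,j) = O − Offset(i,j) — can be sketched as follows. All names are hypothetical, and the four-resolution case from the division step is handled via the remainder rule:

```python
import math

def plane_unit_positions(W, H, w, h, size_W, size_H, center):
    """Place each plane unit so that the physical offset ratio relative
    to the output frame center equals the pixel offset ratio of the
    corresponding sub-image's center pixel.
    center = (x0, y0, z0) is the output frame center O."""
    m, n = math.ceil(H / h), math.ceil(W / w)
    x0, y0, z0 = center
    positions = {}
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            # g(i,j): offset of the sub-image center pixel from the image origin;
            # the last row/column may hold a smaller (remainder) sub-image
            g_u = (j - 1) * w + (w if j < n or W % w == 0 else W % w) / 2
            g_v = (i - 1) * h + (h if i < m or H % h == 0 else H % h) / 2
            # physical offset keeps the same ratio as the pixel offset
            off_x = size_W * (g_u - W / 2) / W
            off_y = size_H * (g_v - H / 2) / H
            # P(i,j) = O - Offset(i,j)
            positions[(i, j)] = (x0 - off_x, y0 - off_y, z0)
    return positions
```

For a symmetric grid the positions come out mirrored about the frame center, reproducing the relative layout of the sub-images.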
Referring to fig. 1, the high-resolution image processing method includes a sixth step S6: drawing the texture unit corresponding to each plane unit on that plane unit, and rendering to obtain a readable image corresponding to the original image. Specifically, the plane units and texture units with the same number can be bound through the rendering platform, the corresponding texture unit drawn on each plane unit, and the rendering effect of the stitched image obtained through rendering.
In the above embodiment, the high-resolution original image is processed into the corresponding readable image through segmentation, splicing and rendering, and the readable image may be stored in a video memory or a frame buffer, so that the frame buffer is conveniently updated in real time and the readable image is output as the high-resolution image.
The high-resolution image processing method may be executed as computer instructions by a processor of a computer, or by a plurality of functional modules of the computer; accordingly, a computer, an image processing module, or the like having computation and image processing functions may be provided to perform the high-resolution image processing method and realize the same functions.
Fig. 3 is a schematic structural diagram of a high resolution image processing apparatus according to an embodiment of the present invention. Referring to fig. 3, in an embodiment of the present invention, a high resolution image processing apparatus 100 is provided. The apparatus comprises a processor 110 and a memory 120, the memory 120 being configured to store executable instructions of the processor 110, which when executed by the processor 110, perform the above-mentioned high resolution image processing method.
In one embodiment, the storage 120 may be the internal memory of a computer, and the high resolution raw image material to be processed is also stored in the storage 120. The processor 110 may perform operations such as parameter calculation and rendering control according to executed computer instructions, and the high resolution image processing apparatus 100 may further include a display card 130, where the display card 130 is configured to store texture resources and a set of plane units and texture units (referred to as rendering units) after rendering is completed. With regard to details of performing the above-described high-resolution image processing method, reference may be made to the description of the embodiments described above with regard to the high-resolution image processing method.
Fig. 4 is a schematic structural diagram of a high resolution image processing apparatus according to another embodiment of the present invention. Referring to fig. 4, in another embodiment of the present invention, there is provided a high resolution image processing apparatus 200, the high resolution image processing apparatus 200 including the following components:
a memory 210, in which image materials are stored, and the image materials include at least one frame of original image with high resolution;
a dividing module 220, configured to divide a frame of original image to be processed into a plurality of sub-images arranged according to numbers, and import the sub-images into a rendering platform, where the resolution of each sub-image does not exceed the maximum size of a single texture supported by the rendering platform;
a rendering platform 230 configured to create and render a plurality of texture units, each of the texture units created by the rendering platform corresponding to each of the sub-images in a one-to-one manner and having the same resolution;
a matching module 240 configured to set a size of an output frame and create plane units corresponding to the texture units one to one, wherein a sum of areas of all the plane units is equal to an area of the output frame, and an area of each plane unit is proportional to a pixel number of the corresponding texture unit;
a stitching module 250 configured to set a center of the output frame, and obtain a position of each plane unit in the output frame according to a position of a sub-image corresponding to each texture unit in the original image;
a video memory 260 configured to store the plurality of texture units and a readable image corresponding to the original image, wherein the readable image is obtained by drawing the texture unit corresponding to the plane unit on each plane unit and performing rendering.
Further, the segmentation module 220 may comprise a first analysis submodule configured to set a standard texture scale having a resolution w × h, where w is the horizontal resolution and h is the vertical resolution, the standard texture scale not exceeding the maximum size of a single texture supported by image rendering; and a first calculation submodule configured to calculate H/h and W/w respectively, take the integer part of each result, and add 1 to the integer part if a fractional part exists, thereby dividing the original image into a plurality of sub-images distributed in m rows and n columns, where at least one of the sub-images has resolution w × h, W and H are the horizontal and vertical resolutions of the original image, respectively, and m and n are integers.
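The row/column calculation performed by the first calculation submodule — integer part of H/h and W/w, plus 1 when a fractional part exists — is ordinary ceiling division. A minimal sketch with assumed names, which also reports the four possible sub-image resolutions:

```python
import math

def split_grid(W, H, w, h):
    """W x H: original image resolution; w x h: standard texture scale.
    Returns the grid dimensions and a helper giving each sub-image's
    resolution (1-indexed row i, column j)."""
    m = math.ceil(H / h)  # rows: int(H/h) + 1 if a fractional part exists
    n = math.ceil(W / w)  # columns: int(W/w) + 1 if a fractional part exists

    def sub_res(i, j):
        # interior sub-images are w x h; the last row/column may be smaller
        f_u = w if j < n or W % w == 0 else W % w
        f_v = h if i < m or H % h == 0 else H % h
        return f_u, f_v

    return m, n, sub_res
```

For example, a 10000 × 9000 original with a 4096 × 4096 standard texture scale splits into a 3 × 3 grid whose bottom-right sub-image is 1808 × 808.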
The stitching module 250 may include a second analysis submodule and a second calculation submodule. The second analysis submodule is configured to set the center of the output frame and to set an image origin of the original image, located at the top-left vertex of the sub-image in the first row and first column. The second calculation submodule is configured to calculate the offset of the center-point pixel of the sub-image corresponding to each texture unit from the image origin, and to obtain the offset of that center-point pixel from the center-point pixel of the original image. The second analysis submodule is further configured to maintain the aspect ratio of the original image so that, in each group of corresponding plane unit and texture unit, the offset proportion of the plane unit relative to the center of the output frame equals the offset proportion of the center-point pixel of the corresponding sub-image, thereby obtaining the position of each plane unit from its offset relative to the center of the output frame.
The rendering platform is used for creating and drawing texture units; within the supported texture scale range, it can create textures for the imported original images and render according to the set parameters. The rendering platform may employ rendering tools known in the art, such as Unity or Unreal.
It can be seen that the memory, the video memory, the rendering platform, and the modules of the high-resolution image processing apparatus 200 correspond to the steps of the high-resolution image processing method in the foregoing embodiment, and can obtain the readable image corresponding to the high-resolution original image. Thus, with respect to each constituent part of the high-resolution image processing apparatus 200, it can be understood with reference to the description of the high-resolution image processing method of the foregoing embodiment. It should be noted that the above-described device embodiments are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts described as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution.
The high-resolution image processing apparatus in the different embodiments above may each include at least one display module interface configured to connect to a display module, the display module being configured to display each frame of the readable image in the output frame. The display module may be part of the high-resolution image processing apparatus, or a different component of the same display device; for example, the display module may be a tiled screen, a VR display assembly, or a projection display assembly that outputs and displays the readable image from the buffer space. The high-resolution image processing apparatus or display device may be a high-resolution projection display system, a high-resolution tiled display apparatus, a large-scale live-action game apparatus, an ultra-high-resolution geographic information system, a computer-aided design system, a VR (Virtual Reality) display system, an AR (Augmented Reality) display system, an MR (Mixed Reality) display system, or the like; since a high-resolution readable image can be obtained, the high-resolution image processing apparatus may serve as an integral part or component of various image processing systems or display devices.
The high-resolution image processing method and the high-resolution image processing device have the characteristic of self-adaptive parameter setting, and are favorable for fully exerting the performance of the display equipment. Meanwhile, the resolution of the loadable original image is not limited in the high-resolution image processing method and the high-resolution image processing device, and the obtained high-resolution readable image can simultaneously ensure the observability of the whole image and the clear display of the local details of the image. The size and position of the readable image can be adjusted according to the characteristics of the display module connected to the display module interface, and the value of the readable image has a guiding function when the actual display module is deployed.
According to the above description, the high resolution image processing method and apparatus in the above embodiments can be used for VR display. In an embodiment of the present invention, the VR image display method includes processing each frame of original image of a VR image material by using the high resolution image processing method, storing an obtained readable image in a cache, and displaying the readable image through a VR terminal worn by a user. Therefore, by adopting the high-resolution image processing method, the VR image display method stores the readable image with high resolution in the buffer space, and the image is very convenient to read and display due to the segmentation, splicing and rendering processing, thereby being beneficial to increasing the display frame rate and improving the display effect.
Because the VR image is to be displayed virtually, the VR terminal worn by the user is a device such as a VR headset or VR glasses. During viewing, the user is immersed in the scene built by the VR system and can change position and viewing direction within the scene. Therefore, the position of the virtual image generally needs to take into account the size of the real physical space in which the VR image is observed; if the distance between the virtual image and the user exceeds the depth (or length) of the real physical space, a corresponding prompt needs to be set for when the user walks, to prevent the user from walking into the boundary of the real physical space and degrading the VR experience.
Therefore, in the VR image display method according to an embodiment, before each frame of original image of the VR image material is processed by the high resolution image processing method, a real physical space for observing the VR image is further set, and a distance between the VR image and the user and a size of the VR image are set according to a size of the real physical space and a requirement of image definition.
Fig. 5 is a schematic diagram of a real physical space and a VR image in a VR image display method according to an embodiment of the invention. Fig. 6 is a flowchart illustrating a VR image display method according to an embodiment of the present invention. A VR image display method according to an embodiment of the present invention is described below with reference to fig. 5 and 6.
As shown in fig. 5, in the VR image display method, a user uses a VR terminal such as a VR headset to observe a VR image with a resolution W × H in a real physical space with a depth (or length), a width, and a height X, Y, and Z, respectively.
Setting the initial observation position of the user in the real physical space as the zero point of the length direction, and setting the observation direction of the user as the length direction, so that the VR image is displayed on the screen which is at a certain distance from the user in the length direction of the real physical space. The invention is not limited to this, and the position and distance of the VR image can be changed during the viewing process, so that the position of the user can be updated and the distance between the VR image and the user can be updated before the original image of the VR image material is selected for processing. The following description will be given by taking the case where the VR image shown in fig. 5 is located in the longitudinal direction as an example.
In order to make full use of the display resolution of the VR terminal and ensure the sharpness of the viewed VR image, it is preferable that the DPI value of the screen displaying the VR image (abbreviated as the VR screen) is not less than the DPI value of the display in the VR terminal. Specifically, let the resolution of an original image in the VR image material be W × H, with W the horizontal resolution and H the vertical resolution; let the DPI of the VR screen be D_screen and the DPI of the display in the VR terminal (such as a helmet display, for which a silicon-based micro display may be used) be D_head; then D_screen ≥ D_head is required. Let the width of the VR screen be size_W and the height be size_H; then:
D_screen = W / size_W = H / size_H
In order to ensure that no distortion occurs in the VR image, size_W/size_H = W/H needs to be satisfied. Using D_screen ≥ D_head, the physical width and height of the VR image are found to satisfy a first limiting condition:
size_W ≤ W / D_head and size_H ≤ H / D_head
In addition, in order to observe the details of the VR image material displayed on the VR screen clearly, it is generally necessary to enlarge the VR image material by a certain magnification. The physical width of the original image is denoted size_W_0, its height size_H_0, and the magnification z_0; in general, z_0 is set greater than 1. If the original size of the collected VR image material is very large, the maximum of the first limiting condition may be reached or even exceeded, i.e.
z_0 × size_W_0 ≥ W / D_head
In that case, the size of the VR screen can be set to the maximum of the limited range of the VR image. Conversely, if the original width of the original image is small, it is preferably enlarged, i.e., the width and height of the VR image satisfy a second limiting condition while satisfying the first: size_W = z_0 × size_W_0 and size_H = z_0 × size_H_0, where z_0 may be any floating-point number greater than 1.
Specifically, when the physical size of the VR image is calculated, since the aspect ratio of the image is constant, the height may be adaptively adjusted according to the condition that the aspect ratio is not changed after the width of the VR image is obtained.
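A small sketch of this sizing rule, combining the first limiting condition size_W ≤ W/D_head with the magnification z_0 and the fixed aspect ratio; all names are assumptions for illustration:

```python
def vr_image_size(W, H, D_head, size_W0, z0):
    """W x H: original image resolution; D_head: DPI of the VR terminal
    display; size_W0: physical width of the original material; z0 > 1:
    magnification. Returns the physical VR image size (width, height)."""
    max_width = W / D_head   # first limiting condition: D_screen >= D_head
    desired = z0 * size_W0   # second limiting condition: magnified width
    size_W = min(desired, max_width)  # clamp when the material is very large
    size_H = size_W * H / W  # keep aspect ratio: size_W / size_H = W / H
    return size_W, size_H
```

The height is derived from the width, matching the text's note that the height is adaptively adjusted under the constant aspect ratio.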
In order to ensure that the entire overview of the screen image can be observed in a given space, the distance between the screen and the user must satisfy a certain limit. Let the horizontal field angle of the VR headset be FOV_h, the vertical field angle FOV_v, and the distance between the user and the VR screen h; the following distance constraint then needs to be satisfied (h_h and h_v are the viewing distances corresponding to the boundaries of the horizontal and vertical field angles, respectively):
h_h = size_W / (2 × tan(FOV_h / 2))

h_v = size_H / (2 × tan(FOV_v / 2))

h ≥ max(h_v, h_h)
Since the range of motion in the real physical space is limited and its depth (length) is X, according to the above analysis: if max(h_v, h_h) = X, the distance constraint can be satisfied simply by placing the screen of the VR image on the corresponding boundary of the real physical space (e.g., the wall of the room); if max(h_v, h_h) < X, the distance constraint can be satisfied by placing the screen of the VR image within the wall; and if max(h_v, h_h) > X, as shown in fig. 5, a user inside the real physical space cannot satisfy the distance constraint, and the screen of the VR image needs to be placed at a certain distance beyond the wall, namely Δh = max(h_v, h_h) − X. In this case, the VR terminal preferably sets a marker or alert at the boundary of the real physical space to warn the user when moving toward the VR image.
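The distance constraint and the extra distance Δh can be computed as below; this is an illustrative sketch with assumed names, taking the field angles in degrees:

```python
import math

def vr_viewing_distance(size_W, size_H, fov_h_deg, fov_v_deg, X):
    """size_W x size_H: physical VR screen size; fov_*_deg: horizontal and
    vertical field angles of the headset in degrees; X: depth of the real
    physical space. Returns (minimum viewing distance, extra distance dh
    by which the screen must be pushed past the room boundary)."""
    h_h = size_W / (2 * math.tan(math.radians(fov_h_deg) / 2))
    h_v = size_H / (2 * math.tan(math.radians(fov_v_deg) / 2))
    h_min = max(h_v, h_h)
    dh = max(0.0, h_min - X)  # 0 means the screen fits within the room depth
    return h_min, dh
```

When dh > 0, the screen sits dh beyond the wall and a boundary warning for the user would be appropriate, as described above.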
In addition, in order to observe details at the left and right edges of the screen when the screen is large, the width of the VR image is set not to exceed the width of the real physical space, i.e., size_W ≤ Y, and the height of the VR image is adaptively adjusted under the condition that the aspect ratio is constant.
Through the process, the distance between the VR image and the user and the size of the VR image are determined according to the size of the real physical space and the requirement of the image definition. And then, setting a display frame rate, processing each frame of original image of the VR image material by adopting the high-resolution image processing method, splicing and rendering the VR image, obtaining a readable VR image, updating frame buffer in real time, and displaying on a VR helmet.
In an embodiment of the present invention, a VR device includes a VR terminal worn by a user, a VR image processor, and a VR memory, where the VR memory is configured to store executable instructions of the VR image processor, and when the executable instructions are executed by the VR image processor, the VR image display method is performed. The components of the VR device may be separate parts; for example, the VR terminal may be part or all of a VR headset or VR glasses, while the VR image processor and the VR memory are disposed on a computer. The invention is not limited thereto: the VR device may also be a VR all-in-one machine, with the VR terminal, the VR image processor, and the VR memory each performing part of its functions.
In the VR image display method and the VR device in the above embodiments, in order to display a high-resolution VR image material, the high-resolution image processing method is adopted, an original image is segmented, spliced, and rendered, and an obtained readable image is stored in a cache, so that reading and real-time updating are facilitated, and improvement of image quality of VR display is facilitated. In addition, the high-resolution images are segmented, spliced and rendered on the rendering platform and displayed through the VR terminal, so that compared with a system for splicing a display card and a display, the system has the advantages that the complexity is simplified, the hardware cost is reduced, the display frame rate is increased, and the display effect is improved.
The above description covers only preferred embodiments of the present invention and is not intended to limit the scope of its claims. Any person skilled in the art may make possible variations and modifications to the technical solutions of the present invention using the methods and technical content disclosed above without departing from its spirit and scope; therefore, any simple modification, equivalent change, or refinement made to the above embodiments according to the technical essence of the present invention falls within the protection scope of the technical solutions of the present invention.

Claims (19)

1. A high resolution image processing method, comprising:
providing image materials, wherein the image materials comprise at least one frame of high-resolution original image;
dividing a frame of original image to be processed into a plurality of sub-images arranged according to numbers;
creating a plurality of texture units, wherein each texture unit corresponds to each sub-image one by one and has the same resolution;
setting the size of an output frame, and creating plane units corresponding to the texture units one by one, wherein the sum of the areas of all the plane units is equal to the area of the output frame, and the area of each plane unit is in direct proportion to the number of pixels of the corresponding texture unit;
setting the center of the output frame, and obtaining the position of each plane unit in the output frame according to the position of the sub-image corresponding to each texture unit in the original image; and
drawing the texture unit corresponding to each plane unit on that plane unit, and rendering to obtain a readable image corresponding to the original image.
2. The method according to claim 1, wherein the resolution of each frame of the original image is W x H, W is a lateral resolution, H is a longitudinal resolution, and at least one of W and H is greater than 8000.
3. The high resolution image processing method according to claim 2, wherein the method of dividing the original image of one frame to be processed into a plurality of sub-images arranged by numbers comprises:
setting a standard texture scale, wherein the resolution of the standard texture scale is w x h, w is the transverse resolution, h is the longitudinal resolution, and the standard texture scale does not exceed the maximum size of a single texture supported by image rendering; and
respectively calculating H/h and W/w, taking the integer part of each result, and adding 1 to the integer part if a fractional part exists, thereby dividing the original image into a plurality of sub-images distributed in m rows and n columns, wherein the resolution of at least one sub-image is w × h, and m and n are integers.
4. The high resolution image processing method according to claim 3, wherein the size of the output frame, the size of the planar unit, and the resolution of the texture unit corresponding to the planar unit satisfy the following two equalities:
size_w(i,j) = size_W × f(i,j)_u / W

and

size_h(i,j) = size_H × f(i,j)_v / H
wherein size_W is the width of the output frame, size_H is the height of the output frame, f(i,j)_u is the horizontal resolution of the texture unit corresponding to the sub-image in row i, column j, f(i,j)_v is the vertical resolution of that texture unit, size_w(i,j) is the width of the plane unit corresponding to the sub-image in row i, column j, size_h(i,j) is its height, i ∈ {1, 2, ..., m}, and j ∈ {1, 2, ..., n}.
5. The method for processing the high resolution image according to claim 1, wherein the method for obtaining the position of each plane unit in the output frame according to the position of the sub-image corresponding to each texture unit in the original image comprises:
setting an image origin pixel of the original image, wherein the image origin pixel is positioned at the top left corner vertex position of the sub-images in the first row and the first column;
calculating the offset of the center point pixel of the sub-image corresponding to each texture unit from the original point pixel of the image, and obtaining the offset of the center point pixel of the sub-image corresponding to each texture unit from the center point pixel of the original image; and
keeping the aspect ratio of the original image, so that, in each group of corresponding plane unit and texture unit, the offset proportion of the plane unit relative to the center of the output frame is the same as the offset proportion of the center-point pixel of the sub-image corresponding to the texture unit, thereby obtaining the position of each plane unit according to the offset relative to the center of the output frame.
6. The method of claim 1, wherein the output frame is adjustable in size and the standard texture scale is adjustable in resolution.
7. A high resolution image processing apparatus comprising a processor and a memory configured to store executable instructions of the processor, which when executed by the processor, perform the high resolution image processing method of any of claims 1 to 6.
8. A high-resolution image processing apparatus characterized by comprising:
the image processing system comprises an internal memory, a processing unit and a display unit, wherein image materials are stored in the internal memory and comprise at least one frame of high-resolution original image;
the device comprises a segmentation module, a processing module and a processing module, wherein the segmentation module is configured to segment a frame of original image to be processed into a plurality of sub-images arranged according to numbers;
a rendering platform configured to create and render a plurality of texture units, each of the texture units created by the rendering platform corresponding to each of the sub-images in a one-to-one manner and having the same resolution;
the matching module is configured to set the size of an output frame and create plane units corresponding to the texture units one by one, the sum of the areas of all the plane units is equal to the area of the output frame, and the area of each plane unit is in direct proportion to the number of pixels of the corresponding texture unit;
the splicing module is configured to set the center of the output frame and obtain the position of each plane unit in the output frame according to the position of the sub-image corresponding to each texture unit in the original image; and
a video memory configured to store the texture units and a readable image corresponding to the original image, wherein the readable image is obtained by drawing, on each plane unit, the texture unit corresponding to that plane unit, and rendering.
9. The high resolution image processing apparatus according to claim 8, wherein the segmentation module includes:
a first analysis submodule configured to set a standard texture scale having a resolution w x h, a lateral resolution w, and a longitudinal resolution h, the standard texture scale not exceeding a maximum scale of a single texture supported by image rendering; and
a first calculation submodule configured to calculate H/h and W/w, respectively, take the integer part of each result, and add 1 to the integer part if a fractional part exists, thereby dividing the original image into a plurality of sub-images distributed in m rows and n columns, wherein the resolution of at least one of the sub-images is w × h, W and H are the horizontal and vertical resolutions of the original image, respectively, and m and n are integers.
10. The high resolution image processing apparatus according to claim 8, wherein the stitching module includes:
the second analysis submodule is configured to set the center of the output frame and set an image origin of the original image, and the image origin is located at the top left vertex position of the sub-images in the first row and the first column; and
a second calculation submodule configured to calculate the offset of the center-point pixel of the sub-image corresponding to each texture unit from the image origin, obtain the offset of that center-point pixel from the center-point pixel of the original image, and maintain the aspect ratio of the original image, so that, in each group of corresponding plane unit and texture unit, the offset proportion of the plane unit relative to the center of the output frame is the same as the offset proportion of the center-point pixel of the sub-image corresponding to the texture unit, thereby obtaining the position of each plane unit according to the offset relative to the center of the output frame.
11. The apparatus according to claim 8, wherein said apparatus comprises at least one display module interface configured to connect to a display module configured to display each frame of said readable image in said output frame.
12. The high resolution image processing apparatus according to claim 11, wherein the display module is a tiled screen, a VR display assembly, or a projection display assembly.
13. A VR image display method, characterized in that each frame of original image of VR image material is processed by the high-resolution image processing method according to any one of claims 1 to 6, and the obtained VR readable image is stored in a buffer and displayed by a VR terminal worn by a user.
14. The VR image display method of claim 13, wherein prior to processing each frame of raw image of the VR image material, the VR image display method further comprises:
setting a real physical space for observing the VR image; and
and setting the distance between the VR image and the user and the size of the VR image according to the size of the real physical space and the requirement of the image definition.
15. The VR image display method of claim 14, wherein a DPI value of the VR image is not less than a DPI value of a display in the VR terminal.
16. The VR image display method of claim 15, wherein, with the resolution of the original image denoted W × H (W being the horizontal resolution and H the vertical resolution), the width size_W and the height size_H of the VR image satisfy the relationship size_W / size_H = W / H together with the relationship of Figure FDA0002125566070000051 (formula not reproduced in the text), wherein D_head is the DPI value of the display in the VR device; when the width of the original image is greater than or equal to the threshold of Figure FDA0002125566070000052, the width of the VR image is set to the value of Figure FDA0002125566070000053; when the width of the original image is less than the threshold of Figure FDA0002125566070000054, the original width is magnified to obtain the width of the VR image; and the height of the VR image is adjusted adaptively according to the width.
17. The VR image display method of claim 14, wherein the distance h between the VR image and the user in the real physical space satisfies the relationship of Figure FDA0002125566070000055 (formula not reproduced in the text), wherein FOV_h is the horizontal field angle of the VR terminal worn by the user, FOV_v is the vertical field angle of the VR terminal worn by the user, and h_v and h_h are the viewing distances corresponding to the boundaries of the vertical and horizontal field angles, respectively.
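The claim-17 formula itself is only available as an image; one standard geometric reading of a "viewing distance corresponding to the boundary of a field angle", assumed here and not confirmed by the patent text, is the pinhole relation h = (size / 2) / tan(FOV / 2):

```python
import math

def boundary_viewing_distance(size, fov_deg):
    # Assumed pinhole-model reading: the distance at which an image of
    # physical extent `size` exactly spans a field angle of `fov_deg`
    # degrees: h = (size / 2) / tan(FOV / 2).
    return (size / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

# Under this reading, h_h and h_v would follow from the VR image's
# physical width/height and the terminal's horizontal/vertical FOV:
h_h = boundary_viewing_distance(4.0, 100.0)   # e.g. size_W = 4 m, FOV_h = 100 deg
h_v = boundary_viewing_distance(2.25, 70.0)   # e.g. size_H = 2.25 m, FOV_v = 70 deg
```

Claim 18 then compares the depth of the real physical space against h_v and h_h to decide whether the VR image is displayed inside or outside the room.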
18. The VR image display method of claim 17, wherein, when the depth of the real physical space is greater than or equal to h_v and h_h, the VR image is displayed within the real physical space and the width of the real physical space is not less than the width size_W of the VR image; and when the depth of the real physical space is less than h_v and h_h, the VR image is displayed outside the real physical space.
19. A VR device comprising a VR terminal for wearing by a user, a VR image processor, and a memory configured to store executable instructions of the VR image processor, which when executed by the VR image processor, perform the VR image display method of any one of claims 13 to 18.
CN201910621305.7A 2019-07-10 2019-07-10 High-resolution image processing method and device, VR image display method and VR equipment Active CN110322424B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910621305.7A CN110322424B (en) 2019-07-10 2019-07-10 High-resolution image processing method and device, VR image display method and VR equipment


Publications (2)

Publication Number Publication Date
CN110322424A CN110322424A (en) 2019-10-11
CN110322424B true CN110322424B (en) 2022-12-09

Family

ID=68123155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910621305.7A Active CN110322424B (en) 2019-07-10 2019-07-10 High-resolution image processing method and device, VR image display method and VR equipment

Country Status (1)

Country Link
CN (1) CN110322424B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112306339B (en) * 2020-09-01 2022-08-12 北京沃东天骏信息技术有限公司 Method and apparatus for displaying image
CN113015007B (en) * 2021-01-28 2023-05-26 维沃移动通信有限公司 Video frame inserting method and device and electronic equipment
CN114049857B (en) * 2021-10-21 2024-02-06 江苏印象乾图文化科技有限公司 Naked-eye 3D electronic relic display stand for cultural-relics exhibition

Citations (3)

Publication number Priority date Publication date Assignee Title
CN103730097A (en) * 2013-12-27 2014-04-16 广东威创视讯科技股份有限公司 Method and system for displaying ultrahigh resolution images
CN103995684A (en) * 2014-05-07 2014-08-20 广东粤铁瀚阳科技有限公司 Method and system for synchronously processing and displaying mass images under ultrahigh resolution platform
WO2015153167A1 (en) * 2014-04-05 2015-10-08 Sony Computer Entertainment America Llc Varying effective resolution by screen location by altering rasterization parameters

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10713752B2 (en) * 2017-06-09 2020-07-14 Sony Interactive Entertainment Inc. Temporal supersampling for foveated rendering systems


Non-Patent Citations (1)

Title
High-resolution output of 3D electronic maps; Wang Rongfeng et al.; Journal of Geomatics Science and Technology; 2011-08-15 (No. 04); full text *


Similar Documents

Publication Publication Date Title
CN110322424B (en) High-resolution image processing method and device, VR image display method and VR equipment
US7570280B2 (en) Image providing method and device
CA2995665C (en) Image generating apparatus and image display control apparatus for a panoramic image
TWI542190B (en) Method and system for encoding a 3d image signal, encoded 3d image signal, method and system for decoding a 3d image signal
KR102121389B1 (en) Glassless 3d display apparatus and contorl method thereof
JP2009526488A5 (en)
CN111290581A (en) Virtual reality display method, display device and computer readable medium
CN115842907A (en) Rendering method, computer product and display device
CN108076208B (en) Display processing method and device and terminal
JP6708444B2 (en) Image processing apparatus and image processing method
CN110290285A (en) Image processing method, image processing apparatus, image processing system and medium
EP2858359A1 (en) Unpacking method, unpacking device and unpacking system of packed frame
CN103021295B (en) High-definition autostereoscopic display
JP7365185B2 (en) Image data transmission method, content processing device, head mounted display, relay device, and content processing system
KR101679122B1 (en) Method, device and system for packing and unpacking color frame and original depth frame
KR20170096801A (en) Method and apparatus for processing image in virtual reality system
JP6768431B2 (en) Image generator and program
CN105721816B (en) Display data processing method and display device
CN112929641B (en) 3D image display method and 3D display device
CN113568700B (en) Display picture adjusting method and device, computer equipment and storage medium
Uyen et al. Subjective evaluation of the 360-degree projection formats using absolute category rating
CN114860063B (en) Picture display method and device of head-mounted display device, electronic device and medium
JP6283297B2 (en) Method, apparatus and system for resizing and restoring original depth frame
CN113554659B (en) Image processing method, device, electronic equipment, storage medium and display system
CN112017138B (en) Image splicing method based on scene three-dimensional structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: A14-5, 13th Floor, Building A, Building J1, Phase II, Innovation Industrial Park, No. 2800, Chuangxin Avenue, High-tech Zone, Hefei City, Anhui Province, 230088

Applicant after: Anhui aiguan Vision Technology Co.,Ltd.

Address before: Room 305, Block E, 492 Anhua Road, Changning District, Shanghai 200050

Applicant before: SHANGHAI EYEVOLUTION TECHNOLOGY Co.,Ltd.

GR01 Patent grant