CN110992250A - Method and device for realizing high-resolution display - Google Patents

Method and device for realizing high-resolution display

Info

Publication number
CN110992250A
CN110992250A (Application No. CN201911200521.0A)
Authority
CN
China
Prior art keywords
image
viewer
stretching
input image
area
Prior art date
Legal status
Pending
Application number
CN201911200521.0A
Other languages
Chinese (zh)
Inventor
耿立华
马希通
李咸珍
Current Assignee
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd
Priority to CN201911200521.0A
Publication of CN110992250A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/32 Indexing scheme for image data processing or generation, in general, involving image mosaicing

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

Disclosed herein is a method of implementing a high-resolution display, comprising: dividing an input image into a viewer attention region and a viewer non-attention region, and segmenting the input image by region according to the division result; stretching the viewer attention region portion of the input image with a first stretching algorithm to obtain a first stretched image, and stretching the viewer non-attention region portion with a second stretching algorithm to obtain a second stretched image, the image stretching quality of the first stretching algorithm being higher than that of the second; and stitching the first stretched image and the second stretched image to obtain an output image. This technical scheme allows a low-resolution film-source image to be displayed on a high-resolution display while balancing image display quality against computing-resource consumption.

Description

Method and device for realizing high-resolution display
Technical Field
The present invention relates to the field of display technology, and in particular to a method and a device for realizing high-resolution display.
Background
At present, viewers' demands on image quality keep rising. Constrained by data-transmission bandwidth, a film source may carry only low-resolution image data, which is then scaled in the driving circuit of the display device into a high-resolution image for output and display.
Nowadays, ultra-high-resolution (4K, 8K) display technology is increasingly mature and ultra-high-resolution displays are increasingly common, yet many current film sources are still of lower resolution (2K, 4K). When such a source is connected to an ultra-high-resolution display through a video interface, the image must first be stretched inside the display before it is shown, for example from 2K to 4K or from 4K to 8K.
The type of algorithm used for image stretching affects the quality of the stretched image (jaggedness, definition, sharpness, and so on). A high-quality stretching algorithm yields a better result after stretching, while a simple algorithm yields only an average result; however, the high-quality algorithm is more complex and consumes more computing resources and power during implementation.
Disclosure of Invention
Embodiments of the invention provide a method and a device for realizing high-resolution display, which can display a low-resolution film-source image on a high-resolution display while balancing image display quality against computing-resource consumption.
According to a first aspect of the present application, an embodiment of the present invention provides a method for implementing high resolution display, including:
dividing an input image into a viewer attention region and a viewer non-attention region, and segmenting the input image by region according to the division result;
stretching the viewer attention region portion of the input image with a first stretching algorithm to obtain a first stretched image, and stretching the viewer non-attention region portion of the input image with a second stretching algorithm to obtain a second stretched image, wherein the image stretching quality of the first stretching algorithm is higher than that of the second stretching algorithm;
and stitching the first stretched image and the second stretched image to obtain an output image.
According to a second aspect of the present application, an embodiment of the present invention provides an apparatus for implementing high-resolution display, including:
a region division and segmentation module, configured to divide an input image into a viewer attention region and a viewer non-attention region and to segment the input image by region according to the division result;
an image stretching module, configured to stretch the viewer attention region portion of the input image with a first stretching algorithm to obtain a first stretched image, and to stretch the viewer non-attention region portion with a second stretching algorithm to obtain a second stretched image, wherein the image stretching quality of the first stretching algorithm is higher than that of the second stretching algorithm;
and an image stitching module, configured to stitch the first stretched image and the second stretched image to obtain an output image.
Compared with the prior art, the method and the device for realizing high-resolution display provided by the embodiments of the invention divide the input image into a viewer attention region and a viewer non-attention region, segment the input image by region according to the division result, stretch the viewer attention region portion with a first stretching algorithm to obtain a first stretched image, stretch the viewer non-attention region portion with a second stretching algorithm to obtain a second stretched image, the first algorithm having higher image stretching quality than the second, and stitch the two stretched images into the output image. Because the area a viewer actually watches is limited, the embodiments apply the stretching algorithm with high image stretching quality (occupying more computing resources) only to the viewer attention region and the algorithm with lower quality (occupying fewer computing resources) to the viewer non-attention region, so that a low-resolution film-source image is displayed on a high-resolution display while image display quality and computing-resource consumption are both taken into account.
Drawings
Fig. 1 is a flowchart of a method for implementing high resolution display according to embodiment 1 of the present invention;
fig. 2 is a schematic diagram illustrating a division of a viewer attention area and a viewer non-attention area in embodiment 1 of the present invention;
fig. 3 is a schematic diagram of edge filling of an image region in embodiment 1 of the present invention;
FIG. 4 is a schematic diagram of image region stretching, stitching and smooth filtering in embodiment 1 of the present invention;
fig. 5 is a schematic diagram of an apparatus for implementing high resolution display according to embodiment 2 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be noted that the embodiments and the features of the embodiments in the present application may be combined with one another as long as they do not conflict.
The technical solution of the embodiments of the invention takes into account that the area a viewer actually watches is limited: a stretching algorithm with high image stretching quality (occupying more computing resources) is used in the viewer attention region, while a stretching algorithm with lower quality (occupying fewer computing resources) is used in the region the viewer is not watching, so that high-resolution display is achieved while the computing resources occupied by the algorithms, and the power consumption, are reduced as much as possible.
Example 1
As shown in fig. 1, an embodiment of the present invention provides a method for implementing high resolution display, including:
step S110, dividing a region concerned by a viewer and a region not concerned by the viewer on an input image, and dividing the input image by regions according to a region division result;
step S120, carrying out image stretching on the attention area part of the viewer of the input image by utilizing a first stretching algorithm to obtain a first stretched image, and carrying out image stretching on the non-attention area part of the viewer of the input image by utilizing a second stretching algorithm to obtain a second stretched image; the image stretching quality of the first stretching algorithm is higher than that of the second stretching algorithm;
and step S130, splicing the first stretching image and the second stretching image to obtain an output image.
In the above embodiment, the input image is divided into a viewer attention region and a viewer non-attention region and segmented by region according to the division result; the viewer attention region portion is stretched with a first stretching algorithm of high image stretching quality to obtain a first stretched image, the viewer non-attention region portion is stretched with a second stretching algorithm of lower quality to obtain a second stretched image, and the two stretched images are stitched into the output image. In general, an algorithm with high image stretching quality consumes more computing resources and power; by using the first algorithm (more resource-intensive) only in the region the viewer is watching and the second algorithm (less resource-intensive) in the region the viewer is not watching, this processing scheme achieves high-resolution display while keeping the computing resources occupied by the algorithms, and the power consumption, as low as possible.
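The following is a minimal, illustrative sketch of this pipeline rather than the patented implementation itself; it assumes OpenCV and NumPy are available and that the attention region is a vertical strip of the input, and the function name and parameters are chosen for illustration only.

```python
import cv2
import numpy as np

def stretch_by_region(img: np.ndarray, x0: int, x1: int, scale: int = 2) -> np.ndarray:
    """Upscale the attention strip [x0, x1) with a higher-quality interpolator
    (bicubic) and the remaining strips with bilinear, then stitch the results."""
    h, w = img.shape[:2]
    parts = []
    for start, end in [(0, x0), (x0, x1), (x1, w)]:
        if end <= start:
            continue  # the attention strip may touch a boundary, leaving no strip here
        interp = cv2.INTER_CUBIC if (start, end) == (x0, x1) else cv2.INTER_LINEAR
        strip = np.ascontiguousarray(img[:, start:end])
        parts.append(cv2.resize(strip, (scale * (end - start), scale * h),
                                interpolation=interp))
    return np.hstack(parts)  # stitch the stretched strips into one output image

# Example: a 1920x1080 frame stretched to 3840x2160, middle third as attention region
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
print(stretch_by_region(frame, 640, 1280).shape)  # (2160, 3840, 3)
```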
In step S110, in an exemplary embodiment, dividing the viewer attention region and the viewer non-attention region on the input image and segmenting the input image by region according to the division result includes:
determining the attention center of the input image according to the viewing focus position on the screen, expanding from the attention center towards both sides by an expansion ratio corresponding to the viewing distance, and designating the expanded image area as the viewer attention region, wherein the expanded image area does not exceed the boundary of the input image;
dividing the remaining part of the input image outside the viewer attention region into one or two viewer non-attention regions, wherein each viewer non-attention region is a connected area.
For example, when remaining regions exist on both the left and the right of the viewer attention region, two viewer non-attention regions may be divided, one to the left and one to the right of the attention region. When a remaining region exists on only one side of the viewer attention region, only one viewer non-attention region is divided.
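As a hedged illustration of this division step (the names and the purely horizontal split are simplifying assumptions), the attention window can be clamped to the image and the leftover strips collected as connected non-attention regions:

```python
def split_regions(width: int, cx: int, half_w: int):
    """Return the attention strip around cx, clamped to [0, width), plus the
    zero, one, or two leftover strips that become viewer non-attention regions."""
    x0, x1 = max(0, cx - half_w), min(width, cx + half_w)
    non_attention = [(s, e) for s, e in [(0, x0), (x1, width)] if e > s]
    return (x0, x1), non_attention

# The attention window reaches the left boundary, so only one leftover strip remains.
print(split_regions(1920, 480, 960))  # ((0, 1440), [(1440, 1920)])
```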
The viewing focus position on the screen can be obtained by mounting a binocular camera on the panel of the display screen, or a binocular camera together with an infrared light source arranged coaxially with it. Depth information is extracted from the portrait captured by the binocular camera, the viewer's gaze direction is determined with existing gaze-tracking technology, and the position where the viewing focus falls on the screen is then calculated from the gaze direction and the viewer's viewing distance from the screen. The binocular camera and the infrared light source may be mounted at the middle of the top edge of the display screen.
The position where the viewing focus falls on the screen when the viewer looks straight ahead may be taken as the center position of the screen. When the viewer rotates the eyeballs to the left (or right), the viewing focus shifts to the left (or right) of the center position; when the viewer rotates the eyeballs upward (or downward), the viewing focus shifts above (or below) the center position. After the eyeball rotation direction has been determined with existing gaze-tracking technology, the direction and distance by which the viewing focus deviates from the screen center can be determined by combining it with the viewing distance obtained by the binocular camera.
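A simple geometric reading of this step (the pinhole-style model below is an assumption, not a formula given in the patent): with the viewer at distance d from the screen and a horizontal gaze angle theta away from straight ahead, the focus lands roughly d*tan(theta) to the side of the screen center.

```python
import math

def focus_offset_mm(viewing_distance_mm: float, gaze_angle_deg: float) -> float:
    """Horizontal offset of the viewing focus from the screen center."""
    return viewing_distance_mm * math.tan(math.radians(gaze_angle_deg))

print(round(focus_offset_mm(600.0, 10.0), 1))  # about 105.8 mm to the side at 0.6 m
```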
The viewing distance of the viewer can be determined by extracting depth information from the portrait captured by the binocular camera: the distance follows from the parallax between the images acquired by the two cameras, a smaller parallax indicating a larger distance and a larger parallax a smaller distance.
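For illustration, the standard pinhole stereo relation captures this inverse parallax/distance relationship; the symbols are generic (focal length f in pixels, baseline B, disparity d) and are not taken from the patent.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Z = f * B / d: a larger disparity means the viewer is closer, a smaller one farther."""
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(1000.0, 0.06, 40.0))  # 1.5 (meters)
```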
Existing gaze-tracking technology mainly concerns the acquisition, modeling and simulation of eyeball-movement information. The equipment for acquiring eyeball-movement information is either an infrared light source plus an image acquisition device, or an image acquisition device alone. When an infrared light source and an image acquisition device are used, the infrared light source actively projects a beam such as infrared light onto the eyes (irises), the eyes reflect the light, the camera captures the image, an image-analysis algorithm extracts the reflected light spots, and eye-rotation information is derived from the change of those spots. The infrared-projection approach has an accuracy advantage, being accurate to within 1 cm on a 30-inch screen. Gaze tracking can also be realized, with software support, using only an image acquisition device such as a camera. Existing gaze-tracking technology generally includes image acquisition, image preprocessing, gaze-parameter detection, pupil tracking, gaze-tracking system calibration, gaze-direction calculation, and the like.
In step S110, in an exemplary embodiment, determining the attention center of the input image according to the viewing focus position on the screen includes:
determining the ratio of the viewing focus position on the screen to the entire screen width, and determining the position of the attention center along the width of the input image according to that ratio, wherein the relative position of the viewing focus along the screen width is the same as the relative position of the attention center along the input-image width; or
determining the ratio of the viewing focus position on the screen to the entire screen height, and determining the position of the attention center along the height of the input image according to that ratio, wherein the relative position of the viewing focus along the screen height is the same as the relative position of the attention center along the input-image height.
for example, when the position ratio of the viewing focus position on the screen with respect to the entire width of the screen is 1/3, the relative position of the center of attention in the width direction of the input image is also 1/3 of the entire width. Alternatively, when the positional ratio of the viewing focus position on the screen with respect to the entire height of the screen is 1/3, the relative position of the center of attention in the height direction of the input image is also 1/3 of the entire height.
In step S110, in an exemplary embodiment, the expansion ratio corresponding to the viewing distance is the percentage R% of the width of the expansion area relative to the entire width of the input image, or the percentage R% of the height of the expansion area relative to the entire height of the input image.
the R% can be determined using the following formula:
R%=a*(1/2N)*l*100% (1);
Figure BDA0002295767240000061
wherein a is an expansion coefficient, a is more than 0 and less than or equal to 1, N is the maximum value of the effective viewing distance, l is the effective viewing distance from a viewer to the display screen, l is more than 0 and less than or equal to N, and s is the viewing distance when the viewer views the display screen;
The closer the viewer's viewing distance, the smaller the region of attention and therefore the smaller the area that needs to be expanded; the farther the viewing distance, the larger the region of attention and the larger the area that needs to be expanded.
For example, when a equals 1 and the actual viewing distance of the viewer is N, the expansion ratio is 50%. As shown in fig. 2, assuming the attention center of the input image lies at about 1/4 of the input-image width, the region is expanded from the attention center by 50% of the entire input-image width to each side. In fig. 2, area A is the viewer attention region; since area A already reaches the left boundary of the input image, the remaining area B is divided into a single viewer non-attention region.
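A small worked example of formulas (1) and (2) follows; the variable names and the helper function are illustrative only.

```python
def expansion_ratio(a: float, s: float, n_max: float) -> float:
    """R as a fraction of the input width expanded to each side of the attention center."""
    l = min(s, n_max)             # effective viewing distance per formula (2), 0 < l <= N
    return a * l / (2.0 * n_max)  # formula (1) without the *100% percentage scaling

print(expansion_ratio(1.0, 3.0, 3.0))  # 0.5, i.e. the 50% expansion in the example above
```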
In step S110, in an exemplary embodiment, after the input image is segmented by region according to the division result, the method further includes:
when the viewer non-attention regions are located on the left and right sides of the viewer attention region, for any segmented image region, adding, at its segmentation edge, one or more columns of pixels from the adjacent image region to that region to generate an edge-filled image region; or
when the viewer non-attention regions are located above and below the viewer attention region, for any segmented image region, adding, at its segmentation edge, one or more rows of pixels from the adjacent image region to that region to generate an edge-filled image region.
as shown in fig. 3, when the viewer non-attention regions (B1 and B2) are located on both left and right sides of the viewer attention region (a), one or more columns of pixels adjacent to the region B1 in the region a are added to the region B1, generating a first viewer non-attention region (B1') after being bordered; adding one or more columns of pixels adjacent to the area a in the area B1 to the area a, and adding one or more columns of pixels adjacent to the area a in the area B2 to the area a to generate a viewer attention area (a') after edge filling; adding one or more columns of pixels in region a adjacent to region B2 to region B2, generating a second viewer non-attention region (B2') after edge filling;
in step S120, in an exemplary embodiment, the image stretching the viewer attention area portion of the input image by using a first stretching algorithm to obtain a first stretched image, and the image stretching the viewer non-attention area portion of the input image by using a second stretching algorithm to obtain a second stretched image, includes:
stretching the attention area part of a viewer of the input image in equal proportion by using a first stretching algorithm; stretching the non-attention area part of the viewer of the input image in equal proportion by using a second stretching algorithm; the equal proportion stretching means that the stretching ratio in the width direction is the same as that in the height direction; or
As shown in fig. 4, stretching the viewer attention area part of the input image after edge repairing in equal proportion by using a first stretching algorithm; stretching the non-attention area part of the viewer of the input image after edge mending in an equal proportion by utilizing a second stretching algorithm; the equal proportion stretching means that the stretching ratio in the width direction is the same as that in the height direction;
in step S120, the first stretching algorithm, such as gradient stretching and BQbek algorithm, is complex, generally uses more computing resources, consumes higher power consumption, but the stretched image has higher quality, which means that the stretched image is clearer, the lines are smoother, and the jaggies are less. The second stretching algorithm, such as a bilinear stretching algorithm, is simpler, generally uses less computing resources, consumes less power, and has lower stretched image quality.
In step S130, in an exemplary embodiment, after the first stretched image and the second stretched image are stitched to obtain the output image, the method further includes:
performing smoothing filtering on the output image.
As shown in fig. 4, the main purpose of the smoothing filtering is to make the edges at the stitching positions between images obtained with different stretching algorithms smoother, and to eliminate the seams introduced when the input image was segmented.
Example 2
As shown in fig. 5, an embodiment of the present invention provides an apparatus for implementing high resolution display, including:
a region division and segmentation module 10, configured to divide an input image into a viewer attention region and a viewer non-attention region and to segment the input image by region according to the division result;
an image stretching module 20, configured to stretch the viewer attention region portion of the input image with a first stretching algorithm to obtain a first stretched image, and to stretch the viewer non-attention region portion with a second stretching algorithm to obtain a second stretched image, the image stretching quality of the first stretching algorithm being higher than that of the second;
and an image stitching module 30, configured to stitch the first stretched image and the second stretched image to obtain an output image.
In an exemplary embodiment, the region division and segmentation module is configured to divide the input image into a viewer attention region and a viewer non-attention region and to segment the input image by region according to the division result in the following manner: determining the attention center of the input image according to the viewing focus position on the screen, expanding from the attention center towards both sides by an expansion ratio corresponding to the viewing distance, and designating the expanded image area as the viewer attention region, wherein the expanded image area does not exceed the boundary of the input image; and dividing the remaining part of the input image outside the viewer attention region into one or two viewer non-attention regions, wherein each viewer non-attention region is a connected area.
In an exemplary embodiment, the region division and segmentation module is configured to determine the attention center of the input image from the viewing focus position on the screen in the following manner: determining the ratio of the viewing focus position on the screen to the entire screen width, and determining the position of the attention center along the width of the input image according to that ratio, wherein the relative position of the viewing focus along the screen width is the same as the relative position of the attention center along the input-image width; or determining the ratio of the viewing focus position on the screen to the entire screen height, and determining the position of the attention center along the height of the input image according to that ratio, wherein the relative position of the viewing focus along the screen height is the same as the relative position of the attention center along the input-image height.
In an exemplary embodiment, the expansion ratio corresponding to the viewing distance is the percentage R% of the width of the expansion region relative to the entire width of the input image, or the percentage R% of the height of the expansion region relative to the entire height of the input image.
In an exemplary embodiment, R% is determined using the following formulas:
R% = a * (1/(2N)) * l * 100%   (1)
l = s, when 0 < s ≤ N;  l = N, when s > N   (2)
wherein a is an expansion coefficient with 0 < a ≤ 1, N is the maximum value of the effective viewing distance, l is the effective viewing distance from the viewer to the display screen with 0 < l ≤ N, and s is the actual viewing distance at which the viewer watches the display screen.
In an exemplary embodiment, the apparatus further comprises an image edge-filling module 40;
the image edge-filling module is configured to, when the viewer non-attention regions are located on the left and right sides of the viewer attention region, add, at the segmentation edge of any segmented image region, one or more columns of pixels from the adjacent image region to that region to generate an edge-filled image region; or, when the viewer non-attention regions are located above and below the viewer attention region, add, at the segmentation edge of any segmented image region, one or more rows of pixels from the adjacent image region to that region to generate an edge-filled image region.
In an exemplary embodiment, the image stretching module is configured to stretch the viewer attention region portion of the input image with the first stretching algorithm to obtain the first stretched image and to stretch the viewer non-attention region portion with the second stretching algorithm to obtain the second stretched image in the following manner: stretching the edge-filled viewer attention region portion of the input image proportionally with the first stretching algorithm, and stretching the edge-filled viewer non-attention region portion proportionally with the second stretching algorithm, wherein proportional stretching means that the stretching ratio in the width direction is the same as the stretching ratio in the height direction.
In an exemplary embodiment, the apparatus further comprises an image smoothing filtering module 50;
the image smoothing filtering module is configured to perform smoothing filtering on the output image.
It should be noted that the present invention can be embodied in other specific forms, and various changes and modifications can be made by those skilled in the art without departing from the spirit and scope of the invention.

Claims (10)

1. A method of implementing a high resolution display, comprising:
dividing an input image into a viewer attention region and a viewer non-attention region, and segmenting the input image by region according to the division result;
stretching the viewer attention region portion of the input image with a first stretching algorithm to obtain a first stretched image, and stretching the viewer non-attention region portion of the input image with a second stretching algorithm to obtain a second stretched image, wherein the image stretching quality of the first stretching algorithm is higher than that of the second stretching algorithm;
and stitching the first stretched image and the second stretched image to obtain an output image.
2. The method of claim 1, wherein:
the dividing of the viewer attention area and the viewer non-attention area on the input image, the dividing of the input image by areas according to the area division result, includes:
determining a focus center of an input image according to a viewing focus position on a screen, expanding the focus center to two sides according to an expansion ratio corresponding to a viewing distance, and dividing an expanded image area into a viewer focus area; wherein the expanded image area does not exceed the boundary of the input image;
dividing the remaining image on the input image without the attention area of the viewer into one or two non-attention areas of the viewer; wherein each viewer non-attention area is a connected area.
3. The method of claim 2, wherein:
the determining a center of attention of an input image according to a viewing focus position on a screen includes:
determining the position proportion of a viewing focus position on a screen relative to the whole width of the screen, and determining the position of the focus center of an input image in the width direction of the input image according to the position proportion; wherein a relative position of the viewing focus position in a width direction of a screen is the same as a relative position of the focus center in the width direction of the input image; or
Determining the position proportion of a viewing focus position on a screen relative to the whole height of the screen, and determining the position of a focus center of an input image in the height direction of the input image according to the position proportion; wherein a relative position of the viewing focus position in a height direction of a screen is the same as a relative position of the focus center in the height direction of the input image.
4. The method of claim 3, wherein:
the expansion ratio corresponding to the viewing distance is a percentage R% of a width of an expansion region to an entire width of the input image, or a percentage R% of a height of an expansion region to an entire height of the input image.
5. The method of claim 4, wherein:
the R% is determined using the following formula:
R%=a*(1/2N)*l*100% (1);
Figure FDA0002295767230000021
wherein a is an expansion coefficient, a is more than 0 and less than or equal to 1, N is the maximum value of the effective viewing distance, l is the effective viewing distance from a viewer to the display screen, l is more than 0 and less than or equal to N, and s is the viewing distance when the viewer views the display screen.
6. The method of claim 1, wherein:
after the input image is segmented by region according to the division result, the method further comprises:
when the viewer non-attention regions are located on the left and right sides of the viewer attention region, for any segmented image region, adding, at its segmentation edge, one or more columns of pixels from the adjacent image region to that region to generate an edge-filled image region; or
when the viewer non-attention regions are located above and below the viewer attention region, for any segmented image region, adding, at its segmentation edge, one or more rows of pixels from the adjacent image region to that region to generate an edge-filled image region.
7. The method of claim 6, wherein:
the method comprises the following steps of carrying out image stretching on a region part focused by a viewer of an input image by utilizing a first stretching algorithm to obtain a first stretched image, and carrying out image stretching on a region part not focused by the viewer of the input image by utilizing a second stretching algorithm to obtain a second stretched image, wherein the method comprises the following steps:
stretching the attention area part of the viewer of the input image after edge mending in an equal proportion by utilizing a first stretching algorithm; stretching the non-attention area part of the viewer of the input image after edge mending in an equal proportion by utilizing a second stretching algorithm; the equal ratio stretching means that the stretching ratio in the width direction is the same as the stretching ratio in the height direction.
8. The method of claim 1, wherein:
after the first stretched image and the second stretched image are stitched to obtain the output image, the method further comprises:
performing smoothing filtering on the output image.
9. An apparatus for implementing a high resolution display, comprising:
a region division and segmentation module, configured to divide an input image into a viewer attention region and a viewer non-attention region and to segment the input image by region according to the division result;
an image stretching module, configured to stretch the viewer attention region portion of the input image with a first stretching algorithm to obtain a first stretched image, and to stretch the viewer non-attention region portion with a second stretching algorithm to obtain a second stretched image, wherein the image stretching quality of the first stretching algorithm is higher than that of the second stretching algorithm;
and an image stitching module, configured to stitch the first stretched image and the second stretched image to obtain an output image.
10. The apparatus of claim 9, wherein:
the region dividing and dividing module is used for dividing a region concerned by a viewer and a region not concerned by the viewer on the input image in the following mode, and dividing the input image according to regions according to a region dividing result: determining a focus center of an input image according to a viewing focus position on a screen, expanding the focus center to two sides according to an expansion ratio corresponding to a viewing distance, and dividing an expanded image area into a viewer focus area; wherein the expanded image area does not exceed the boundary of the input image; dividing the remaining image on the input image without the attention area of the viewer into one or two non-attention areas of the viewer; wherein each viewer non-attention area is a connected area.
CN201911200521.0A 2019-11-29 2019-11-29 Method and device for realizing high-resolution display Pending CN110992250A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911200521.0A CN110992250A (en) 2019-11-29 2019-11-29 Method and device for realizing high-resolution display

Publications (1)

Publication Number Publication Date
CN110992250A (en) 2020-04-10

Family

ID=70088287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911200521.0A Pending CN110992250A (en) 2019-11-29 2019-11-29 Method and device for realizing high-resolution display

Country Status (1)

Country Link
CN (1) CN110992250A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102318352A (en) * 2009-02-17 2012-01-11 皇家飞利浦电子股份有限公司 Combination 3D rendering and graph data
CN105027144A (en) * 2013-02-27 2015-11-04 汤姆逊许可公司 Method and device for calibration-free gaze estimation
CN103974115A (en) * 2014-04-23 2014-08-06 京东方科技集团股份有限公司 High-resolution display method and system
CN106415445A (en) * 2014-06-06 2017-02-15 英特尔公司 Technologies for viewer attention area estimation
CN106531073A (en) * 2017-01-03 2017-03-22 京东方科技集团股份有限公司 Processing circuit of display screen, display method and display device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination