CN112598572B - Method and device for screening subblock images and processing units - Google Patents

Method and device for screening subblock images and processing units

Info

Publication number
CN112598572B
CN112598572B (application CN202010019369.2A)
Authority
CN
China
Prior art keywords
image
sub
block
new
processing unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010019369.2A
Other languages
Chinese (zh)
Other versions
CN112598572A (en)
Inventor
虞露
王彬
王楚楚
孙宇乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Publication of CN112598572A
Application granted
Publication of CN112598572B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images

Abstract

The invention discloses a method and a device for screening sub-block images and processing units, for use in the multimedia field. Sub-block images or processing units are decoded and extracted from the code stream of a multi-view sub-block stitched image and then screened: by extracting the related information of the sub-block images or processing units and the related information of the target image, it is judged whether each sub-block image or processing unit overlaps the target image. If so, the pixels in the sub-block image or processing unit are used in rendering to obtain part of the target image; otherwise it is not rendered. The method can effectively reduce the rendering computation time without reducing the quality of the rendered target image. The invention provides a method for screening sub-block images and processing units and a corresponding device.

Description

Method and device for screening subblock images and processing units
Technical Field
The invention belongs to the field of sub-block image processing, and particularly relates to a method and a device for screening sub-block images and processing units before target image rendering of a multi-viewpoint sub-block spliced image.
Background
"Immersion" is a subjective quality: it refers to the viewer's sense of being transported into the virtual scene created and displayed by a multimedia system. As the capabilities of capture devices and display devices improve year by year, the encoding, transmission and rendering of immersive media, a form of visual multimedia that can give the viewer a strong sense of immersion, have become a research hotspot in industry and the scientific community.
As immersive media support more degrees of freedom of viewing, the visual immersion they bring to the viewer is significantly enhanced. In three-dimensional space, the viewer's viewing freedom supports up to 6 degrees of freedom: translation along the X, Y and Z axes and rotation about each of the three axes. At present, a viewer can watch a scene while arbitrarily moving position and changing orientation within a limited space (limited translation freedom), thereby obtaining a sense of interaction and motion parallax and forming a stronger visual immersion.
To support viewing a scene in 6 degrees of freedom in a defined space, immersive media requires rendering of target content at any position, at any orientation, in the defined space. The multi-view image plus depth information is an effective immersive media expression mode, and consists of texture images of a plurality of views and depth images corresponding to the texture images. By using a viewpoint synthesis technology based on a depth image, the expression mode can be used for rendering to obtain an image of a target viewpoint according to the camera parameters of the target image and the position relation between the target viewpoint and the existing viewpoint. However, since there is generally a large information redundancy between multiple views, it is costly to encode and decode all multi-view source images.
The multi-view sub-block stitched image effectively solves the above problem. Before encoding, by analyzing the geometric and texture relationships among the viewpoints, redundant information in the other viewpoints is removed as much as possible using the main viewpoint images (images in the multi-viewpoint set that contain complete viewpoint information), so that the viewpoint images other than the main viewpoints retain only their specific effective information. For coding efficiency, the retained effective information of a sub-image is generally represented by a rectangular region, forming a number of rectangular sub-block images, and finally the sub-block images are stitched into a multi-view sub-block stitched image, as shown in fig. 1. After this operation, the amount of image data to be encoded and transmitted can be greatly reduced.
At the decoding end, all the sub-block images are extracted from the decoded multi-view sub-block stitched image using the decoded sub-block image information, which comprises at least: the width and height of each sub-block image, the position of the upper-left pixel of the sub-block image in the multi-view stitched image, and the position of the upper-left pixel of the sub-block image in the source view image.
The target image is synthesized by projecting the sub-block images. For each sub-block image, the target image is rendered using the relationship between the camera parameters of the single-viewpoint image to which the sub-block image belongs and the camera parameters of the target image. The camera intrinsic parameters include the focal length, the principal point coordinates and the coordinate-axis skew, and are contained in the intrinsic parameter matrix of the formula below. The positional relationship of corresponding pixels between any two viewpoints V1 and V2 is given by the following formula:
z_target · [u_target, v_target, 1]^T = A_target · ( R · A_ref^(-1) · z_ref · [u_ref, v_ref, 1]^T + t ),

where each camera intrinsic parameter matrix has the form

    | f_x  s    c_x |
A = |  0   f_y  c_y |
    |  0    0    1  |

with f_x, f_y the focal lengths, (c_x, c_y) the principal point coordinates and s the coordinate-axis skew;

wherein u_target, v_target is the coordinate position of the pixel point in the target viewpoint V1,
u_ref, v_ref is the coordinate position of the pixel point in the reference viewpoint V2,
A_target is the camera intrinsic parameter matrix of the target viewpoint V1,
A_ref is the camera intrinsic parameter matrix of the reference viewpoint V2,
R and t represent the rotation-translation relationship between the camera coordinates at the reference viewpoint V2 and the camera coordinates at the target viewpoint V1,
z_ref is the depth value corresponding to the pixel point in the reference viewpoint V2,
z_target is the depth value corresponding to the pixel point in the target viewpoint V1,
and finally, performing fusion processing on all the sub-block texture information projected to the target image to synthesize the target image. Compared with the transmission of a complete number of multi-viewpoint source images, the synthesis quality of the target image can be obviously improved under the same code rate.
In a practical image processing system, the processing time at the decoding end is a key parameter for judging whether the system is feasible. The target image synthesis based on sub-block images still has room for optimization. All the sub-block images in the multi-view sub-block stitched image jointly describe the whole scene; however, because a viewer's viewing angle range is limited, each frame of the viewed target image covers only part of the scene content, so some sub-block images contribute no information to the synthesized target image. As shown in fig. 2, the rendered content of sub-block image 2 has no area coverage with the target image, and the pixel-by-pixel projection of sub-block images that do not contribute to the target viewpoint adds an invalid computational burden.
Disclosure of Invention
In order to solve the above technical problem, the present invention provides a method and an apparatus for screening sub-block images and processing units. A pre-screening process is added before each sub-block image or processing unit is used for target image rendering. Under the premises that the viewing position of the target viewpoint is not far from the existing viewpoints and its deviation distance and deviation angle are small, that the viewing angle range spanned by the sub-block image or processing unit is small (generally less than 90 degrees in the horizontal and vertical directions), and that the target image is a non-panoramic image with a limited field of view, a certain number of representative points together with a depth range are used to describe the scene range contained in the sub-block image or processing unit. Whether the sub-block image or processing unit is used for image rendering is judged from where the representative points fall relative to the target image, and only the valid sub-block images or processing units are passed to rendering. In this way, the number of sub-block images or processing units used for image rendering is reduced, and the rendering computation time decreases while the synthesis quality of the target image is unchanged.
The first purpose of the invention is to provide a method for screening subblock images, which comprises the following steps:
for at least one sub-block image in the multi-viewpoint sub-block spliced image, acquiring width information width and height information height of the sub-block image, position information of the sub-block image in the multi-viewpoint sub-block spliced image, position information of the sub-block image in a source viewpoint image and camera parameters of the source viewpoint image to which the sub-block image belongs from a code stream, wherein the camera parameters comprise camera orientation, camera position coordinates and camera internal parameters;
acquiring the width information width_o, the height information height_o and the camera parameters of the target image;
obtaining two depth parameters of the sub-block image, z_near_new and z_far_new, whose values satisfy z_near_new ≤ z_far_new;
using the camera parameters of the source viewpoint image and the camera parameters of the target image, combining the four boundary vertices of the sub-block image with the depth information to obtain a set of spatial representative points (x_i, y_i, z_i), and projecting them to obtain N representative points in the target image, wherein N is the number of spatial representative points and i is an integer from 0 to N-1;
pre-judging, according to the position coordinates (xo_i, yo_i) of the N representative points obtained in the target image, whether the sub-block image and the target image have region overlap; if the regions overlap, rendering with the pixels in the sub-block image to obtain part of the target image, and otherwise not using the sub-block image for rendering.
Further, it is pre-determined that the sub-block image and the target image have no region overlap if any one of the following conditions is met:
(1) the N representative points projected to the target image are all positioned on the left side of the left boundary of the target image;
(2) the N representative points projected to the target image are all positioned on the right side of the right boundary of the target image;
(3) the N representative points projected to the target image are all positioned on the upper side of the upper boundary of the target image;
(4) the N representative points projected to the target image are all below the lower boundary of the target image.
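The four rejection conditions can be expressed as a short screening test. The following is a sketch under the assumption that the projected points are given as (xo_i, yo_i) pairs; the function and variable names are hypothetical:

```python
def may_overlap(points, width_o, height_o):
    """Screening test from the four conditions above: 'points' is the list
    of N projected representative points (xo_i, yo_i). Returns False only
    when all points fall strictly outside one boundary of the target image."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    if max(xs) < 0:            # (1) all left of the left boundary
        return False
    if min(xs) > width_o:      # (2) all right of the right boundary
        return False
    if max(ys) < 0:            # (3) all above the upper boundary
        return False
    if min(ys) > height_o:     # (4) all below the lower boundary
        return False
    return True                # overlap cannot be ruled out
```

Note that the test is conservative: returning True does not guarantee overlap, it only means the sub-block cannot be safely discarded.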
Further, N is 8.
Further, the method for determining the two depth parameters z_near_new and z_far_new of the sub-block image is one of the following:
(1) decoding the code stream to obtain the nearest depth value z_near and the farthest depth value z_far of the source view image, with z_near_new of the sub-block image equal to z_near and z_far_new of the sub-block image equal to z_far;
(2) the z _ near _ new of the sub-block image is equal to the nearest depth value of the sub-block image, and the z _ far _ new of the sub-block image is equal to the farthest depth value of the sub-block image.
Further, the method for determining the nearest depth value of the sub-block image and the farthest depth value of the sub-block image is one of the following methods:
(1) directly decoding from the code stream to obtain the nearest depth value and the farthest depth value of the subblock image;
(2) in decoding the reconstructed depth image, the depth value of the pixel which is closest to the source viewpoint to which the sub-block belongs in all the pixels of the sub-block image is the closest depth value of the sub-block; and the depth value of the pixel which is farthest away from the source viewpoint of the sub-block in all the pixels of the sub-block image is the farthest depth value of the sub-block.
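Method (2) above, scanning the decoded depth image over the sub-block's region in the source view, can be sketched as follows; the indexing convention, the names, and the sample depth values are assumptions for illustration:

```python
def subblock_depth_range(depth_image, xs, ys, width, height):
    """Method (2): scan the decoded depth image over the sub-block's region
    (upper-left corner (xs, ys), size width x height, in source-view
    coordinates) and take the nearest and farthest depth values."""
    values = [depth_image[y][x]
              for y in range(ys, ys + height)
              for x in range(xs, xs + width)]
    return min(values), max(values)

# Hypothetical 2x4 decoded depth image; the sub-block covers the right half.
depth = [[0.10, 0.12, 0.15, 0.18],
         [0.11, 0.13, 0.16, 0.19]]
z_near_new, z_far_new = subblock_depth_range(depth, 2, 0, 2, 2)
```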
It is a second object of the invention to provide a method of screening processing units comprising:
for at least one processing unit in the multi-viewpoint sub-block stitched image, calculating the width information width and height information height of the processing unit from the width W and height H of the information transmission unit corresponding to the processing unit in the code stream, as follows:
width=min{w0,W-Δw};
height=min{h0,H-Δh};
wherein {w0, h0} is the default width and height of the processing unit, and {Δw, Δh} is the position offset of the processing unit relative to the information transmission unit;
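The clipping of the processing unit size against its information transmission unit can be written directly from the two formulas above; the function name and the example numbers are hypothetical:

```python
def processing_unit_size(w0, h0, W, H, dw, dh):
    """width = min{w0, W - Δw}, height = min{h0, H - Δh}: the default
    processing-unit size {w0, h0} is clipped so the unit stays inside its
    information transmission unit of size W x H, given its offset (dw, dh)."""
    return min(w0, W - dw), min(h0, H - dh)

# Hypothetical values: a 64x64 default unit offset to (224, 96) inside a
# 256x128 information transmission unit is clipped at the right and bottom.
width, height = processing_unit_size(64, 64, 256, 128, 224, 96)
```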
acquiring the position information of the information transmission unit in the multi-viewpoint sub-block spliced image and the position information of the information transmission unit in the source viewpoint image from the code stream;
calculating the position information of the processing unit in the multi-viewpoint sub-block stitched image and the position information of the processing unit in the source viewpoint image from the position information of the information transmission unit in the multi-viewpoint sub-block stitched image, the position information of the information transmission unit in the source viewpoint image, and the offset {Δw, Δh} of the position of the processing unit relative to the position of the information transmission unit;
acquiring camera parameters of a source viewpoint image to which the processing unit belongs, wherein the camera parameters comprise camera orientation, camera position coordinates and camera internal parameters;
acquiring the width information width_o, the height information height_o and the camera parameters of the target image;
obtaining two depth parameters of the processing unit, z_near_new and z_far_new, wherein z_near_new ≤ z_far_new;
using the camera parameters of the source viewpoint image and the camera parameters of the target image, combining the four boundary vertices of the processing unit with the depth information to obtain a set of spatial representative points (x_i, y_i, z_i), and projecting them to obtain N representative points in the target image, wherein N is the number of spatial representative points and i is an integer from 0 to N-1;
pre-judging, according to the position coordinates (xo_i, yo_i) of the N representative points obtained in the target image, whether the processing unit and the target image have region overlap;
and if the areas are overlapped, rendering by using pixels in the processing unit to obtain a part of target image, otherwise, rendering without using the processing unit.
Further, it is pre-determined that the processing unit and the target image have no region overlap if any one of the following conditions is met:
(1) the N representative points projected to the target image are all positioned on the left side of the left boundary of the target image;
(2) the N representative points projected to the target image are all positioned on the right side of the right boundary of the target image;
(3) the N representative points projected to the target image are all positioned on the upper side of the upper boundary of the target image;
(4) the N representative points projected to the target image are all below the lower boundary of the target image.
Further, N is 8.
Further, the method for determining the two depth parameters z_near_new and z_far_new of the sub-block image is one of the following:
(1) decoding the code stream to obtain the nearest depth value z_near and the farthest depth value z_far of the source view image, with z_near_new of the sub-block image equal to z_near and z_far_new of the sub-block image equal to z_far;
(2) the z _ near _ new of the sub-block image is equal to the nearest depth value of the sub-block image, and the z _ far _ new of the sub-block image is equal to the farthest depth value of the sub-block image.
Further, the method for determining the nearest depth value of the sub-block image and the farthest depth value of the sub-block image is one of the following methods:
(1) directly decoding from the code stream to obtain the nearest depth value and the farthest depth value of the subblock image;
(2) in decoding the reconstructed depth image, the depth value of the pixel which is closest to the source viewpoint to which the sub-block belongs in all the pixels of the sub-block image is the closest depth value of the sub-block; and the depth value of the pixel which is farthest away from the source viewpoint of the sub-block in all the pixels of the sub-block image is the farthest depth value of the sub-block.
A third object of the present invention is to provide an apparatus for filtering subblock images, comprising:
the subblock image information extraction module inputs a multi-view subblock spliced image code stream and outputs at least one subblock image information, and the information comprises: the width information width and the height information height of the sub-block image, the position information of the sub-block image in the multi-viewpoint sub-block spliced image, the position information of the sub-block image in the source viewpoint image and the camera parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters comprise the camera orientation, the camera position coordinates and the camera internal parameters;
the target image related information acquisition module is used for acquiring width information width _ o, height information height _ o and camera parameters of the target image;
the sub-block image depth parameter acquiring module is used for acquiring the two depth parameters z_near_new and z_far_new, wherein z_near_new ≤ z_far_new;
a sub-block image judging module, configured to use the camera parameters of the source viewpoint image and the camera parameters of the target image to combine the four boundary vertices of the sub-block image with the depth information to obtain N spatial representative points (x_i, y_i, z_i), and to project them to obtain N representative points in the target image, wherein N is the number of spatial representative points and i is an integer from 0 to N-1; to pre-judge, according to the position coordinates (xo_i, yo_i) of the N representative points in the target image, whether the sub-block image and the target image have region overlap; and, if the regions overlap, to render with the pixels in the sub-block image to obtain part of the target image, and otherwise not to use the sub-block image for rendering.
It is a fourth object of the present invention to provide an apparatus for screening processing units, comprising:
an information transmission unit information extraction module, whose input is the code stream of the multi-viewpoint sub-block stitched image and whose outputs are the multi-viewpoint sub-block stitched image, the width W and height H of the information transmission unit, the position information of the information transmission unit in the multi-viewpoint sub-block stitched image, and the position information of the information transmission unit in the source viewpoint image;
a processing unit information extraction module, whose inputs are the default width and height {w0, h0} of the processing unit, the position offset {Δw, Δh} of the processing unit relative to the information transmission unit, and the information transmission unit information, and whose output is the processing unit information, including: the width and height of the processing unit, the position information of the processing unit in the multi-view sub-block stitched image, the position information of the processing unit in the source view image, and the camera parameters of the source view image to which the processing unit belongs, wherein the camera parameters comprise camera orientation, camera position coordinates and camera intrinsic parameters; the processing unit information extraction module calculates the processing unit information from the information transmission unit information, {w0, h0} and {Δw, Δh} as follows:
width=min{w0,W-Δw};
height=min{h0,H-Δh};
the target image related information acquisition module is used for acquiring width information width _ o, height information height _ o and camera parameters of the target image;
the processing unit depth parameter acquisition module is used for acquiring the two depth parameters z_near_new and z_far_new, wherein z_near_new ≤ z_far_new;
a processing unit judging module, configured to use the camera parameters of the source viewpoint image and the camera parameters of the target image to combine the four boundary vertices of the processing unit with the depth information to obtain N spatial representative points (x_i, y_i, z_i), and to project them to obtain N representative points in the target image, wherein N is the number of spatial representative points and i is an integer from 0 to N-1; to pre-judge, according to the position coordinates (xo_i, yo_i) of the N representative points in the target image, whether the processing unit and the target image have region overlap; and, if the regions overlap, to render with the pixels in the processing unit to obtain part of the target image, and otherwise not to use the processing unit for rendering.
Due to the adoption of the technical scheme, the invention has the following advantages:
in the target panoramic image, the effective view angle area only occupies a small part of the target panoramic image. The subblock images in an invalid view angle area are removed through judging a small number of representative points of the subblock images, and only the effective subblock images are pre-rendered, so that the number of the subblock images for image rendering can be reduced, and the rendering calculation time can be reduced while the synthesis quality of the target image is not changed. Particularly, the invention selects four boundary vertexes of the subblock image as representative points of the subblock image, combines two depth parameters of the depth parameters, and can ensure that eight spatial representative points are larger than or equal to the spatial range which can be expressed by the original subblock image, so that the rapid pre-screening can ensure that the number of the subblock images which are not in an effective view angle domain is reduced, and the synthesis quality of a target image is not changed.
Drawings
Other features and advantages of the present invention will become apparent from the following description of the preferred embodiment, taken in conjunction with the accompanying drawings, which illustrate, by way of example, the principles of the invention.
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a schematic diagram of generation of a multi-view sub-block stitched image.
Fig. 2 is a schematic diagram of coverage of a sub-block image in a multi-view sub-block stitched image with a target image area after the sub-block image is mapped to a three-dimensional space.
FIG. 3 is a schematic diagram of the eight spatial representative points obtained by combining the four boundary vertices of the sub-block image with z_near_new and z_far_new.
FIG. 4 is a flow chart of an embodiment of the apparatus of the present invention.
Fig. 5 is a schematic diagram of the relationship among the multi-view sub-block stitched image, the information transmission unit, and the processing unit.
Fig. 6 is a schematic diagram of default division into a plurality of sub-images when the multi-view sub-block stitched image is a main view.
Detailed Description
For a further understanding of the invention, reference will now be made to the following examples describing preferred embodiments of the invention, but it is to be understood that the description is intended to illustrate further features and advantages of the invention and is not intended to limit the scope of the claims.
Example 1
A method for screening subblock images specifically comprises the following steps:
for at least one sub-block image in the multi-viewpoint sub-block stitched image, acquiring the information of the sub-block image from the code stream, including: the width (e.g. 128) and height (e.g. 64) of the sub-block image, the position (xp, yp) of the upper-left pixel of the sub-block image in the multi-view sub-block stitched image (e.g. (256, 64)), the position (xs, ys) of the upper-left pixel of the sub-block image in the source view image (e.g. (0, 0)), and the camera parameters and depth parameters of the source view image to which the sub-block image belongs, wherein the camera parameters include camera orientation, camera position coordinates and camera intrinsic parameters, and the depth parameters include the nearest depth z_near of the image (e.g. 0.1 meter) and the farthest depth z_far (e.g. 0.2 meter);
acquiring a multi-viewpoint sub-block spliced image from the code stream, and extracting a sub-block image with width and height from the (xp, yp) position of the multi-viewpoint sub-block spliced image according to the information of the sub-block image, wherein the sub-block image corresponds to the sub-block image with width and height from the (xs, ys) position in the source viewpoint image;
for a target image, acquiring its related information, including: the width information width_o (e.g. 2048), the height information height_o (e.g. 2048) and the camera parameters of the target image; and, for the sub-block image, determining the depth coverage range of its four boundary vertices, i.e. the representative depth parameters of the sub-block image: z_near_new (e.g. 0.12 meter) and z_far_new (e.g. 0.19 meter), which satisfy z_near ≤ z_near_new ≤ z_far_new ≤ z_far with respect to the source view image depth parameters.
As shown in fig. 3, each representative point of the sub-block image, combined with the nearest depth value z_near_new and the farthest depth value z_far_new of the depth coverage range, yields two spatial representative points. The four boundary vertices (a_j, b_j), with j an integer from 0 to 3, are as follows:
(a_0, b_0) = (xs, ys) = (0, 0),
(a_1, b_1) = (xs + width, ys) = (128, 0),
(a_2, b_2) = (xs, ys + height) = (0, 64),
(a_3, b_3) = (xs + width, ys + height) = (128, 64),
corresponding to the eight spatial representative points (x_i, y_i, z_i), with i an integer from 0 to 7, as follows:
(x_0, y_0, z_0) = (a_0, b_0, z_near_new) = (0, 0, 0.12),
(x_1, y_1, z_1) = (a_1, b_1, z_near_new) = (128, 0, 0.12),
(x_2, y_2, z_2) = (a_2, b_2, z_near_new) = (0, 64, 0.12),
(x_3, y_3, z_3) = (a_3, b_3, z_near_new) = (128, 64, 0.12),
(x_4, y_4, z_4) = (a_0, b_0, z_far_new) = (0, 0, 0.19),
(x_5, y_5, z_5) = (a_1, b_1, z_far_new) = (128, 0, 0.19),
(x_6, y_6, z_6) = (a_2, b_2, z_far_new) = (0, 64, 0.19),
(x_7, y_7, z_7) = (a_3, b_3, z_far_new) = (128, 64, 0.19),
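The construction of the eight spatial representative points from the four boundary vertices and the two depth parameters, using the example values of this embodiment, can be sketched as follows (the function name is hypothetical):

```python
def spatial_representative_points(xs, ys, width, height, z_near_new, z_far_new):
    """Combine the four boundary vertices (a_j, b_j) of the sub-block with
    the two depth parameters to obtain the eight points (x_i, y_i, z_i):
    i = 0..3 use z_near_new, i = 4..7 use z_far_new."""
    vertices = [(xs, ys), (xs + width, ys),
                (xs, ys + height), (xs + width, ys + height)]
    return [(a, b, z) for z in (z_near_new, z_far_new) for a, b in vertices]

# Example values from this embodiment: (xs, ys) = (0, 0), a 128x64 sub-block,
# z_near_new = 0.12 and z_far_new = 0.19.
points = spatial_representative_points(0, 0, 128, 64, 0.12, 0.19)
```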
projecting the eight spatial representative points onto the target image plane using the camera parameter relationship between the source viewpoint image and the target image, the projected coordinates being (xo_i, yo_i), with i an integer from 0 to 7, where the origin (0, 0) of the target image position coordinates is the upper-left corner of the image; and pre-judging whether the sub-block image is used for generating the target image, the judgment conditions being as follows:
let xo_min and xo_max be respectively the minimum and maximum of the abscissas, and yo_min and yo_max respectively the minimum and maximum of the ordinates, of the eight projected coordinates (xo_i, yo_i); if any of the following conditions is satisfied:
(1) xo_max < 0;
(2) xo_min > width_o;
(3) yo_max < 0;
(4) yo_min > height_o,
for example xo_max = -2, or xo_min = 2050, or yo_max = -30, or yo_min = 3000, then it is determined that the sub-block image has no area overlap with the target image and the sub-block image is not used for the subsequent rendering of the target image; otherwise the sub-block image is used for the subsequent rendering of the target image.
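The decision rule of this embodiment can be traced end to end with invented projected coordinates in which all eight points fall above the target image's upper boundary; the point values are hypothetical, while width_o and height_o follow the 2048 x 2048 example:

```python
# Hypothetical projected coordinates (xo_i, yo_i): every y value is negative,
# so all eight points lie above the upper boundary of the target image.
projected = [(300, -80), (450, -60), (310, -75), (460, -55),
             (280, -90), (470, -40), (290, -85), (480, -30)]
width_o, height_o = 2048, 2048

xo_min = min(x for x, _ in projected)
xo_max = max(x for x, _ in projected)
yo_min = min(y for _, y in projected)
yo_max = max(y for _, y in projected)

# Any single satisfied condition rules out overlap; here yo_max < 0 holds.
no_overlap = (xo_max < 0 or xo_min > width_o or
              yo_max < 0 or yo_min > height_o)
use_for_rendering = not no_overlap  # this sub-block is skipped
```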
Example 2
A method for screening subblock images specifically comprises the following steps:
for at least one sub-block image in the multi-view sub-block stitched image, acquiring information of the sub-block image from the code stream, including: width information width, for example 128; height information height, for example 64; position information (xp, yp) of the upper left pixel of the sub-block image in the multi-view sub-block stitched image, for example (256, 64); position information (xs, ys) of the upper left pixel of the sub-block image in the source viewpoint image, for example (0, 0); and camera parameters and depth parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters include camera orientation, camera position coordinates, and camera intrinsic parameters, and the depth parameters include a nearest depth z_near of the image, for example 0.1 meter, and a farthest depth z_far, for example 0.2 meter;
acquiring a multi-viewpoint sub-block spliced image from the code stream, and extracting a sub-block image with width and height from the (xp, yp) position of the multi-viewpoint sub-block spliced image according to the information of the sub-block image, wherein the sub-block image corresponds to the sub-block image with width and height from the (xs, ys) position in the source viewpoint image;
for a target image, acquiring its relevant information, including: width information width_o, for example 2048; height information height_o, for example 2048; and the camera parameters of the target image; and for the sub-block image, determining the depth coverage of its four boundary vertices, including the representative depth parameters of the sub-block image, z_near_new and z_far_new, where z_near_new is equal to z_near, for example 0.1 meter, and z_far_new is equal to z_far, for example 0.2 meter,
each boundary vertex of the sub-block image, combined with the nearest depth value z_near_new and the farthest depth value z_far_new of the depth coverage, yields two spatial representative points; the four boundary vertices (aj, bj), where j is an integer from 0 to 3, are as follows:
(a0,b0)=(xs,ys)=(0,0),
(a1,b1)=(xs,ys+height)=(0,64),
(a2,b2)=(xs+width,ys)=(128,0),
(a3,b3)=(xs+width,ys+height)=(128,64).
corresponding to eight spatial representative points (xi, yi, zi), where i is an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_near_new)=(0,0,0.1),
(x1,y1,z1)=(a3,b3,z_near_new)=(128,64,0.1),
(x2,y2,z2)=(a2,b2,z_near_new)=(128,0,0.1),
(x3,y3,z3)=(a1,b1,z_near_new)=(0,64,0.1),
(x4,y4,z4)=(a0,b0,z_far_new)=(0,0,0.2),
(x5,y5,z5)=(a1,b1,z_far_new)=(0,64,0.2),
(x6,y6,z6)=(a2,b2,z_far_new)=(128,0,0.2),
(x7,y7,z7)=(a3,b3,z_far_new)=(128,64,0.2),
projecting the eight spatial representative points onto the target image plane using the camera parameter relationship between the source viewpoint image and the target image, the projected coordinates being (xoi, yoi), where i is an integer from 0 to 7 and the origin (0, 0) of the target image position coordinates is the upper left corner of the image; whether the sub-block image is used to generate the target image is pre-judged, with the judgment conditions as follows:
the eight spatial representative points corresponding to the sub-block image are projected to coordinate positions (xoi, yoi) on the plane of the target viewpoint image; xomin and xomax are respectively the minimum and maximum of the eight abscissas xoi, and yomin and yomax are respectively the minimum and maximum of the eight ordinates yoi; if any of the following conditions is satisfied:
(1)xomax<0
(2)xomin>width_o
(3)yomax<0
(4)yomin>height_o,
for example, if xomax is -2, or xomin is 2050, or yomax is -30, or yomin is 3000, the sub-block image is determined to have no area overlap with the target image, and the sub-block image is not used for subsequent rendering of the target image; otherwise, the sub-block image is used for subsequent rendering of the target image.
Example 3
A method for screening subblock images specifically comprises the following steps:
for at least one sub-block image in the multi-view sub-block stitched image, acquiring information of the sub-block image from the code stream, including: width information width, for example 128; height information height, for example 64; position information (xp, yp) of the upper left pixel of the sub-block image in the multi-view sub-block stitched image, for example (256, 64); position information (xs, ys) of the upper left pixel of the sub-block image in the source viewpoint image, for example (0, 0); and camera parameters and depth parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters include camera orientation, camera position coordinates, and camera intrinsic parameters, and the depth parameters include a nearest depth z_near of the image, for example 0.1 meter, and a farthest depth z_far, for example 0.2 meter;
acquiring a multi-viewpoint sub-block spliced image from the code stream, and extracting a sub-block image with width and height from the (xp, yp) position of the multi-viewpoint sub-block spliced image according to the information of the sub-block image, wherein the sub-block image corresponds to the sub-block image with width and height from the (xs, ys) position in the source viewpoint image;
for a target image, acquiring its relevant information, including: width information width_o, for example 2048; height information height_o, for example 2048; and the camera parameters of the target image; and for the sub-block image, determining the depth coverage of its four boundary vertices, including the representative depth parameters of the sub-block image, z_near_new and z_far_new, where z_near_new is equal to the minimum depth value of the depth range within the sub-block image, for example 0.13 meter, and z_far_new is equal to the maximum depth value of the depth range within the sub-block image, for example 0.18 meter,
each boundary vertex of the sub-block image, combined with the nearest depth value z_near_new and the farthest depth value z_far_new of the depth coverage, yields two spatial representative points; the four boundary vertices (aj, bj), where j is an integer from 0 to 3, are as follows:
(a0,b0)=(xs,ys)=(0,0),
(a1,b1)=(xs+width,ys)=(128,0),
(a2,b2)=(xs,ys+height)=(0,64),
(a3,b3)=(xs+width,ys+height)=(128,64).
corresponding to eight spatial representative points (xi, yi, zi), where i is an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_far_new)=(0,0,0.18),
(x1,y1,z1)=(a1,b1,z_far_new)=(128,0,0.18),
(x2,y2,z2)=(a2,b2,z_far_new)=(0,64,0.18),
(x3,y3,z3)=(a3,b3,z_far_new)=(128,64,0.18),
(x4,y4,z4)=(a0,b0,z_near_new)=(0,0,0.13),
(x5,y5,z5)=(a1,b1,z_near_new)=(128,0,0.13),
(x6,y6,z6)=(a2,b2,z_near_new)=(0,64,0.13),
(x7,y7,z7)=(a3,b3,z_near_new)=(128,64,0.13),
projecting the eight spatial representative points onto the target image plane using the camera parameter relationship between the source viewpoint image and the target image, the projected coordinates being (xoi, yoi), where i is an integer from 0 to 7 and the origin (0, 0) of the target image position coordinates is the upper left corner of the image; whether the sub-block image is used to generate the target image is pre-judged, with the judgment conditions as follows:
the eight spatial representative points corresponding to the sub-block image are projected to coordinate positions (xoi, yoi) on the plane of the target viewpoint image; xomin and xomax are respectively the minimum and maximum of the eight abscissas xoi, and yomin and yomax are respectively the minimum and maximum of the eight ordinates yoi; if any of the following conditions is satisfied:
(1)xomax<0
(2)xomin>width_o
(3)yomax<0
(4)yomin>height_o,
for example, if xomax is -2, or xomin is 2050, or yomax is -30, or yomin is 3000, the sub-block image is determined to have no area overlap with the target image, and the sub-block image is not used for subsequent rendering of the target image; otherwise, the sub-block image is used for subsequent rendering of the target image.
Example 4
A method for screening subblock images specifically comprises the following steps:
for at least one sub-block image in the multi-view sub-block stitched image, acquiring information of the sub-block image from the code stream, including: width information width, for example 128; height information height, for example 64; position information (xp, yp) of the upper left pixel of the sub-block image in the multi-view sub-block stitched image, for example (256, 64); position information (xs, ys) of the upper left pixel of the sub-block image in the source viewpoint image, for example (0, 0); and camera parameters and depth parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters include camera orientation, camera position coordinates, and camera intrinsic parameters, and the depth parameters include a nearest depth z_near of the image, for example 0.1 meter, and a farthest depth z_far, for example 0.2 meter;
acquiring a multi-viewpoint sub-block spliced image from the code stream, and extracting a sub-block image with width and height from the (xp, yp) position of the multi-viewpoint sub-block spliced image according to the information of the sub-block image, wherein the sub-block image corresponds to the sub-block image with width and height from the (xs, ys) position in the source viewpoint image;
for a target image, acquiring its relevant information, including: width information width_o, for example 2048; height information height_o, for example 2048; and the camera parameters of the target image; and for the sub-block image, determining the depth coverage of its four boundary vertices, including the representative depth parameters of the sub-block image, z_near_new and z_far_new, where z_near_new is equal to the minimum depth value of the depth range within the sub-block image, for example 0.13 meter, and z_far_new is equal to the maximum depth value of the depth range within the sub-block image, for example 0.18 meter; the nearest depth and the farthest depth of the sub-block image can be obtained by decoding directly from the code stream;
each boundary vertex of the sub-block image, combined with the nearest depth value z_near_new and the farthest depth value z_far_new of the depth coverage, yields two spatial representative points; the four boundary vertices (aj, bj), where j is an integer from 0 to 3, are as follows:
(a0,b0)=(xs,ys)=(0,0),
(a1,b1)=(xs+width,ys)=(128,0),
(a2,b2)=(xs,ys+height)=(0,64),
(a3,b3)=(xs+width,ys+height)=(128,64).
corresponding to eight spatial representative points (xi, yi, zi), where i is an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_near_new)=(0,0,0.13),
(x1,y1,z1)=(a1,b1,z_near_new)=(128,0,0.13),
(x2,y2,z2)=(a2,b2,z_near_new)=(0,64,0.13),
(x3,y3,z3)=(a3,b3,z_near_new)=(128,64,0.13),
(x4,y4,z4)=(a0,b0,z_far_new)=(0,0,0.18),
(x5,y5,z5)=(a1,b1,z_far_new)=(128,0,0.18),
(x6,y6,z6)=(a2,b2,z_far_new)=(0,64,0.18),
(x7,y7,z7)=(a3,b3,z_far_new)=(128,64,0.18),
projecting the eight spatial representative points onto the target image plane using the camera parameter relationship between the source viewpoint image and the target image, the projected coordinates being (xoi, yoi), where i is an integer from 0 to 7 and the origin (0, 0) of the target image position coordinates is the upper left corner of the image; whether the sub-block image is used to generate the target image is pre-judged, with the judgment conditions as follows:
the eight spatial representative points corresponding to the sub-block image are projected to coordinate positions (xoi, yoi) on the plane of the target viewpoint image; xomin and xomax are respectively the minimum and maximum of the eight abscissas xoi, and yomin and yomax are respectively the minimum and maximum of the eight ordinates yoi; if any of the following conditions is satisfied:
(1)xomax<0
(2)xomin>width_o
(3)yomax<0
(4)yomin>height_o,
for example, if xomax is -2, or xomin is 2050, or yomax is -30, or yomin is 3000, the sub-block image is determined to have no area overlap with the target image, and the sub-block image is not used for subsequent rendering of the target image; otherwise, the sub-block image is used for subsequent rendering of the target image.
Example 5
As shown in fig. 4, an apparatus for screening subblock images specifically includes:
the sub-block image information extraction module inputs a multi-view sub-block stitched image code stream and outputs information of at least one sub-block image, the information including: width information width, for example 128; height information height, for example 64; position information (xp, yp) of the upper left pixel of the sub-block image in the multi-view sub-block stitched image, for example (256, 64); position information (xs, ys) of the upper left pixel of the sub-block image in the source viewpoint image, for example (0, 0); and camera parameters and depth parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters include camera orientation, camera position coordinates, and camera intrinsic parameters, and the depth parameters include a nearest depth z_near of the image, for example 0.1 meter, and a farthest depth z_far, for example 0.2 meter;
the subblock image extracting module is used for inputting a multi-viewpoint subblock spliced image code stream and subblock image information, outputting at least one subblock image, and extracting a subblock image with width and height from the (xp, yp) position of the multi-viewpoint subblock spliced image, wherein the subblock image corresponds to a subblock image with width and height from the (xs, ys) position in the source viewpoint image;
the target image related information acquisition module is used for acquiring width information width _ o, height information height _ o and camera parameters of the target image;
the sub-block image depth parameter acquisition module acquires two depth parameters z_near_new and z_far_new, where z_near_new is less than or equal to z_far_new;
and the sub-block image judgment module inputs the sub-block image information, the sub-block image, and the target image information, with the code stream information being optional, and outputs the partial target image obtained by rendering the pixels in the sub-block image. For the target image, its relevant information is acquired, including: width information width_o, for example 2048; height information height_o, for example 2048; and the camera parameters of the target image. For the sub-block image, the depth coverage of its four boundary vertices is determined, including the representative depth parameters of the sub-block image, z_near_new, for example 0.12 meter, and z_far_new, for example 0.19 meter, where the relationship between the source viewpoint image depth parameters z_near and z_far and the nearest depth value z_near_new and farthest depth value z_far_new of the sub-block image depth coverage is z_near <= z_near_new <= z_far_new <= z_far,
each boundary vertex of the sub-block image, combined with the nearest depth value z_near_new and the farthest depth value z_far_new of the depth coverage, yields two spatial representative points; the four boundary vertices (aj, bj), where j is an integer from 0 to 3, are as follows:
(a0,b0)=(xs,ys)=(0,0),
(a1,b1)=(xs+width,ys)=(128,0),
(a2,b2)=(xs,ys+height)=(0,64),
(a3,b3)=(xs+width,ys+height)=(128,64).
corresponding to eight spatial representative points (xi, yi, zi), where i is an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_near_new)=(0,0,0.12),
(x1,y1,z1)=(a1,b1,z_near_new)=(128,0,0.12),
(x2,y2,z2)=(a2,b2,z_near_new)=(0,64,0.12),
(x3,y3,z3)=(a3,b3,z_near_new)=(128,64,0.12),
(x4,y4,z4)=(a0,b0,z_far_new)=(0,0,0.19),
(x5,y5,z5)=(a1,b1,z_far_new)=(128,0,0.19),
(x6,y6,z6)=(a2,b2,z_far_new)=(0,64,0.19),
(x7,y7,z7)=(a3,b3,z_far_new)=(128,64,0.19),
projecting the eight spatial representative points onto the target image plane using the camera parameter relationship between the source viewpoint image and the target image, the projected coordinates being (xoi, yoi), where i is an integer from 0 to 7 and the origin (0, 0) of the target image position coordinates is the upper left corner of the image; whether the sub-block image is used to generate the target image is pre-judged, with the judgment conditions as follows:
the eight spatial representative points corresponding to the sub-block image are projected to coordinate positions (xoi, yoi) on the plane of the target viewpoint image; xomin and xomax are respectively the minimum and maximum of the eight abscissas xoi, and yomin and yomax are respectively the minimum and maximum of the eight ordinates yoi; if any of the following conditions is satisfied:
(1)xomax<0
(2)xomin>width_o
(3)yomax<0
(4)yomin>height_o,
for example, if xomax is -2, or xomin is 2050, or yomax is -30, or yomin is 3000, the sub-block image is determined to have no area overlap with the target image, and the sub-block image is not used for subsequent rendering of the target image; otherwise, the sub-block image is used for subsequent rendering of the target image.
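The depth-parameter relationship z_near <= z_near_new <= z_far_new <= z_far used by the judgment module can be enforced when the representative depth parameters are derived. A minimal sketch, assuming a clamp-and-order behavior (the function is illustrative, not part of the described device):

```python
def clamp_representative_depths(z_near, z_far, z_near_new, z_far_new):
    """Clamp the sub-block representative depths into [z_near, z_far] and
    order them so that z_near <= z_near_new <= z_far_new <= z_far holds."""
    z_near_new = min(max(z_near_new, z_near), z_far)
    z_far_new = min(max(z_far_new, z_near), z_far)
    if z_near_new > z_far_new:
        # Restore ordering if the inputs were swapped.
        z_near_new, z_far_new = z_far_new, z_near_new
    return z_near_new, z_far_new

# With the example values of this embodiment (0.12 m and 0.19 m inside [0.1, 0.2]):
print(clamp_representative_depths(0.1, 0.2, 0.12, 0.19))  # (0.12, 0.19)
```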
Example 6
The device for screening the subblock images specifically comprises the following steps:
the sub-block image information extraction module inputs a multi-view sub-block stitched image code stream and outputs information of at least one sub-block image, the information including: width information width, for example 128; height information height, for example 64; position information (xp, yp) of the upper left pixel of the sub-block image in the multi-view sub-block stitched image, for example (256, 64); position information (xs, ys) of the upper left pixel of the sub-block image in the source viewpoint image, for example (0, 0); and camera parameters and depth parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters include camera orientation, camera position coordinates, and camera intrinsic parameters, and the depth parameters include a nearest depth z_near of the image, for example 0.1 meter, and a farthest depth z_far, for example 0.2 meter;
the subblock image extracting module is used for inputting a multi-viewpoint subblock spliced image code stream and subblock image information, outputting at least one subblock image, and extracting subblocks with width and height from (xp, yp) positions of the multi-viewpoint subblock spliced image, wherein the subblocks correspond to subblock images with width and height from (xs, ys) positions in a source viewpoint image;
and the sub-block image judgment module inputs the sub-block image information, the sub-block image, and the target image information, with the code stream information being optional, and outputs the partial target image obtained by rendering the pixels in the sub-block image. For the target image, its relevant information is acquired, including: width information width_o, for example 2048; height information height_o, for example 2048; and the camera parameters of the target image. For the sub-block image, the depth coverage of its four boundary vertices is determined, including the representative depth parameters of the sub-block image, z_near_new and z_far_new, where z_near_new is equal to z_near, for example 0.1 meter, and z_far_new is equal to z_far, for example 0.2 meter,
each boundary vertex of the sub-block image, combined with the nearest depth value z_near_new and the farthest depth value z_far_new of the depth coverage, yields two spatial representative points; the four boundary vertices (aj, bj), where j is an integer from 0 to 3, are as follows:
(a0,b0)=(xs,ys)=(0,0),
(a1,b1)=(xs,ys+height)=(0,64),
(a2,b2)=(xs+width,ys)=(128,0),
(a3,b3)=(xs+width,ys+height)=(128,64).
corresponding to eight spatial representative points (xi, yi, zi), where i is an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_near_new)=(0,0,0.1),
(x1,y1,z1)=(a3,b3,z_near_new)=(128,64,0.1),
(x2,y2,z2)=(a2,b2,z_near_new)=(128,0,0.1),
(x3,y3,z3)=(a1,b1,z_near_new)=(0,64,0.1),
(x4,y4,z4)=(a0,b0,z_far_new)=(0,0,0.2),
(x5,y5,z5)=(a1,b1,z_far_new)=(0,64,0.2),
(x6,y6,z6)=(a2,b2,z_far_new)=(128,0,0.2),
(x7,y7,z7)=(a3,b3,z_far_new)=(128,64,0.2),
the eight spatial representative points corresponding to the sub-block image are projected to coordinate positions (xoi, yoi) on the plane of the target viewpoint image; xomin and xomax are respectively the minimum and maximum of the eight abscissas xoi, and yomin and yomax are respectively the minimum and maximum of the eight ordinates yoi; if any of the following conditions is satisfied:
(1)xomax<0
(2)xomin>width_o
(3)yomax<0
(4)yomin>height_o,
for example, if xomax is -2, or xomin is 2050, or yomax is -30, or yomin is 3000, the sub-block image is determined to have no area overlap with the target image, and the sub-block image is not used for subsequent rendering of the target image; otherwise, the sub-block image is used for subsequent rendering of the target image.
Example 7
The device for screening the subblock images specifically comprises the following steps:
the sub-block image information extraction module inputs a multi-view sub-block stitched image code stream and outputs information of at least one sub-block image, the information including: width information width, for example 128; height information height, for example 64; position information (xp, yp) of the upper left pixel of the sub-block image in the multi-view sub-block stitched image, for example (256, 64); position information (xs, ys) of the upper left pixel of the sub-block image in the source viewpoint image, for example (0, 0); and camera parameters and depth parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters include camera orientation, camera position coordinates, and camera intrinsic parameters, and the depth parameters include a nearest depth z_near of the image, for example 0.1 meter, and a farthest depth z_far, for example 0.2 meter;
the subblock image extracting module is used for inputting a multi-viewpoint subblock spliced image code stream and subblock image information, outputting at least one subblock image, and extracting a subblock image with width and height from the (xp, yp) position of the multi-viewpoint subblock spliced image, wherein the subblock image corresponds to a subblock image with width and height from the (xs, ys) position in the source viewpoint image;
and the sub-block image judgment module inputs the sub-block image information, the sub-block image, and the target image information, with the code stream information being optional, and outputs the partial target image obtained by rendering the pixels in the sub-block image. For the target image, its relevant information is acquired, including: width information width_o, for example 2048; height information height_o, for example 2048; and the camera parameters of the target image. For the sub-block image, the depth coverage of its four boundary vertices is determined, including the representative depth parameters of the sub-block image, z_near_new and z_far_new, where z_near_new is equal to the minimum depth value of the depth range within the sub-block image, for example 0.13 meter, and z_far_new is equal to the maximum depth value of the depth range within the sub-block image, for example 0.18 meter,
each boundary vertex of the sub-block image, combined with the nearest depth value z_near_new and the farthest depth value z_far_new of the depth coverage, yields two spatial representative points; the four boundary vertices (aj, bj), where j is an integer from 0 to 3, are as follows:
(a0,b0)=(xs,ys)=(0,0),
(a1,b1)=(xs+width,ys)=(128,0),
(a2,b2)=(xs,ys+height)=(0,64),
(a3,b3)=(xs+width,ys+height)=(128,64).
corresponding to eight spatial representative points (xi, yi, zi), where i is an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_far_new)=(0,0,0.18),
(x1,y1,z1)=(a1,b1,z_far_new)=(128,0,0.18),
(x2,y2,z2)=(a2,b2,z_far_new)=(0,64,0.18),
(x3,y3,z3)=(a3,b3,z_far_new)=(128,64,0.18),
(x4,y4,z4)=(a0,b0,z_near_new)=(0,0,0.13),
(x5,y5,z5)=(a1,b1,z_near_new)=(128,0,0.13),
(x6,y6,z6)=(a2,b2,z_near_new)=(0,64,0.13),
(x7,y7,z7)=(a3,b3,z_near_new)=(128,64,0.13),
projecting the eight spatial representative points onto the target image plane using the camera parameter relationship between the source viewpoint image and the target image, the projected coordinates being (xoi, yoi), where i is an integer from 0 to 7 and the origin (0, 0) of the target image position coordinates is the upper left corner of the image; whether the sub-block image is used to generate the target image is pre-judged, with the judgment conditions as follows:
the eight spatial representative points corresponding to the sub-block image are projected to coordinate positions (xoi, yoi) on the plane of the target viewpoint image; xomin and xomax are respectively the minimum and maximum of the eight abscissas xoi, and yomin and yomax are respectively the minimum and maximum of the eight ordinates yoi; if any of the following conditions is satisfied:
(1)xomax<0
(2)xomin>width_o
(3)yomax<0
(4)yomin>height_o,
for example, if xomax is -2, or xomin is 2050, or yomax is -30, or yomin is 3000, the sub-block image is determined to have no area overlap with the target image, and the sub-block image is not used for subsequent rendering of the target image; otherwise, the sub-block image is used for subsequent rendering of the target image.
Example 8
The device for screening the subblock images specifically comprises the following steps:
the sub-block image information extraction module inputs a multi-view sub-block stitched image code stream and outputs information of at least one sub-block image, the information including: width information width, for example 128; height information height, for example 64; position information (xp, yp) of the upper left pixel of the sub-block image in the multi-view sub-block stitched image, for example (256, 64); position information (xs, ys) of the upper left pixel of the sub-block image in the source viewpoint image, for example (0, 0); and camera parameters and depth parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters include camera orientation, camera position coordinates, and camera intrinsic parameters, and the depth parameters include a nearest depth z_near of the image, for example 0.1 meter, and a farthest depth z_far, for example 0.2 meter;
the subblock image extracting module is used for inputting a multi-viewpoint subblock spliced image code stream and subblock image information, outputting at least one subblock image, and extracting a subblock image with width and height from the (xp, yp) position of the multi-viewpoint subblock spliced image, wherein the subblock image corresponds to a subblock image with width and height from the (xs, ys) position in the source viewpoint image;
and the sub-block image judgment module inputs the sub-block image information, the sub-block image, and the target image information, with the code stream information being optional, and outputs the partial target image obtained by rendering the pixels in the sub-block image. For the target image, its relevant information is acquired, including: width information width_o, for example 2048; height information height_o, for example 2048; and the camera parameters of the target image. For the sub-block image, the depth coverage of its four boundary vertices is determined, including the representative depth parameters of the sub-block image, z_near_new and z_far_new, where z_near_new is equal to the minimum depth value of the depth range within the sub-block image, for example 0.13 meter, and z_far_new is equal to the maximum depth value of the depth range within the sub-block image, for example 0.18 meter; the nearest depth and the farthest depth of the sub-block image can be obtained by decoding directly from the code stream;
each boundary vertex of the sub-block image, combined with the nearest depth value z_near_new and the farthest depth value z_far_new of the depth coverage, yields two spatial representative points; the four boundary vertices (aj, bj), where j is an integer from 0 to 3, are as follows:
(a0,b0)=(xs,ys)=(0,0),
(a1,b1)=(xs+width,ys)=(128,0),
(a2,b2)=(xs,ys+height)=(0,64),
(a3,b3)=(xs+width,ys+height)=(128,64).
corresponding to eight spatial representative points (xi, yi, zi), where i is an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_near_new)=(0,0,0.13),
(x1,y1,z1)=(a1,b1,z_near_new)=(128,0,0.13),
(x2,y2,z2)=(a2,b2,z_near_new)=(0,64,0.13),
(x3,y3,z3)=(a3,b3,z_near_new)=(128,64,0.13),
(x4,y4,z4)=(a0,b0,z_far_new)=(0,0,0.18),
(x5,y5,z5)=(a1,b1,z_far_new)=(128,0,0.18),
(x6,y6,z6)=(a2,b2,z_far_new)=(0,64,0.18),
(x7,y7,z7)=(a3,b3,z_far_new)=(128,64,0.18),
projecting the eight spatial representative points onto the target image plane using the camera parameter relationship between the source viewpoint image and the target image, the projected coordinates being (xoi, yoi), where i is an integer from 0 to 7 and the origin (0, 0) of the target image position coordinates is the upper left corner of the image; whether the sub-block image is used to generate the target image is pre-judged, with the judgment conditions as follows:
the eight spatial representative points corresponding to the sub-block image are projected to coordinate positions (xoi, yoi) on the plane of the target viewpoint image; xomin and xomax are respectively the minimum and maximum of the eight abscissas xoi, and yomin and yomax are respectively the minimum and maximum of the eight ordinates yoi; if any of the following conditions is satisfied:
(1)xomax<0
(2)xomin>width_o
(3)yomax<0
(4)yomin>height_o,
for example, if xomax is -2, or xomin is 2050, or yomax is -30, or yomin is 3000, the sub-block image is determined to have no area overlap with the target image, and the sub-block image is not used for subsequent rendering of the target image; otherwise, the sub-block image is used for subsequent rendering of the target image.
Example 9
A method for screening subblock images specifically comprises the following steps:
for at least one sub-block image in the multi-view sub-block stitched image, acquiring information of the sub-block image from the code stream, including: width information width, height information height, position information (xp, yp) of the upper left pixel of the sub-block image in the multi-view sub-block stitched image, position information (xs, ys) of the upper left pixel of the sub-block image in the source viewpoint image, and camera parameters and depth parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters include camera orientation, camera position coordinates, and camera intrinsic parameters, and the depth parameters include a nearest depth z_near and a farthest depth z_far of the image;
acquiring a multi-viewpoint sub-block spliced image from the code stream, and extracting a sub-block image with width and height from the (xp, yp) position of the multi-viewpoint sub-block spliced image according to the information of the sub-block image, wherein the sub-block image corresponds to the sub-block image with width and height from the (xs, ys) position in the source viewpoint image;
for a target image, acquiring its relevant information, including: width information width_o, height information height_o, and the camera parameters of the target image; and for the sub-block image, determining the depth coverage of its four boundary vertices, including the representative depth parameters of the sub-block image, z_near_new and z_far_new, where z_near_new is equal to the nearest depth z_near of the source viewpoint image and z_far_new is equal to the farthest depth z_far of the source viewpoint image,
each representative point of the sub-block image, combined with the nearest depth value z_near_new and the farthest depth value z_far_new of the depth coverage, yields two spatial representative points; the four boundary vertices (aj, bj), j being an integer from 0 to 3, are as follows:
(a0,b0)=(xs,ys),
(a1,b1)=(xs+width,ys),
(a2,b2)=(xs,ys+height),
(a3,b3)=(xs+width,ys+height).
corresponding to eight spatial representative points (xi, yi, zi), i being an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_far_new),
(x1,y1,z1)=(a1,b1,z_far_new),
(x2,y2,z2)=(a2,b2,z_far_new),
(x3,y3,z3)=(a3,b3,z_far_new),
(x4,y4,z4)=(a0,b0,z_near_new),
(x5,y5,z5)=(a1,b1,z_near_new),
(x6,y6,z6)=(a2,b2,z_near_new),
(x7,y7,z7)=(a3,b3,z_near_new),
projecting the eight spatial representative points onto the plane of the target image by using the camera parameter relation between the source viewpoint image and the target image, the projected coordinates being (xoi, yoi), i being an integer from 0 to 7, where the origin coordinate (0,0) of the target image position coordinates is the upper left corner of the image; whether the sub-block image is used for generating the target image is then pre-judged under the following conditions:
the eight spatial representative points corresponding to the sub-block image are projected to coordinate positions (xoi, yoi) on the plane of the target viewpoint image; xomin and xomax are respectively the minimum and maximum values of the eight abscissas xoi, and yomin and yomax are respectively the minimum and maximum values of the eight ordinates yoi; if any of the following conditions is satisfied:
(1)xomax<0
(2)xomin>width_o
(3)yomax<0
(4)yomin>height_o,
then it is determined that the sub-block image has no region overlap with the target image, and the sub-block image is not used for the subsequent rendering of the target image; otherwise the sub-block image is used for the subsequent rendering of the target image.
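The pre-judgment procedure of this example can be sketched as follows. This is a minimal illustration, not the patented implementation; `project_to_target` is a hypothetical callable standing in for the camera-parameter projection from the source viewpoint onto the target image plane:

```python
def screen_subblock(xs, ys, width, height, z_near_new, z_far_new,
                    project_to_target, width_o, height_o):
    """Pre-judge whether a sub-block image may overlap the target image.

    project_to_target(x, y, z) -> (xo, yo) is assumed to map a source
    pixel (x, y) at depth z to target-image coordinates, using the
    camera parameters of the source viewpoint and target images.
    Origin (0, 0) is taken as the upper left corner of the target image.
    """
    # Four boundary vertices (aj, bj) of the sub-block image.
    vertices = [(xs, ys), (xs + width, ys),
                (xs, ys + height), (xs + width, ys + height)]
    # Eight spatial representative points: each vertex paired with the
    # farthest and nearest representative depths.
    points = [(a, b, z) for z in (z_far_new, z_near_new) for (a, b) in vertices]
    projected = [project_to_target(x, y, z) for (x, y, z) in points]
    xo = [p[0] for p in projected]
    yo = [p[1] for p in projected]
    # Conditions (1)-(4): projected bounding box entirely outside the target.
    if max(xo) < 0 or min(xo) > width_o or max(yo) < 0 or min(yo) > height_o:
        return False   # no region overlap: skip this sub-block
    return True        # possible overlap: keep for rendering
```

A sub-block returning False is simply excluded from the subsequent rendering of the target image, which is what produces the pixel savings reported in the tables below.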
The above method is applied to the panoramic video TechnicolorMuseum from the MPEG test sequences, and windows rendered along different pose traces are tested; the results are as follows, the values in the tables being averages over 32 frames.
TABLE 1. Posetrace1 pixel saving rate

|                                      | 0-31 frames | 32-63 frames | 64-96 frames |
|--------------------------------------|-------------|--------------|--------------|
| Pixels remaining after pre-screening | 2868224     | 2915776      | 2918528      |
| Total pixels                         | 3098112     | 3047424      | 3047424      |
| Saving rate                          | 7.42%       | 4.32%        | 4.23%        |
TABLE 2. Posetrace2 pixel saving rate

|                                      | 0-31 frames | 32-63 frames | 64-96 frames |
|--------------------------------------|-------------|--------------|--------------|
| Pixels remaining after pre-screening | 2977600     | 2844224      | 2837248      |
| Total pixels                         | 3098112     | 3047424      | 3047424      |
| Saving rate                          | 3.89%       | 6.67%        | 6.90%        |
Example 10
A method for screening subblock images specifically comprises the following steps:
for at least one sub-block image in the multi-viewpoint sub-block stitched image, acquiring the information of the sub-block image from the code stream, the information comprising: width information width, height information height, position information (xp, yp) of the upper left pixel of the sub-block image in the multi-viewpoint sub-block stitched image, position information (xs, ys) of the upper left pixel of the sub-block image in the source viewpoint image, and camera parameters and depth parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters comprise the camera orientation, camera position coordinates and camera internal parameters, and the depth parameters comprise the nearest depth z_near and the farthest depth z_far of the image;
acquiring a multi-viewpoint sub-block spliced image from the code stream, and extracting a sub-block image with width and height from the (xp, yp) position of the multi-viewpoint sub-block spliced image according to the information of the sub-block image, wherein the sub-block image corresponds to the sub-block image with width and height from the (xs, ys) position in the source viewpoint image;
for a target image, acquiring its related information, including: width information width_o, height information height_o, and camera parameters of the target image; and determining, for the sub-block image, representative depth parameters that bound the depth coverage of its four boundary vertices: z_near_new and z_far_new, where z_near_new is equal to the nearest depth of the sub-block image and z_far_new is equal to the farthest depth of the sub-block image;
each representative point of the sub-block image, combined with the nearest depth value z_near_new and the farthest depth value z_far_new of the depth coverage, yields two spatial representative points; the four boundary vertices (aj, bj), j being an integer from 0 to 3, are as follows:
(a0,b0)=(xs,ys),
(a1,b1)=(xs,ys+height),
(a2,b2)=(xs+width,ys),
(a3,b3)=(xs+width,ys+height).
corresponding to eight spatial representative points (xi, yi, zi), i being an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_far_new),
(x1,y1,z1)=(a1,b1,z_far_new),
(x2,y2,z2)=(a2,b2,z_far_new),
(x3,y3,z3)=(a3,b3,z_far_new),
(x4,y4,z4)=(a0,b0,z_near_new),
(x5,y5,z5)=(a1,b1,z_near_new),
(x6,y6,z6)=(a2,b2,z_near_new),
(x7,y7,z7)=(a3,b3,z_near_new),
projecting the eight spatial representative points onto the plane of the target image by using the camera parameter relation between the source viewpoint image and the target image, the projected coordinates being (xoi, yoi), i being an integer from 0 to 7, where the origin coordinate (0,0) of the target image position coordinates is the upper left corner of the image; whether the sub-block image is used for generating the target image is then pre-judged under the following conditions:
the eight spatial representative points corresponding to the sub-block image are projected to coordinate positions (xoi, yoi) on the plane of the target viewpoint image; xomin and xomax are respectively the minimum and maximum values of the eight abscissas xoi, and yomin and yomax are respectively the minimum and maximum values of the eight ordinates yoi; if any of the following conditions is satisfied:
(1)xomax<0
(2)xomin>width_o
(3)yomax<0
(4)yomin>height_o,
then it is determined that the sub-block image has no region overlap with the target image, and the sub-block image is not used for the subsequent rendering of the target image; otherwise the sub-block image is used for the subsequent rendering of the target image.
The above method is applied to the panoramic video TechnicolorMuseum from the MPEG test sequences, and windows rendered along different pose traces are tested; the results are as follows, the values in the tables being averages over 32 frames.
TABLE 3. Posetrace1 pixel saving rate

|                                      | 0-31 frames | 32-63 frames | 64-96 frames |
|--------------------------------------|-------------|--------------|--------------|
| Pixels remaining after pre-screening | 2777586     | 2826592      | 2855040      |
| Total pixels                         | 3098112     | 3047424      | 3047424      |
| Saving rate                          | 10.3%       | 8.13%        | 6.31%        |
TABLE 4. Posetrace2 pixel saving rate

|                                      | 0-31 frames | 32-63 frames | 64-96 frames |
|--------------------------------------|-------------|--------------|--------------|
| Pixels remaining after pre-screening | 2873856     | 2844224      | 2837248      |
| Total pixels                         | 3098112     | 3047424      | 3047424      |
| Saving rate                          | 7.24%       | 6.67%        | 6.90%        |
Example 11
A method for screening subblock images specifically comprises the following steps:
for at least one sub-block image in the multi-viewpoint sub-block stitched image, acquiring, from the code stream, width information width and height information height of the sub-block image, position information of the sub-block image in the multi-viewpoint sub-block stitched image, position information of the sub-block image in the source viewpoint image, and camera parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters comprise the camera orientation, camera position coordinates and camera internal parameters; the position information of the sub-block image in the multi-viewpoint sub-block stitched image and in the source viewpoint image can be determined from any pixel position of the sub-block image together with its corresponding positions in the two images, for example the position information of the pixel at one position of the sub-block image, such as its upper left, upper right, lower left or lower right corner, in the multi-viewpoint sub-block stitched image;
acquiring width information width _ o, height information height _ o and camera parameters of a target image;
obtaining two depth parameters of the sub-block image: z _ near _ new and z _ far _ new, wherein the relationship between the two parameter values satisfies that z _ near _ new is less than or equal to z _ far _ new;
using the camera parameters of the source viewpoint image and the camera parameters of the target image, four boundary vertices (aj, bj), j being an integer from 0 to 3, are determined as follows:
(a0,b0)=(xs,ys),
(a1,b1)=(xs,ys+height),
(a2,b2)=(xs+width,ys),
(a3,b3)=(xs+width,ys+height).
corresponding to eight spatial representative points (xi, yi, zi), i being an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_far_new),
(x1,y1,z1)=(a1,b1,z_far_new),
(x2,y2,z2)=(a2,b2,z_far_new),
(x3,y3,z3)=(a3,b3,z_far_new),
(x4,y4,z4)=(a0,b0,z_near_new),
(x5,y5,z5)=(a1,b1,z_near_new),
(x6,y6,z6)=(a2,b2,z_near_new),
(x7,y7,z7)=(a3,b3,z_near_new),
projecting the eight spatial representative points onto the plane of the target image by using the camera parameter relation between the source viewpoint image and the target image, the projected coordinates being (xoi, yoi), i being an integer from 0 to 7;
according to the position coordinates (xoi, yoi) of the eight representative points obtained in the target image, whether the sub-block image and the target image have region overlap is pre-judged; if the regions overlap, the pixels in the sub-block image are used for rendering to obtain part of the target image; otherwise the pixels of the sub-block image are not used for rendering to obtain part of the target image.
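The projection step that maps a source pixel at a given depth onto the target image plane can be sketched with a standard pinhole camera model. The patent does not fix a particular camera model, so this is one common choice, with world-to-camera extrinsics of the form x_cam = R @ x_world + t; all function and parameter names here are illustrative:

```python
import numpy as np

def reproject_pixel(u, v, z, K_src, R_src, t_src, K_tgt, R_tgt, t_tgt):
    """Map a source-image pixel (u, v) with depth z to target-image
    coordinates (xo, yo), assuming a pinhole camera model.
    K_* are 3x3 intrinsic matrices; R_*, t_* are world-to-camera
    rotation matrices and translation vectors."""
    # Un-project the pixel to a 3-D point in the source camera frame.
    ray = np.linalg.inv(K_src) @ np.array([u, v, 1.0])
    p_src_cam = ray * z
    # Source camera frame -> world frame -> target camera frame.
    p_world = R_src.T @ (p_src_cam - t_src)
    p_tgt_cam = R_tgt @ p_world + t_tgt
    # Project onto the target image plane (perspective division).
    uvw = K_tgt @ p_tgt_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

Applying this function to the eight spatial representative points yields the eight (xoi, yoi) coordinates used in the overlap pre-judgment.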
Example 12
A method for screening subblock images specifically comprises the following steps:
for at least one sub-block image in the multi-viewpoint sub-block stitched image, acquiring, from the code stream, width information width and height information height of the sub-block image, position information of the sub-block image in the multi-viewpoint sub-block stitched image, position information of the sub-block image in the source viewpoint image, and camera parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters comprise the camera orientation, camera position coordinates and camera internal parameters; the position information of the sub-block image in the multi-viewpoint sub-block stitched image and in the source viewpoint image can be determined from any pixel position of the sub-block image together with its corresponding positions in the two images, for example the position information of the pixel at one position of the sub-block image, such as its upper left, upper right, lower left or lower right corner, in the multi-viewpoint sub-block stitched image;
acquiring width information width _ o, height information height _ o and camera parameters of a target image;
obtaining two depth parameters of the sub-block image: z _ near _ new and z _ far _ new, wherein the relationship between the two parameter values satisfies that z _ near _ new is less than or equal to z _ far _ new;
using the camera parameters of the source viewpoint image and the camera parameters of the target image, four boundary vertices (aj, bj), j being an integer from 0 to 3, are determined as follows:
(a0,b0)=(xs,ys),
(a1,b1)=(xs,ys+height),
(a2,b2)=(xs+width,ys),
(a3,b3)=(xs+width,ys+height).
corresponding to eight spatial representative points (xi, yi, zi), i being an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_far_new),
(x1,y1,z1)=(a1,b1,z_far_new),
(x2,y2,z2)=(a2,b2,z_far_new),
(x3,y3,z3)=(a3,b3,z_far_new),
(x4,y4,z4)=(a0,b0,z_near_new),
(x5,y5,z5)=(a1,b1,z_near_new),
(x6,y6,z6)=(a2,b2,z_near_new),
(x7,y7,z7)=(a3,b3,z_near_new),
projecting the eight spatial representative points onto the plane of the target image by using the camera parameter relation between the source viewpoint image and the target image, the projected coordinates being (xoi, yoi), i being an integer from 0 to 7;
according to the position coordinates (xoi, yoi) of the eight representative points obtained in the target image, whether the sub-block image and the target image have region overlap is pre-judged; if any one of the following conditions is met, it is pre-determined that the sub-block image has no region overlap with the target image:
(1) the eight representative points projected to the target image are all on the left side of the left boundary of the target image;
(2) the eight representative points projected to the target image are all positioned on the right side of the right boundary of the target image;
(3) the eight representative points projected to the target image are all positioned on the upper side of the upper boundary of the target image;
(4) the eight representative points projected to the target image are all positioned below the lower boundary of the target image;
specifically, when the origin coordinate (0,0) of the target image position coordinates is the upper left corner of the image, if any one of the following conditions is satisfied, it is pre-determined that the sub-block image has no region overlap with the target image:
(1)xomax<0;
(2)xomin>width_o;
(3)yomax<0;
(4)yomin>height_o,
wherein xomin is the minimum value of xoi, xomax is the maximum value of xoi, yomin is the minimum value of yoi, and yomax is the maximum value of yoi, i being an integer from 0 to 7; when the origin of the target image position coordinates is at a position other than the upper left corner of the image, the decision formulas can be obtained through similar derivation;
if the regions overlap, the pixels in the sub-block image are used for rendering to obtain part of the target image; otherwise the pixels of the sub-block image are not used for rendering to obtain part of the target image.
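Conditions (1)-(4) above reduce to a bounding-box test on the projected points. A minimal sketch, assuming the origin (0, 0) is at the upper left corner of the target image:

```python
def no_overlap(xo, yo, width_o, height_o):
    """Return True when the eight projected points (xo[i], yo[i])
    satisfy any of conditions (1)-(4), i.e. the sub-block image is
    pre-judged to have no region overlap with the target image."""
    xo_min, xo_max = min(xo), max(xo)
    yo_min, yo_max = min(yo), max(yo)
    return (xo_max < 0             # (1) all points left of the left boundary
            or xo_min > width_o    # (2) all points right of the right boundary
            or yo_max < 0          # (3) all points above the upper boundary
            or yo_min > height_o)  # (4) all points below the lower boundary
```

For a different origin convention, only the four comparisons change, as noted in the text.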
Example 13
A method for screening subblock images specifically comprises the following steps:
for at least one sub-block image in the multi-viewpoint sub-block stitched image, acquiring, from the code stream, width information width and height information height of the sub-block image, position information of the sub-block image in the multi-viewpoint sub-block stitched image, position information of the sub-block image in the source viewpoint image, and camera parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters comprise the camera orientation, camera position coordinates and camera internal parameters; the position information of the sub-block image in the multi-viewpoint sub-block stitched image and in the source viewpoint image can be determined from any pixel position of the sub-block image together with its corresponding positions in the two images, for example the position information of the pixel at one position of the sub-block image, such as its upper left, upper right, lower left or lower right corner, in the multi-viewpoint sub-block stitched image;
acquiring width information width _ o, height information height _ o and camera parameters of a target image;
obtaining two depth parameters of the sub-block image, z_near_new and z_far_new, satisfying z_near_new ≤ z_far_new; the two depth parameters are determined by one of the following methods:
(1) decoding the code stream to obtain a nearest depth value z _ near of the source view image and a farthest depth value z _ far of the source view image, wherein z _ near _ new of the sub-block image is equal to z _ near, and z _ far _ new of the sub-block image is equal to z _ far;
(2) z _ near _ new of the sub-block image is equal to the nearest depth value of the sub-block image, and z _ far _ new of the sub-block image is equal to the farthest depth value of the sub-block image;
using the camera parameters of the source viewpoint image and the camera parameters of the target image, four boundary vertices (aj, bj), j being an integer from 0 to 3, are determined as follows:
(a0,b0)=(xs,ys),
(a1,b1)=(xs,ys+height),
(a2,b2)=(xs+width,ys),
(a3,b3)=(xs+width,ys+height).
corresponding to eight spatial representative points (xi, yi, zi), i being an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_far_new),
(x1,y1,z1)=(a1,b1,z_far_new),
(x2,y2,z2)=(a2,b2,z_far_new),
(x3,y3,z3)=(a3,b3,z_far_new),
(x4,y4,z4)=(a0,b0,z_near_new),
(x5,y5,z5)=(a1,b1,z_near_new),
(x6,y6,z6)=(a2,b2,z_near_new),
(x7,y7,z7)=(a3,b3,z_near_new),
projecting the eight spatial representative points onto the plane of the target image by using the camera parameter relation between the source viewpoint image and the target image, the projected coordinates being (xoi, yoi), i being an integer from 0 to 7;
according to the position coordinates (xoi, yoi) of the eight representative points obtained in the target image, whether the sub-block image and the target image have region overlap is pre-judged; if the regions overlap, the pixels in the sub-block image are used for rendering to obtain part of the target image; otherwise the pixels of the sub-block image are not used for rendering to obtain part of the target image.
example 14
A method for screening subblock images specifically comprises the following steps:
for at least one sub-block image in the multi-viewpoint sub-block stitched image, acquiring, from the code stream, width information width and height information height of the sub-block image, position information of the sub-block image in the multi-viewpoint sub-block stitched image, position information of the sub-block image in the source viewpoint image, and camera parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters comprise the camera orientation, camera position coordinates and camera internal parameters; the position information of the sub-block image in the multi-viewpoint sub-block stitched image and in the source viewpoint image can be determined from any pixel position of the sub-block image together with its corresponding positions in the two images, for example the position information of the pixel at one position of the sub-block image, such as its upper left, upper right, lower left or lower right corner, in the multi-viewpoint sub-block stitched image;
acquiring width information width _ o, height information height _ o and camera parameters of a target image;
obtaining two depth parameters of the sub-block image, z_near_new and z_far_new, satisfying z_near_new ≤ z_far_new; the two depth parameters are determined by one of the following methods:
(1) decoding the code stream to obtain a nearest depth value z _ near of the source view image and a farthest depth value z _ far of the source view image, wherein z _ near _ new of the sub-block image is equal to z _ near, and z _ far _ new of the sub-block image is equal to z _ far;
(2) the z _ near _ new of the sub-block image is equal to the nearest depth value of the sub-block image, the z _ far _ new of the sub-block image is equal to the farthest depth value of the sub-block image, and the nearest depth value of the sub-block image and the farthest depth value of the sub-block image are determined by one of the following methods:
(1) directly decoding from the code stream to obtain the nearest depth value and the farthest depth value of the subblock image;
(2) from the decoded reconstructed depth image: among all pixels of the sub-block image, the depth value of the pixel closest to the source viewpoint to which the sub-block belongs is the nearest depth value of the sub-block, and the depth value of the pixel farthest from that source viewpoint is the farthest depth value of the sub-block;
using the camera parameters of the source viewpoint image and the camera parameters of the target image, four boundary vertices (aj, bj), j being an integer from 0 to 3, are determined as follows:
(a0,b0)=(xs,ys),
(a1,b1)=(xs,ys+height),
(a2,b2)=(xs+width,ys),
(a3,b3)=(xs+width,ys+height).
corresponding to eight spatial representative points (xi, yi, zi), i being an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_far_new),
(x1,y1,z1)=(a1,b1,z_far_new),
(x2,y2,z2)=(a2,b2,z_far_new),
(x3,y3,z3)=(a3,b3,z_far_new),
(x4,y4,z4)=(a0,b0,z_near_new),
(x5,y5,z5)=(a1,b1,z_near_new),
(x6,y6,z6)=(a2,b2,z_near_new),
(x7,y7,z7)=(a3,b3,z_near_new),
projecting the eight spatial representative points onto the plane of the target image by using the camera parameter relation between the source viewpoint image and the target image, the projected coordinates being (xoi, yoi), i being an integer from 0 to 7;
according to the position coordinates (xoi, yoi) of the eight representative points obtained in the target image, whether the sub-block image and the target image have region overlap is pre-judged; if the regions overlap, the pixels in the sub-block image are used for rendering to obtain part of the target image; otherwise the pixels of the sub-block image are not used for rendering to obtain part of the target image.
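Method (2) above for determining the sub-block's depth range can be sketched as follows. This assumes, hypothetically, that the decoded depth image is a 2-D array whose values are distances from the source viewpoint; a real bitstream may instead carry a quantized or inverse-depth representation that must be converted first:

```python
import numpy as np

def subblock_depth_range(depth_image, xs, ys, width, height):
    """Determine z_near_new / z_far_new for a sub-block by taking the
    minimum and maximum depth values over the sub-block's pixel region
    (xs, ys, width, height) in the decoded reconstructed depth image."""
    patch = depth_image[ys:ys + height, xs:xs + width]
    z_near_new = float(patch.min())  # pixel closest to the source viewpoint
    z_far_new = float(patch.max())   # pixel farthest from the source viewpoint
    return z_near_new, z_far_new
```

Using the per-sub-block range instead of the per-view range (method (1)) gives a tighter bounding volume and therefore rejects more non-overlapping sub-blocks, consistent with the larger saving rates of Tables 3 and 4 versus Tables 1 and 2.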
Example 15
A method for screening subblock images specifically comprises the following steps:
for at least one sub-block image in the multi-viewpoint sub-block stitched image, acquiring, from the code stream, width information width and height information height of the sub-block image, position information of the sub-block image in the multi-viewpoint sub-block stitched image, position information of the sub-block image in the source viewpoint image, and camera parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters comprise the camera orientation, camera position coordinates and camera internal parameters; the position information of the sub-block image in the multi-viewpoint sub-block stitched image and in the source viewpoint image can be determined from any pixel position of the sub-block image together with its corresponding positions in the two images, for example the position information of the pixel at one position of the sub-block image, such as its upper left, upper right, lower left or lower right corner, in the multi-viewpoint sub-block stitched image;
acquiring width information width _ o, height information height _ o and camera parameters of a target image;
obtaining two depth parameters of the sub-block image, z_near_new and z_far_new, satisfying z_near_new ≤ z_far_new; the two depth parameters are determined by one of the following methods:
(1) decoding the code stream to obtain a nearest depth value z _ near of the source view image and a farthest depth value z _ far of the source view image, wherein z _ near _ new of the sub-block image is equal to z _ near, and z _ far _ new of the sub-block image is equal to z _ far;
(2) the z _ near _ new of the sub-block image is equal to the nearest depth value of the sub-block image, the z _ far _ new of the sub-block image is equal to the farthest depth value of the sub-block image, and the nearest depth value of the sub-block image and the farthest depth value of the sub-block image are determined by one of the following methods:
(1) directly decoding from the code stream to obtain the nearest depth value and the farthest depth value of the subblock image;
(2) from the decoded reconstructed depth image: among all pixels of the sub-block image, the depth value of the pixel closest to the source viewpoint to which the sub-block belongs is the nearest depth value of the sub-block, and the depth value of the pixel farthest from that source viewpoint is the farthest depth value of the sub-block;
using the camera parameters of the source viewpoint image and the camera parameters of the target image, four boundary vertices (aj, bj), j being an integer from 0 to 3, are determined as follows:
(a0,b0)=(xs,ys),
(a1,b1)=(xs,ys+height),
(a2,b2)=(xs+width,ys),
(a3,b3)=(xs+width,ys+height).
corresponding to eight spatial representative points (xi, yi, zi), i being an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_far_new),
(x1,y1,z1)=(a1,b1,z_far_new),
(x2,y2,z2)=(a2,b2,z_far_new),
(x3,y3,z3)=(a3,b3,z_far_new),
(x4,y4,z4)=(a0,b0,z_near_new),
(x5,y5,z5)=(a1,b1,z_near_new),
(x6,y6,z6)=(a2,b2,z_near_new),
(x7,y7,z7)=(a3,b3,z_near_new),
projecting the eight spatial representative points onto the plane of the target image by using the camera parameter relation between the source viewpoint image and the target image, the projected coordinates being (xoi, yoi), i being an integer from 0 to 7;
according to the position coordinates (xoi, yoi) of the eight representative points obtained in the target image, whether the sub-block image and the target image have region overlap is pre-judged; if any one of the following conditions is met, it is pre-determined that the sub-block image has no region overlap with the target image:
(1) the eight representative points projected to the target image are all on the left side of the left boundary of the target image;
(2) the eight representative points projected to the target image are all positioned on the right side of the right boundary of the target image;
(3) the eight representative points projected to the target image are all positioned on the upper side of the upper boundary of the target image;
(4) the eight representative points projected to the target image are all positioned below the lower boundary of the target image;
specifically, when the origin coordinate (0,0) of the target image position coordinates is the upper left corner of the image, if any one of the following conditions is satisfied, it is pre-determined that the sub-block image has no region overlap with the target image:
(1)xomax<0;
(2)xomin>width_o;
(3)yomax<0;
(4)yomin>height_o;
wherein xomin is the minimum value of xoi, xomax is the maximum value of xoi, yomin is the minimum value of yoi, and yomax is the maximum value of yoi, i being an integer from 0 to 7; when the origin of the target image position coordinates is at a position other than the upper left corner of the image, the decision formulas can be obtained through similar derivation;
if the regions overlap, the pixels in the sub-block image are used for rendering to obtain part of the target image; otherwise the pixels of the sub-block image are not used for rendering to obtain part of the target image.
Example 16
The device for screening the subblock images specifically comprises the following steps:
the sub-block image information extraction module inputs a multi-viewpoint sub-block stitched image code stream and outputs information of at least one sub-block image, the information comprising: width information width and height information height of the sub-block image, position information of the sub-block image in the multi-viewpoint sub-block stitched image, position information of the sub-block image in the source viewpoint image, and camera parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters comprise the camera orientation, camera position coordinates and camera internal parameters; the position information of the sub-block image in the multi-viewpoint sub-block stitched image and in the source viewpoint image can be determined from any pixel position of the sub-block image together with its corresponding positions in the two images, for example the position information of the pixel at one position of the sub-block image, such as its upper left, upper right, lower left or lower right corner, in the multi-viewpoint sub-block stitched image;
the sub-block image judging module inputs the information of the sub-block image, the related information of the target image, and the representative depth parameters of the sub-block image, and outputs whether the pixels in the sub-block image are used for rendering to obtain part of the target image. For the target image, its related information is acquired, including: width information width_o, height information height_o, and camera parameters; two depth parameters of the sub-block image are obtained, z_near_new and z_far_new, satisfying z_near_new ≤ z_far_new; using the camera parameters of the source viewpoint image and the camera parameters of the target image, four boundary vertices (aj, bj), j being an integer from 0 to 3, are determined as follows:
(a0,b0)=(xs,ys),
(a1,b1)=(xs,ys+height),
(a2,b2)=(xs+width,ys),
(a3,b3)=(xs+width,ys+height).
corresponding to eight spatial representative points (xi, yi, zi), i being an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_far_new),
(x1,y1,z1)=(a1,b1,z_far_new),
(x2,y2,z2)=(a2,b2,z_far_new),
(x3,y3,z3)=(a3,b3,z_far_new),
(x4,y4,z4)=(a0,b0,z_near_new),
(x5,y5,z5)=(a1,b1,z_near_new),
(x6,y6,z6)=(a2,b2,z_near_new),
(x7,y7,z7)=(a3,b3,z_near_new),
projecting the eight spatial representative points onto the plane of the target image by using the camera parameter relation between the source viewpoint image and the target image, the projected coordinates being (xoi, yoi), i being an integer from 0 to 7;
according to the position coordinates (xoi, yoi) of the eight representative points obtained in the target image, whether the sub-block image and the target image have region overlap is pre-judged; if the regions overlap, the pixels in the sub-block image are used for rendering to obtain part of the target image; otherwise the pixels of the sub-block image are not used for rendering to obtain part of the target image.
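The two modules of this device can be sketched as classes. Class names and record fields are hypothetical, the bitstream parsing is stubbed out, and `project_to_target` stands in for the camera-parameter projection between the source viewpoint and the target image:

```python
class SubblockInfoExtractor:
    """Sub-block image information extraction module: takes the
    multi-viewpoint sub-block stitched image bitstream and yields,
    per sub-block, its width/height, its positions in the stitched
    and source viewpoint images, and camera parameters. Parsing is
    stubbed out: a real decoder would supply these fields."""
    def __init__(self, bitstream):
        self.bitstream = bitstream
    def extract(self):
        # Hypothetical pre-parsed records standing in for decoding.
        return list(self.bitstream)

class SubblockJudge:
    """Sub-block image judging module: decides per sub-block whether
    its pixels are used to render part of the target image."""
    def __init__(self, width_o, height_o, project_to_target):
        self.width_o = width_o
        self.height_o = height_o
        self.project = project_to_target  # maps (x, y, z) -> (xo, yo)

    def keep(self, info):
        xs, ys = info["xs"], info["ys"]
        w, h = info["width"], info["height"]
        # Four boundary vertices, each paired with both representative depths.
        vertices = [(xs, ys), (xs + w, ys), (xs, ys + h), (xs + w, ys + h)]
        pts = [self.project(a, b, z)
               for z in (info["z_far_new"], info["z_near_new"])
               for (a, b) in vertices]
        xo = [p[0] for p in pts]
        yo = [p[1] for p in pts]
        # Keep the sub-block unless its projected bounding box falls
        # entirely outside the target image.
        return not (max(xo) < 0 or min(xo) > self.width_o
                    or max(yo) < 0 or min(yo) > self.height_o)
```

In use, the extractor's records are fed one by one to the judge, and only sub-blocks for which `keep` returns True participate in rendering the target image.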
Example 17
The device for screening the subblock images specifically comprises the following steps:
the sub-block image information extraction module inputs a multi-viewpoint sub-block stitched image code stream and outputs information of at least one sub-block image, the information comprising: width information width and height information height of the sub-block image, position information of the sub-block image in the multi-viewpoint sub-block stitched image, position information of the sub-block image in the source viewpoint image, and camera parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters comprise the camera orientation, camera position coordinates and camera internal parameters; the position information of the sub-block image in the multi-viewpoint sub-block stitched image and in the source viewpoint image can be determined from any pixel position of the sub-block image together with its corresponding positions in the two images, for example the position information of the pixel at one position of the sub-block image, such as its upper left, upper right, lower left or lower right corner, in the multi-viewpoint sub-block stitched image;
and the sub-block image judging module inputs the information of the sub-block image, the related information of the target image, and the representative depth parameters of the sub-block image, and outputs whether the pixels in the sub-block image are used for rendering to obtain part of the target image. For the target image, its related information is acquired, including: width information width_o, height information height_o, and camera parameters; two depth parameters of the sub-block image are obtained, z_near_new and z_far_new, satisfying z_near_new ≤ z_far_new; using the camera parameters of the source viewpoint image and the camera parameters of the target image, four boundary vertices (aj, bj), j being an integer from 0 to 3, are determined as follows:
(a0,b0)=(xs,ys),
(a1,b1)=(xs,ys+height),
(a2,b2)=(xs+width,ys),
(a3,b3)=(xs+width,ys+height).
corresponding to eight spatial representative points (xi, yi, zi), where i is an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_far_new),
(x1,y1,z1)=(a1,b1,z_far_new),
(x2,y2,z2)=(a2,b2,z_far_new),
(x3,y3,z3)=(a3,b3,z_far_new),
(x4,y4,z4)=(a0,b0,z_near_new),
(x5,y5,z5)=(a1,b1,z_near_new),
(x6,y6,z6)=(a2,b2,z_near_new),
(x7,y7,z7)=(a3,b3,z_near_new),
the eight spatial representative points are projected onto the target image using the camera parameter relationship between the source viewpoint image and the target image, the projected coordinates being (xoi, yoi), where i is an integer from 0 to 7,
from the obtained position coordinates (xoi, yoi) of the eight representative points in the target image, whether the sub-block image and the target image overlap in region is judged in advance; if any one of the following conditions is met, it is judged in advance that the sub-block image and the target image do not overlap in region:
(1) the eight representative points projected to the target image are all on the left side of the left boundary of the target image;
(2) the eight representative points projected to the target image are all positioned on the right side of the right boundary of the target image;
(3) the eight representative points projected to the target image are all positioned on the upper side of the upper boundary of the target image;
(4) the eight representative points projected to the target image are all positioned below the lower boundary of the target image;
specifically, when the origin (0,0) of the position coordinates of the target image is at the upper-left corner of the image, if any one of the following conditions is satisfied, it is judged in advance that no region overlap exists between the sub-block image and the target image:
(1)xomax<0;
(2)xomin>width_o;
(3)yomax<0;
(4)yomin>height_o,
where xomin is the minimum of xoi, xomax is the maximum of xoi, yomin is the minimum of yoi, and yomax is the maximum of yoi, with i an integer from 0 to 7; when the origin of the position coordinates of the target image is at a position other than the upper-left corner of the image, a corresponding decision formula can be obtained by similar derivation;
if the regions overlap, pixels in the sub-block image are used in rendering to obtain part of the target image; otherwise, pixels of the sub-block image are not used in rendering to obtain part of the target image.
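The pre-screening test above can be sketched as follows (a minimal illustration, assuming the target-image origin (0,0) is at the upper-left corner; the function name and the point representation are hypothetical, and the projection of the eight spatial representative points is assumed to have been done already):

```python
def may_overlap(points, width_o, height_o):
    """Conservative pre-screen, conditions (1)-(4) above.

    points: the eight projected coordinates (xo_i, yo_i) of the spatial
    representative points on the target image plane.

    Returns False only when the sub-block image certainly does not overlap
    the target image region; True means it may overlap, so its pixels are
    used for rendering.
    """
    xo_min = min(x for x, _ in points)
    xo_max = max(x for x, _ in points)
    yo_min = min(y for _, y in points)
    yo_max = max(y for _, y in points)
    # (1) all points left of the left boundary, (2) all right of the right
    # boundary, (3) all above the upper boundary, (4) all below the lower one.
    if xo_max < 0 or xo_min > width_o or yo_max < 0 or yo_min > height_o:
        return False
    return True
```

Because the test only compares the bounding box of the projected points against the target image boundaries, it can never falsely discard a sub-block that does overlap, but it may keep one that does not.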
Example 18
The device for screening sub-block images specifically comprises the following modules:
the sub-block image information extraction module takes a multi-view sub-block stitched image code stream as input and outputs information of at least one sub-block image, including: width information width and height information height of the sub-block image, position information of the sub-block image in the multi-view sub-block stitched image, position information of the sub-block image in the source viewpoint image, and camera parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters comprise the camera orientation, the camera position coordinates and the camera internal parameters; the position information of the sub-block image in the multi-view sub-block stitched image and in the source viewpoint image can be determined by any pixel position of the sub-block image and its corresponding positions in the multi-view sub-block stitched image and the source viewpoint image, for example the position of the pixel at any 1 of the upper-left, upper-right, lower-left and lower-right positions of the sub-block image in the multi-view sub-block stitched image;
and the sub-block image judging module takes as input the information of the sub-block image, the relevant information of the target image and the representative depth parameters of the sub-block image, and outputs information on whether the pixels in the sub-block image are used for rendering. For a target image, its relevant information is acquired, including: width information width_o, height information height_o, and camera parameters; two depth parameters of the sub-block image are obtained, z_near_new and z_far_new, whose values satisfy z_near_new ≤ z_far_new; the two depth parameters z_near_new and z_far_new of the sub-block image are determined by one of the following methods:
(1) decoding the code stream to obtain a nearest depth value z _ near of the source view image and a farthest depth value z _ far of the source view image, wherein z _ near _ new of the sub-block image is equal to z _ near, and z _ far _ new of the sub-block image is equal to z _ far;
(2) z _ near _ new of the sub-block image is equal to the nearest depth value of the sub-block image, and z _ far _ new of the sub-block image is equal to the farthest depth value of the sub-block image;
using the camera parameters of the source viewpoint image and the camera parameters of the target image, the four boundary vertices (aj, bj), where j is an integer from 0 to 3, are taken as follows:
(a0,b0)=(xs,ys),
(a1,b1)=(xs,ys+height),
(a2,b2)=(xs+width,ys),
(a3,b3)=(xs+width,ys+height).
corresponding to eight spatial representative points (xi, yi, zi), where i is an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_far_new),
(x1,y1,z1)=(a1,b1,z_far_new),
(x2,y2,z2)=(a2,b2,z_far_new),
(x3,y3,z3)=(a3,b3,z_far_new),
(x4,y4,z4)=(a0,b0,z_near_new),
(x5,y5,z5)=(a1,b1,z_near_new),
(x6,y6,z6)=(a2,b2,z_near_new),
(x7,y7,z7)=(a3,b3,z_near_new),
the eight spatial representative points are projected onto the target image using the camera parameter relationship between the source viewpoint image and the target image, the projected coordinates being (xoi, yoi), where i is an integer from 0 to 7; from the obtained position coordinates (xoi, yoi) of the eight representative points in the target image, whether the sub-block image and the target image overlap in region is judged in advance; if the regions overlap, pixels in the sub-block image are used in rendering to obtain part of the target image, otherwise pixels of the sub-block image are not used in rendering to obtain part of the target image.
Example 19
The device for screening sub-block images specifically comprises the following modules:
the sub-block image information extraction module takes a multi-view sub-block stitched image code stream as input and outputs information of at least one sub-block image, including: width information width and height information height of the sub-block image, position information of the sub-block image in the multi-view sub-block stitched image, position information of the sub-block image in the source viewpoint image, and camera parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters comprise the camera orientation, the camera position coordinates and the camera internal parameters; the position information of the sub-block image in the multi-view sub-block stitched image and in the source viewpoint image can be determined by any pixel position of the sub-block image and its corresponding positions in the multi-view sub-block stitched image and the source viewpoint image, for example the position of the pixel at any 1 of the upper-left, upper-right, lower-left and lower-right positions of the sub-block image in the multi-view sub-block stitched image;
and the sub-block image judging module takes as input the information of the sub-block image, the relevant information of the target image and the representative depth parameters of the sub-block image, and outputs information on whether the pixels in the sub-block image are used for rendering. For a target image, its relevant information is acquired, including: width information width_o, height information height_o, and camera parameters; two depth parameters of the sub-block image are obtained, z_near_new and z_far_new, whose values satisfy z_near_new ≤ z_far_new; the two depth parameters z_near_new and z_far_new of the sub-block image are determined by one of the following methods:
(1) decoding the code stream to obtain a nearest depth value z _ near of the source view image and a farthest depth value z _ far of the source view image, wherein z _ near _ new of the sub-block image is equal to z _ near, and z _ far _ new of the sub-block image is equal to z _ far;
(2) the z _ near _ new of the sub-block image is equal to the nearest depth value of the sub-block image, the z _ far _ new of the sub-block image is equal to the farthest depth value of the sub-block image, and the nearest depth value of the sub-block image and the farthest depth value of the sub-block image are determined by one of the following methods:
(1) directly decoding from the code stream to obtain the nearest depth value and the farthest depth value of the subblock image;
(2) in the decoded reconstructed depth image, the depth value of the pixel closest to the source viewpoint of the sub-block among all pixels of the sub-block image is the nearest depth value of the sub-block, and the depth value of the pixel farthest from the source viewpoint of the sub-block among all pixels of the sub-block image is the farthest depth value of the sub-block;
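Method (2) above can be sketched as follows (a minimal illustration only; the use of numpy, the row-major depth-image layout, and the function name are assumptions not stated in the text):

```python
import numpy as np

def subblock_depth_range(depth_image, xs, ys, width, height):
    """Return (z_near_new, z_far_new) for a sub-block, taken as the minimum
    and maximum depth values over all pixels of the sub-block in the decoded
    reconstructed depth image (indexed depth_image[row, column])."""
    block = depth_image[ys:ys + height, xs:xs + width]
    return float(block.min()), float(block.max())
```

Using the per-sub-block range instead of the whole-image z_near / z_far of method (1) tightens the depth coverage and therefore the projected bounding box used in the overlap pre-judgment.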
using the camera parameters of the source viewpoint image and the camera parameters of the target image, the four boundary vertices (aj, bj), where j is an integer from 0 to 3, are taken as follows:
(a0,b0)=(xs,ys),
(a1,b1)=(xs,ys+height),
(a2,b2)=(xs+width,ys),
(a3,b3)=(xs+width,ys+height).
corresponding to eight spatial representative points (xi, yi, zi), where i is an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_far_new),
(x1,y1,z1)=(a1,b1,z_far_new),
(x2,y2,z2)=(a2,b2,z_far_new),
(x3,y3,z3)=(a3,b3,z_far_new),
(x4,y4,z4)=(a0,b0,z_near_new),
(x5,y5,z5)=(a1,b1,z_near_new),
(x6,y6,z6)=(a2,b2,z_near_new),
(x7,y7,z7)=(a3,b3,z_near_new),
the eight spatial representative points are projected onto the target image using the camera parameter relationship between the source viewpoint image and the target image, the projected coordinates being (xoi, yoi), where i is an integer from 0 to 7; from the obtained position coordinates (xoi, yoi) of the eight representative points in the target image, whether the sub-block image and the target image overlap in region is judged in advance; if the regions overlap, pixels in the sub-block image are used in rendering to obtain part of the target image, otherwise pixels of the sub-block image are not used in rendering to obtain part of the target image.
Example 20
The device for screening sub-block images specifically comprises the following modules:
the sub-block image information extraction module takes a multi-view sub-block stitched image code stream as input and outputs information of at least one sub-block image, including: width information width and height information height of the sub-block image, position information of the sub-block image in the multi-view sub-block stitched image, position information of the sub-block image in the source viewpoint image, and camera parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters comprise the camera orientation, the camera position coordinates and the camera internal parameters; the position information of the sub-block image in the multi-view sub-block stitched image and in the source viewpoint image can be determined by any pixel position of the sub-block image and its corresponding positions in the multi-view sub-block stitched image and the source viewpoint image, for example the position of the pixel at any 1 of the upper-left, upper-right, lower-left and lower-right positions of the sub-block image in the multi-view sub-block stitched image;
and the sub-block image judging module takes as input the information of the sub-block image, the relevant information of the target image and the representative depth parameters of the sub-block image, and outputs information on whether the pixels in the sub-block image are used for rendering. For a target image, its relevant information is acquired, including: width information width_o, height information height_o, and camera parameters; two depth parameters of the sub-block image are obtained, z_near_new and z_far_new, whose values satisfy z_near_new ≤ z_far_new; the two depth parameters z_near_new and z_far_new of the sub-block image are determined by one of the following methods:
(1) decoding the code stream to obtain a nearest depth value z _ near of the source view image and a farthest depth value z _ far of the source view image, wherein z _ near _ new of the sub-block image is equal to z _ near, and z _ far _ new of the sub-block image is equal to z _ far;
(2) the z _ near _ new of the sub-block image is equal to the nearest depth value of the sub-block image, the z _ far _ new of the sub-block image is equal to the farthest depth value of the sub-block image, and the nearest depth value of the sub-block image and the farthest depth value of the sub-block image are determined by one of the following methods:
(1) directly decoding from the code stream to obtain the nearest depth value and the farthest depth value of the subblock image;
(2) in the decoded reconstructed depth image, the depth value of the pixel closest to the source viewpoint of the sub-block among all pixels of the sub-block image is the nearest depth value of the sub-block, and the depth value of the pixel farthest from the source viewpoint of the sub-block among all pixels of the sub-block image is the farthest depth value of the sub-block;
using the camera parameters of the source viewpoint image and the camera parameters of the target image, the four boundary vertices (aj, bj), where j is an integer from 0 to 3, are taken as follows:
(a0,b0)=(xs,ys),
(a1,b1)=(xs,ys+height),
(a2,b2)=(xs+width,ys),
(a3,b3)=(xs+width,ys+height).
corresponding to eight spatial representative points (xi, yi, zi), where i is an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_far_new),
(x1,y1,z1)=(a1,b1,z_far_new),
(x2,y2,z2)=(a2,b2,z_far_new),
(x3,y3,z3)=(a3,b3,z_far_new),
(x4,y4,z4)=(a0,b0,z_near_new),
(x5,y5,z5)=(a1,b1,z_near_new),
(x6,y6,z6)=(a2,b2,z_near_new),
(x7,y7,z7)=(a3,b3,z_near_new),
the eight spatial representative points are projected onto the target image using the camera parameter relationship between the source viewpoint image and the target image, the projected coordinates being (xoi, yoi), where i is an integer from 0 to 7; from the obtained position coordinates (xoi, yoi) of the eight representative points in the target image, whether the sub-block image and the target image overlap in region is judged in advance; if any one of the following conditions is met, it is judged in advance that the sub-block image and the target image do not overlap in region:
(1) the eight representative points projected to the target image are all on the left side of the left boundary of the target image;
(2) the eight representative points projected to the target image are all positioned on the right side of the right boundary of the target image;
(3) the eight representative points projected to the target image are all positioned on the upper side of the upper boundary of the target image;
(4) the eight representative points projected to the target image are all positioned below the lower boundary of the target image;
specifically, when the origin (0,0) of the position coordinates of the target image is at the upper-left corner of the image, if any one of the following conditions is satisfied, it is judged in advance that no region overlap exists between the sub-block image and the target image:
(1)xomax<0;
(2)xomin>width_o;
(3)yomax<0;
(4)yomin>height_o;
where xomin is the minimum of xoi, xomax is the maximum of xoi, yomin is the minimum of yoi, and yomax is the maximum of yoi, with i an integer from 0 to 7; when the origin of the position coordinates of the target image is at a position other than the upper-left corner of the image, a corresponding decision formula can be obtained by similar derivation;
if the regions overlap, pixels in the sub-block image are used in rendering to obtain part of the target image; otherwise, pixels of the sub-block image are not used in rendering to obtain part of the target image.
Example 21
A method of screening processing units, comprising:
for at least one processing unit in the multi-view sub-block stitched image, calculating the width information width and height information height of the processing unit according to the width W, such as 512, and the height H, such as 256, of the information transmission unit corresponding to the processing unit in the code stream, wherein the method comprises the following steps:
width=min{w0,W-Δw}=min{64,512-128}=64;
height=min{h0,H-Δh}=min{32,256-192}=32;
wherein {w0 = 64, h0 = 32} are respectively the default width and height of the processing unit, and {Δw = 128, Δh = 192} are respectively the position offsets of the processing unit relative to the information transmission unit;
according to the position information (xp1, yp1) of the upper-left pixel of the information transmission unit in the multi-view sub-block stitched image, e.g. (256,64), the position information (xs1, ys1) of the upper-left pixel of the information transmission unit in the source viewpoint image, e.g. (128,32), and the offset {Δw, Δh} = {128,192} of the processing unit position relative to the information transmission unit position, the position information (xp, yp) of the processing unit in the multi-view sub-block stitched image and the position information (xs, ys) of the processing unit in the source viewpoint image are calculated as follows:
xp=xp1+Δw=256+128=384;
yp=yp1+Δh=64+192=256;
xs=xs1+Δw=128+128=256;
ys=ys1+Δh=32+192=224;
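The size and position computations above can be sketched as follows (a minimal illustration using this example's numbers; the function name is hypothetical):

```python
def processing_unit_geometry(W, H, w0, h0, dw, dh, xp1, yp1, xs1, ys1):
    """Derive a processing unit's size and positions from its information
    transmission unit: the size is the default (w0, h0) clipped to the
    transmission-unit boundary, and the positions are the transmission-unit
    anchor pixels shifted by the offsets (dw, dh)."""
    width = min(w0, W - dw)
    height = min(h0, H - dh)
    xp, yp = xp1 + dw, yp1 + dh   # position in the stitched image
    xs, ys = xs1 + dw, ys1 + dh   # position in the source viewpoint image
    return width, height, (xp, yp), (xs, ys)
```

With this example's values (W=512, H=256, {w0,h0}={64,32}, {Δw,Δh}={128,192}, (xp1,yp1)=(256,64), (xs1,ys1)=(128,32)) it reproduces width=64, height=32, (xp,yp)=(384,256) and (xs,ys)=(256,224).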
camera parameters and depth parameters of the source viewpoint image to which the processing unit belongs are acquired from the code stream, wherein the camera parameters comprise the camera orientation, camera position coordinates and camera internal parameters, and the depth parameters comprise the nearest depth z_near of the image, e.g. 0.1 meter, and the farthest depth z_far, e.g. 0.2 meter;
a multi-view sub-block stitched image is acquired from the code stream, and according to the information of the processing unit, a processing unit of width width and height height is extracted starting from position (xp, yp) of the multi-view sub-block stitched image, corresponding to the processing unit of width width and height height starting from position (xs, ys) in the source viewpoint image;
for a target image, its relevant information is acquired, including: width information width_o, e.g. 2048, height information height_o, e.g. 2048, and the camera parameters of the target image;
for the processing unit, the depth coverage of the four boundary vertices is determined, including the representative depth parameters of the processing unit: z_near_new, e.g. 0.12 meter, and z_far_new, e.g. 0.19 meter, wherein the depth coverage of the processing unit and the nearest and farthest depth values z_near and z_far of the source viewpoint image depth parameters satisfy z_near ≤ z_near_new ≤ z_far_new ≤ z_far,
each representative point of the sub-block image, combined with the nearest depth value z_near_new and the farthest depth value z_far_new of the depth coverage, yields two spatial representative points; thus the four boundary vertices (aj, bj), where j is an integer from 0 to 3, are as follows:
(a0,b0)=(xs,ys)=(256,224),
(a1,b1)=(xs+width,ys)=(320,224),
(a2,b2)=(xs,ys+height)=(256,256),
(a3,b3)=(xs+width,ys+height)=(320,256).
corresponding to eight spatial representative points (xi, yi, zi), where i is an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_near_new)=(256,224,0.12),
(x1,y1,z1)=(a1,b1,z_near_new)=(320,224,0.12),
(x2,y2,z2)=(a2,b2,z_near_new)=(256,256,0.12),
(x3,y3,z3)=(a3,b3,z_near_new)=(320,256,0.12),
(x4,y4,z4)=(a0,b0,z_far_new)=(256,224,0.19),
(x5,y5,z5)=(a1,b1,z_far_new)=(320,224,0.19),
(x6,y6,z6)=(a2,b2,z_far_new)=(256,256,0.19),
(x7,y7,z7)=(a3,b3,z_far_new)=(320,256,0.19),
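The construction of the eight spatial representative points from the four boundary vertices and the two representative depths can be sketched as follows (illustrative only; the function name is hypothetical):

```python
def spatial_representative_points(vertices, z_near_new, z_far_new):
    """Combine each of the four boundary vertices (a_j, b_j) with the
    nearest and farthest representative depths, yielding eight spatial
    points: the four nearest-depth points first, then the four farthest."""
    return [(a, b, z) for z in (z_near_new, z_far_new) for a, b in vertices]
```

With this example's vertices (256,224), (320,224), (256,256), (320,256) and depths 0.12 / 0.19, it reproduces the eight points listed above.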
the eight spatial representative points corresponding to the processing unit are projected to coordinate positions (xoi, yoi) on the plane of the target viewpoint image; xomin and xomax are respectively the minimum and maximum of the eight abscissas xoi, and yomin and yomax are respectively the minimum and maximum of the eight ordinates yoi; if any one of the following conditions is satisfied:
(1)xomax<0
(2)xomin>width_o
(3)yomax<0
(4)yomin>height_o,
for example, if xomax is -2, or xomin is 2050, or yomax is -30, or yomin is 3000, it is determined that the processing unit does not overlap the target image in region, and the processing unit is not used for subsequent rendering of the target image; otherwise the processing unit is used for subsequent rendering of the target image.
Example 22
A method of screening processing units, comprising:
for at least one processing unit in the multi-view sub-block stitched image, calculating the width information width and height information height of the processing unit according to the width W, such as 300, and the height H, such as 220, of the information transmission unit corresponding to the processing unit in the code stream, wherein the method comprises the following steps:
width=min{w0,W-Δw}=min{64,300-256}=44;
height=min{h0,H-Δh}=min{32,220-192}=28;
wherein {w0 = 64, h0 = 32} are respectively the default width and height of the processing unit, and {Δw = 256, Δh = 192} are respectively the position offsets of the processing unit relative to the information transmission unit;
according to the position information (xp1, yp1) of the upper-left pixel of the information transmission unit in the multi-view sub-block stitched image, e.g. (0,20), the position information (xs1, ys1) of the upper-left pixel of the information transmission unit in the source viewpoint image, e.g. (10,24), and the offset {Δw, Δh} = {256,192} of the processing unit position relative to the information transmission unit position, the position information (xp, yp) of the processing unit in the multi-view sub-block stitched image and the position information (xs, ys) of the processing unit in the source viewpoint image are calculated as follows:
xp=xp1+Δw=0+256=256;
yp=yp1+Δh=20+192=212;
xs=xs1+Δw=10+256=266;
ys=ys1+Δh=24+192=216;
camera parameters and depth parameters of the source viewpoint image to which the processing unit belongs are acquired from the code stream, wherein the camera parameters comprise the camera orientation, camera position coordinates and camera internal parameters, and the depth parameters comprise the nearest depth z_near of the image, e.g. 0.1 meter, and the farthest depth z_far, e.g. 0.2 meter;
a multi-view sub-block stitched image is acquired from the code stream, and according to the information of the processing unit, a processing unit of width width and height height is extracted starting from position (xp, yp) of the multi-view sub-block stitched image, corresponding to the processing unit of width width and height height starting from position (xs, ys) in the source viewpoint image;
for a target image, its relevant information is acquired, including: width information width_o, e.g. 2048, height information height_o, e.g. 2048, and the camera parameters of the target image;
for the processing unit, the depth coverage of the four boundary vertices is determined, including the representative depth parameters of the processing unit: z_near_new, e.g. 0.12 meter, and z_far_new, e.g. 0.19 meter, wherein the depth coverage of the processing unit and the nearest and farthest depth values z_near and z_far of the source viewpoint image depth parameters satisfy z_near ≤ z_near_new ≤ z_far_new ≤ z_far,
each representative point of the sub-block image, combined with the nearest depth value z_near_new and the farthest depth value z_far_new of the depth coverage, yields two spatial representative points; thus the four boundary vertices (aj, bj), where j is an integer from 0 to 3, are as follows:
(a0,b0)=(xs,ys)=(266,216),
(a1,b1)=(xs+width,ys)=(310,216),
(a2,b2)=(xs,ys+height)=(266,244),
(a3,b3)=(xs+width,ys+height)=(310,244).
corresponding to eight spatial representative points (xi, yi, zi), where i is an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_near_new)=(266,216,0.12),
(x1,y1,z1)=(a1,b1,z_near_new)=(310,216,0.12),
(x2,y2,z2)=(a2,b2,z_near_new)=(266,244,0.12),
(x3,y3,z3)=(a3,b3,z_near_new)=(310,244,0.12),
(x4,y4,z4)=(a0,b0,z_far_new)=(266,216,0.19),
(x5,y5,z5)=(a1,b1,z_far_new)=(310,216,0.19),
(x6,y6,z6)=(a2,b2,z_far_new)=(266,244,0.19),
(x7,y7,z7)=(a3,b3,z_far_new)=(310,244,0.19),
the eight spatial representative points corresponding to the processing unit are projected to coordinate positions (xoi, yoi) on the plane of the target viewpoint image; xomin and xomax are respectively the minimum and maximum of the eight abscissas xoi, and yomin and yomax are respectively the minimum and maximum of the eight ordinates yoi; if any one of the following conditions is satisfied:
(1)xomax<0
(2)xomin>width_o
(3)yomax<0
(4)yomin>height_o,
for example, if xomax is -2, or xomin is 2050, or yomax is -30, or yomin is 3000, it is determined that the processing unit does not overlap the target image in region, and the processing unit is not used for subsequent rendering of the target image; otherwise the processing unit is used for subsequent rendering of the target image.
Example 23
A method of screening processing units, comprising:
for at least one processing unit in the multi-view sub-block stitched image, calculating the width information width and height information height of the processing unit according to the width W, such as 300, and the height H, such as 220, of the information transmission unit corresponding to the processing unit in the code stream, wherein the method comprises the following steps:
width=min{w0,W-Δw}=min{64,300-256}=44;
height=min{h0,H-Δh}=min{32,220-192}=28;
wherein {w0 = 64, h0 = 32} are respectively the default width and height of the processing unit, and {Δw = 256, Δh = 192} are respectively the position offsets of the processing unit relative to the information transmission unit;
according to the position information (xp1, yp1) of the upper-left pixel of the information transmission unit in the multi-view sub-block stitched image, e.g. (0,20), the position information (xs1, ys1) of the upper-left pixel of the information transmission unit in the source viewpoint image, e.g. (10,24), and the offset {Δw, Δh} = {256,192} of the processing unit position relative to the information transmission unit position, the position information (xp, yp) of the processing unit in the multi-view sub-block stitched image and the position information (xs, ys) of the processing unit in the source viewpoint image are calculated as follows:
xp=xp1+Δw=0+256=256;
yp=yp1+Δh=20+192=212;
xs=xs1+Δw=10+256=266;
ys=ys1+Δh=24+192=216;
camera parameters and depth parameters of the source viewpoint image to which the processing unit belongs are acquired from the code stream, wherein the camera parameters comprise the camera orientation, camera position coordinates and camera internal parameters, and the depth parameters comprise the nearest depth z_near of the image, e.g. 0.1 meter, and the farthest depth z_far, e.g. 0.2 meter;
a multi-view sub-block stitched image is acquired from the code stream, and according to the information of the processing unit, a processing unit of width width and height height is extracted starting from position (xp, yp) of the multi-view sub-block stitched image, corresponding to the processing unit of width width and height height starting from position (xs, ys) in the source viewpoint image;
for a target image, its relevant information is acquired, including: width information width_o, e.g. 2048, height information height_o, e.g. 2048, and the camera parameters of the target image;
for the processing unit, the depth coverage of the four boundary vertices is determined, including the representative depth parameters of the processing unit: z_near_new, e.g. 0.12 meter, and z_far_new, e.g. 0.19 meter, wherein the depth coverage of the processing unit and the nearest and farthest depth values z_near and z_far of the source viewpoint image depth parameters satisfy z_near ≤ z_near_new ≤ z_far_new ≤ z_far,
each representative point of the sub-block image, combined with the nearest depth value z_near_new and the farthest depth value z_far_new of the depth coverage, yields two spatial representative points; thus the four boundary vertices (aj, bj), where j is an integer from 0 to 3, are as follows:
(a0,b0)=(xs,ys)=(266,216),
(a1,b1)=(xs+width,ys)=(310,216),
(a2,b2)=(xs,ys+height)=(266,244),
(a3,b3)=(xs+width,ys+height)=(310,244).
corresponding to eight spatial representative points (xi, yi, zi), where i is an integer from 0 to 7, as follows:
(x0,y0,z0)=(a0,b0,z_near_new)=(266,216,0.12),
(x1,y1,z1)=(a1,b1,z_near_new)=(310,216,0.12),
(x2,y2,z2)=(a2,b2,z_near_new)=(266,244,0.12),
(x3,y3,z3)=(a3,b3,z_near_new)=(310,244,0.12),
(x4,y4,z4)=(a0,b0,z_far_new)=(266,216,0.19),
(x5,y5,z5)=(a1,b1,z_far_new)=(310,216,0.19),
(x6,y6,z6)=(a2,b2,z_far_new)=(266,244,0.19),
(x7,y7,z7)=(a3,b3,z_far_new)=(310,244,0.19),
the eight spatial representative points corresponding to the processing unit are projected to coordinate positions (xoi, yoi) on the plane of the target viewpoint image; xomin and xomax are respectively the minimum and maximum of the eight abscissas xoi, and yomin and yomax are respectively the minimum and maximum of the eight ordinates yoi; if any one of the following conditions is satisfied:
(1)xomax<0
(2)xomin>width_o
(3)yomax<0
(4)yomin>height_o,
for example, if xomax is 50, xomin is 10, yomax is 512 and yomin is 5, none of the conditions is satisfied, so it is determined that the processing unit and the target image overlap in region, and the processing unit is used for subsequent rendering of the target image.
Example 24
A method of screening processing units, comprising:
as shown in Fig. 5, for two processing units in the multi-view sub-block stitched image, the width information width_A and height information height_A of processing unit A and the width information width_B and height information height_B of processing unit B are respectively calculated from the width W, e.g. 300, and height H, e.g. 220, of the information transmission unit corresponding to the two processing units in the code stream, as follows:
width_A=min{w0,W-Δw_A}=min{64,300-128}=64;
height_A=min{h0,H-Δh_A}=min{32,220-160}=32;
width_B=min{w0,W-Δw_B}=min{64,300-256}=44;
height_B=min{h0,H-Δh_B}=min{32,220-192}=28;
wherein {w0 = 64, h0 = 32} are the default width and height of processing units A and B, {Δw_A = 128, Δh_A = 160} is the position offset of processing unit A relative to the information transmission unit, and {Δw_B = 256, Δh_B = 192} is the position offset of processing unit B relative to the information transmission unit;
calculating, based on the position information (xp1, yp1), such as (0,20), of the upper-left pixel of the information transmission unit in the multi-view sub-block stitched image, the position information (xs1, ys1), such as (10,24), of the upper-left pixel of the information transmission unit in the source viewpoint image, the offset {Δw_A, Δh_A} = {128,160} of the position of processing unit A relative to the information transmission unit, and the offset {Δw_B, Δh_B} = {256,192} of the position of processing unit B relative to the information transmission unit, the position information (xp_A, yp_A) of processing unit A in the multi-view sub-block stitched image, the position information (xs_A, ys_A) of processing unit A in the source viewpoint image, the position information (xp_B, yp_B) of processing unit B in the multi-view sub-block stitched image, and the position information (xs_B, ys_B) of processing unit B in the source viewpoint image, as follows:
xp_A=xp1+Δw_A=0+128=128;
yp_A=yp1+Δh_A=20+160=180;
xs_A=xs1+Δw_A=10+128=138;
ys_A=ys1+Δh_A=24+160=184;
xp_B=xp1+Δw_B=0+256=256;
yp_B=yp1+Δh_B=20+192=212;
xs_B=xs1+Δw_B=10+256=266;
ys_B=ys1+Δh_B=24+192=216;
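The size clipping and the two position derivations above are plain arithmetic; a minimal sketch (the helper name is hypothetical, not from the patent) that reproduces the numbers computed for processing units A and B:

```python
def unit_geometry(w0, h0, dw, dh, W, H, xp1, yp1, xs1, ys1):
    """Derive a processing unit's actual size and positions from its
    default size (w0, h0), its offset (dw, dh) relative to the
    information transmission unit, the transmission unit's size (W, H),
    and the transmission unit's upper-left pixel positions (xp1, yp1)
    in the stitched image and (xs1, ys1) in the source viewpoint image."""
    width = min(w0, W - dw)   # clipped at the transmission unit's right edge
    height = min(h0, H - dh)  # clipped at the transmission unit's bottom edge
    xp, yp = xp1 + dw, yp1 + dh  # position in the stitched image
    xs, ys = xs1 + dw, ys1 + dh  # position in the source viewpoint image
    return width, height, (xp, yp), (xs, ys)
```

With the example values, unit A yields (64, 32, (128, 180), (138, 184)) and unit B yields (44, 28, (256, 212), (266, 216)).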
acquiring, from the code stream, the camera parameters and the depth parameters of the source viewpoint image to which processing units A and B both belong, wherein the camera parameters include camera orientation, camera position coordinates and camera intrinsic parameters, and the depth parameters include the nearest depth z_near of the image, such as 0.1 meter, and the farthest depth z_far, such as 0.2 meter;
acquiring the multi-viewpoint sub-block spliced image from the code stream, and, according to the information of processing unit A, extracting from position (xp_A, yp_A) of the multi-viewpoint sub-block spliced image a processing unit of width width_A and height height_A, which corresponds to the region of width width_A and height height_A starting from position (xs_A, ys_A) in the source viewpoint image;
acquiring the multi-viewpoint sub-block spliced image from the code stream, and, according to the information of processing unit B, extracting from position (xp_B, yp_B) of the multi-viewpoint sub-block spliced image a processing unit of width width_B and height height_B, which corresponds to the region of width width_B and height height_B starting from position (xs_B, ys_B) in the source viewpoint image;
for a target image, acquiring its relevant information, including: width information width_o = 2048 (for example), height information height_o = 2048 (for example), and the camera parameters of the target image;
for the processing unit A, determining the depth coverage of its four boundary vertexes, including the representative depth parameters of the processing unit: a nearest depth z_near_new, such as 0.12 meters, and a farthest depth z_far_new, such as 0.19 meters, wherein the relationship between the nearest and farthest depth values of the processing unit depth coverage and the source viewpoint image depth parameters z_near and z_far is z_near <= z_near_new <= z_far_new <= z_far. Each representative point of the processing unit, combined with the nearest depth value z_near_new and the farthest depth value z_far_new of the depth coverage, yields two space representative points, so the four boundary vertexes correspond to eight space representative points (A_x_i, A_y_i, A_z_i), i being an integer from 0 to 7. The eight space representative points are projected to the target image position by using the camera parameter relation between the source viewpoint image and the target image, the coordinates projected to the target image position being (A_xo_i, A_yo_i), i being an integer from 0 to 7, wherein the origin (0,0) of the target image position coordinates is the upper left corner of the image. Whether the processing unit is used to generate the target image is pre-decided, the decision conditions being as follows:
the eight space representative points corresponding to the processing unit are projected to coordinate positions (A_xo_i, A_yo_i) on the plane where the target viewpoint image is located; A_xo_min and A_xo_max are respectively the minimum and maximum values of the abscissas of the eight (A_xo_i, A_yo_i), and A_yo_min and A_yo_max are respectively the minimum and maximum values of the ordinates; if any of the following conditions is satisfied, it is judged that the processing unit has no area overlap with the target image:
(1) A_xo_max < 0
(2) A_xo_min > width_o
(3) A_yo_max < 0
(4) A_yo_min > height_o.
For example, with A_xo_max = 50, A_xo_min = 10, A_yo_max = 5 and A_yo_min = 512, none of the conditions is satisfied, so it is judged that processing unit A overlaps the target image in a region, and the processing unit is used for subsequently rendering the target image;
for the processing unit B, determining the depth coverage of its four boundary vertexes, including the representative depth parameters of the processing unit: a nearest depth z_near_new, such as 0.12 meters, and a farthest depth z_far_new, such as 0.19 meters, wherein the relationship between the nearest and farthest depth values of the processing unit depth coverage and the source viewpoint image depth parameters z_near and z_far is z_near <= z_near_new <= z_far_new <= z_far. Each representative point of the processing unit, combined with the nearest depth value z_near_new and the farthest depth value z_far_new of the depth coverage, yields two space representative points, so the four boundary vertexes correspond to eight space representative points (B_x_i, B_y_i, B_z_i), i being an integer from 0 to 7. The eight space representative points are projected to the target image position by using the camera parameter relation between the source viewpoint image and the target image, the coordinates projected to the target image position being (B_xo_i, B_yo_i), i being an integer from 0 to 7, wherein the origin (0,0) of the target image position coordinates is the upper left corner of the image. Whether the processing unit is used to generate the target image is pre-decided, the decision conditions being as follows:
the eight space representative points corresponding to the processing unit are projected to coordinate positions (B_xo_i, B_yo_i) on the plane where the target viewpoint image is located; B_xo_min and B_xo_max are respectively the minimum and maximum values of the abscissas of the eight (B_xo_i, B_yo_i), and B_yo_min and B_yo_max are respectively the minimum and maximum values of the ordinates; if any of the following conditions is satisfied, it is judged that the processing unit has no area overlap with the target image:
(1) B_xo_max < 0
(2) B_xo_min > width_o
(3) B_yo_max < 0
(4) B_yo_min > height_o.
For example, with B_xo_max = -2, or B_xo_min = 2058, or B_yo_max = -30, or B_yo_min = 3000, at least one condition is satisfied, so it is determined that processing unit B has no area overlap with the target image, and the processing unit is not used for subsequently rendering the target image.
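The examples repeatedly project space representative points "using the camera parameter relation" between the source and target views without fixing a camera model. A minimal illustrative sketch under the simplifying assumption of axis-aligned pinhole cameras with no rotation; the dict layout and all field names are assumptions, not signalled syntax:

```python
def project_points(points, src, tgt):
    """Project space representative points (a, b, z) from the source
    viewpoint image plane to the target image plane.

    Simplifying assumption: both cameras are pinhole cameras with the
    same axis-aligned orientation; each camera is a dict with focal
    lengths fx, fy, principal point cx, cy, and optical-centre
    position (px, py, pz) in world coordinates."""
    out = []
    for a, b, z in points:
        # back-project the source pixel at depth z into world coordinates
        X = (a - src["cx"]) / src["fx"] * z + src["px"]
        Y = (b - src["cy"]) / src["fy"] * z + src["py"]
        Z = z + src["pz"]
        # re-project the world point into the target image
        zt = Z - tgt["pz"]
        xo = tgt["fx"] * (X - tgt["px"]) / zt + tgt["cx"]
        yo = tgt["fy"] * (Y - tgt["py"]) / zt + tgt["cy"]
        out.append((xo, yo))
    return out
```

With identical source and target cameras the projection is the identity on the pixel coordinates, a convenient sanity check; a real decoder would use the full camera orientation and intrinsic parameters carried in the code stream.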
Example 25
A method of screening processing units, comprising:
for an information transmission unit in the multi-view sub-block stitched image, acquiring from the code stream its width W = 300 (for example), its height H = 220 (for example), the position information (xp1, yp1), such as (0,20), of its upper-left pixel in the multi-view sub-block stitched image, and the position information (xs1, ys1), such as (10,24), of its upper-left pixel in the source viewpoint image;
the information transmission unit may be divided into a plurality of processing units, as shown in fig. 6. For processing unit A and processing unit B, the width information width_A and height information height_A of processing unit A and the width information width_B and height information height_B of processing unit B are respectively calculated from the width W and height H of the information transmission unit as follows:
width_A=min{w0,W-Δw_A}=min{64,300-128}=64;
height_A=min{h0,H-Δh_A}=min{32,220-160}=32;
width_B=min{w0,W-Δw_B}=min{64,300-256}=44;
height_B=min{h0,H-Δh_B}=min{32,220-192}=28;
wherein {w0 = 64, h0 = 32} are the default width and height of processing units A and B, {Δw_A = 128, Δh_A = 160} is the position offset of processing unit A relative to the information transmission unit, and {Δw_B = 256, Δh_B = 192} is the position offset of processing unit B relative to the information transmission unit;
calculating, according to the offset {Δw_A, Δh_A} = {128,160} of the position of processing unit A relative to the position of the information transmission unit and the offset {Δw_B, Δh_B} = {256,192} of the position of processing unit B relative to the position of the information transmission unit, the position information (xp_A, yp_A) of processing unit A in the multi-view sub-block stitched image, the position information (xs_A, ys_A) of processing unit A in the source viewpoint image, the position information (xp_B, yp_B) of processing unit B in the multi-view sub-block stitched image, and the position information (xs_B, ys_B) of processing unit B in the source viewpoint image, as follows:
xp_A=xp1+Δw_A=0+128=128;
yp_A=yp1+Δh_A=20+160=180;
xs_A=xs1+Δw_A=10+128=138;
ys_A=ys1+Δh_A=24+160=184;
xp_B=xp1+Δw_B=0+256=256;
yp_B=yp1+Δh_B=20+192=212;
xs_B=xs1+Δw_B=10+256=266;
ys_B=ys1+Δh_B=24+192=216;
acquiring, from the code stream, the camera parameters and the depth parameters of the source viewpoint image to which processing units A and B both belong, wherein the camera parameters include camera orientation, camera position coordinates and camera intrinsic parameters, and the depth parameters include the nearest depth z_near of the image, such as 0.1 meter, and the farthest depth z_far, such as 0.2 meter;
acquiring the multi-viewpoint sub-block spliced image from the code stream, and, according to the information of processing unit A, extracting from position (xp_A, yp_A) of the multi-viewpoint sub-block spliced image a processing unit of width width_A and height height_A, which corresponds to the region of width width_A and height height_A starting from position (xs_A, ys_A) in the source viewpoint image;
acquiring the multi-viewpoint sub-block spliced image from the code stream, and, according to the information of processing unit B, extracting from position (xp_B, yp_B) of the multi-viewpoint sub-block spliced image a processing unit of width width_B and height height_B, which corresponds to the region of width width_B and height height_B starting from position (xs_B, ys_B) in the source viewpoint image;
for a target image, acquiring its relevant information, including: width information width_o = 2048 (for example), height information height_o = 2048 (for example), and the camera parameters of the target image;
for the processing unit A, determining the depth coverage of its four boundary vertexes, including the representative depth parameters of the processing unit: a nearest depth z_near_new, such as 0.12 meters, and a farthest depth z_far_new, such as 0.19 meters, wherein the relationship between the nearest and farthest depth values of the processing unit depth coverage and the source viewpoint image depth parameters z_near and z_far is z_near <= z_near_new <= z_far_new <= z_far. Each representative point of the processing unit, combined with the nearest depth value z_near_new and the farthest depth value z_far_new of the depth coverage, yields two space representative points, so the four boundary vertexes correspond to eight space representative points (A_x_i, A_y_i, A_z_i), i being an integer from 0 to 7. The eight space representative points are projected to the target image position by using the relation between the camera parameters of the source viewpoint image and the camera parameters of the target image, the coordinates projected to the target image position being (A_xo_i, A_yo_i), i being an integer from 0 to 7, wherein the origin (0,0) of the target image position coordinates is the upper left corner of the image. Whether the processing unit is used to generate the target image is pre-decided, the decision conditions being as follows:
the eight space representative points corresponding to the processing unit are projected to coordinate positions (A_xo_i, A_yo_i) on the plane where the target viewpoint image is located; A_xo_min and A_xo_max are respectively the minimum and maximum values of the abscissas of the eight (A_xo_i, A_yo_i), and A_yo_min and A_yo_max are respectively the minimum and maximum values of the ordinates; if any of the following conditions is satisfied, it is judged that the processing unit has no area overlap with the target image:
(1) A_xo_max < 0
(2) A_xo_min > width_o
(3) A_yo_max < 0
(4) A_yo_min > height_o.
For example, with A_xo_max = 50, A_xo_min = 10, A_yo_max = 5 and A_yo_min = 512, none of the conditions is satisfied, so it is judged that processing unit A overlaps the target image in a region, and the processing unit is used for subsequently rendering the target image;
for the processing unit B, determining the depth coverage of its four boundary vertexes, including the representative depth parameters of the processing unit: a nearest depth z_near_new, such as 0.12 meters, and a farthest depth z_far_new, such as 0.19 meters, wherein the relationship between the nearest and farthest depth values of the processing unit depth coverage and the source viewpoint image depth parameters z_near and z_far is z_near <= z_near_new <= z_far_new <= z_far. Each representative point of the processing unit, combined with the nearest depth value z_near_new and the farthest depth value z_far_new of the depth coverage, yields two space representative points, so the four boundary vertexes correspond to eight space representative points (B_x_i, B_y_i, B_z_i), i being an integer from 0 to 7. The eight space representative points are projected to the target image position by using the relation between the camera parameters of the source viewpoint image and the camera parameters of the target image, the coordinates projected to the target image position being (B_xo_i, B_yo_i), i being an integer from 0 to 7, wherein the origin (0,0) of the target image position coordinates is the upper left corner of the image. Whether the processing unit is used to generate the target image is pre-decided, the decision conditions being as follows:
the eight space representative points corresponding to the processing unit are projected to coordinate positions (B_xo_i, B_yo_i) on the plane where the target viewpoint image is located; B_xo_min and B_xo_max are respectively the minimum and maximum values of the abscissas of the eight (B_xo_i, B_yo_i), and B_yo_min and B_yo_max are respectively the minimum and maximum values of the ordinates; if any of the following conditions is satisfied, it is judged that the processing unit has no area overlap with the target image:
(1) B_xo_max < 0
(2) B_xo_min > width_o
(3) B_yo_max < 0
(4) B_yo_min > height_o.
For example, with B_xo_max = -2, or B_xo_min = 2058, or B_yo_max = -30, or B_yo_min = 3000, at least one condition is satisfied, so it is determined that processing unit B has no area overlap with the target image, and the processing unit is not used for subsequently rendering the target image.
Example 26
A method of screening processing units, comprising:
for an information transmission unit in the multi-view sub-block stitched image C, acquiring from the code stream its width W = 300 (for example), its height H = 220 (for example), the position information (xp1, yp1), such as (0,20), of its upper-left pixel in the multi-view sub-block stitched image, and the position information (xs1, ys1), such as (10,24), of its upper-left pixel in the source viewpoint image;
the information transmission unit can be divided into a plurality of processing units. For processing unit A and processing unit B, the width information width_A and height information height_A of processing unit A and the width information width_B and height information height_B of processing unit B are respectively calculated from the width W and height H of the information transmission unit as follows:
width_A=min{w0,W-Δw_A}=min{64,300-128}=64;
height_A=min{h0,H-Δh_A}=min{32,220-160}=32;
width_B=min{w0,W-Δw_B}=min{64,300-256}=44;
height_B=min{h0,H-Δh_B}=min{32,220-192}=28;
wherein {w0 = 64, h0 = 32} are the default width and height of processing units A and B, {Δw_A = 128, Δh_A = 160} is the position offset of processing unit A relative to the information transmission unit, and {Δw_B = 256, Δh_B = 192} is the position offset of processing unit B relative to the information transmission unit;
calculating, according to the offset {Δw_A, Δh_A} = {128,160} of the position of processing unit A relative to the position of the information transmission unit and the offset {Δw_B, Δh_B} = {256,192} of the position of processing unit B relative to the position of the information transmission unit, the position information (xp_A, yp_A) of processing unit A in the multi-view sub-block stitched image, the position information (xs_A, ys_A) of processing unit A in the source viewpoint image, the position information (xp_B, yp_B) of processing unit B in the multi-view sub-block stitched image, and the position information (xs_B, ys_B) of processing unit B in the source viewpoint image, as follows:
xp_A=xp1+Δw_A=0+128=128;
yp_A=yp1+Δh_A=20+160=180;
xs_A=xs1+Δw_A=10+128=138;
ys_A=ys1+Δh_A=24+160=184;
xp_B=xp1+Δw_B=0+256=256;
yp_B=yp1+Δh_B=20+192=212;
xs_B=xs1+Δw_B=10+256=266;
ys_B=ys1+Δh_B=24+192=216;
acquiring, from the code stream, the camera parameters and the depth parameters of the source viewpoint image to which processing units A and B both belong, wherein the camera parameters include camera orientation, camera position coordinates and camera intrinsic parameters, and the depth parameters include the nearest depth z_near of the image, such as 0.1 meter, and the farthest depth z_far, such as 0.2 meter;
acquiring the multi-viewpoint sub-block spliced image from the code stream, and, according to the information of processing unit A, extracting from position (xp_A, yp_A) of the multi-viewpoint sub-block spliced image a processing unit of width width_A and height height_A, which corresponds to the region of width width_A and height height_A starting from position (xs_A, ys_A) in the source viewpoint image;
acquiring the multi-viewpoint sub-block spliced image from the code stream, and, according to the information of processing unit B, extracting from position (xp_B, yp_B) of the multi-viewpoint sub-block spliced image a processing unit of width width_B and height height_B, which corresponds to the region of width width_B and height height_B starting from position (xs_B, ys_B) in the source viewpoint image;
for a target image, acquiring its relevant information, including: width information width_o = 2048 (for example), height information height_o = 2048 (for example), and the camera parameters of the target image;
for the processing unit A, determining the depth coverage of its four boundary vertexes, including the representative depth parameters of the processing unit: a nearest depth z_near_new, such as 0.12 meters, and a farthest depth z_far_new, such as 0.19 meters, wherein the relationship between the nearest and farthest depth values of the processing unit depth coverage and the source viewpoint image depth parameters z_near and z_far is z_near <= z_near_new <= z_far_new <= z_far. Each representative point of the processing unit, combined with the nearest depth value z_near_new and the farthest depth value z_far_new of the depth coverage, yields two space representative points, so the four boundary vertexes correspond to eight space representative points (A_x_i, A_y_i, A_z_i), i being an integer from 0 to 7. The eight space representative points are projected to the target image position by using the camera parameter relation between the source viewpoint image and the target image, the coordinates projected to the target image position being (A_xo_i, A_yo_i), i being an integer from 0 to 7, wherein the origin (0,0) of the target image position coordinates is the upper left corner of the image. Whether the processing unit is used to generate the target image is pre-decided, the decision conditions being as follows:
the eight space representative points corresponding to the processing unit are projected to coordinate positions (A_xo_i, A_yo_i) on the plane where the target viewpoint image is located; A_xo_min and A_xo_max are respectively the minimum and maximum values of the abscissas of the eight (A_xo_i, A_yo_i), and A_yo_min and A_yo_max are respectively the minimum and maximum values of the ordinates; if any of the following conditions is satisfied, it is judged that the processing unit has no area overlap with the target image:
(1) A_xo_max < 0
(2) A_xo_min > width_o
(3) A_yo_max < 0
(4) A_yo_min > height_o.
For example, with A_xo_max = 50, A_xo_min = 10, A_yo_max = 5 and A_yo_min = 512, none of the conditions is satisfied, so it is judged that processing unit A overlaps the target image in a region, and the processing unit is used for subsequently rendering the target image;
for the processing unit B, determining the depth coverage of its four boundary vertexes, including the representative depth parameters of the processing unit: a nearest depth z_near_new, such as 0.12 meters, and a farthest depth z_far_new, such as 0.19 meters, wherein the relationship between the nearest and farthest depth values of the processing unit depth coverage and the source viewpoint image depth parameters z_near and z_far is z_near <= z_near_new <= z_far_new <= z_far. Each representative point of the processing unit, combined with the nearest depth value z_near_new and the farthest depth value z_far_new of the depth coverage, yields two space representative points, so the four boundary vertexes correspond to eight space representative points (B_x_i, B_y_i, B_z_i), i being an integer from 0 to 7. The eight space representative points are projected to the target image position by using the camera parameter relation between the source viewpoint image and the target image, the coordinates projected to the target image position being (B_xo_i, B_yo_i), i being an integer from 0 to 7, wherein the origin (0,0) of the target image position coordinates is the upper left corner of the image. Whether the processing unit is used to generate the target image is pre-decided, the decision conditions being as follows:
the eight space representative points corresponding to the processing unit are projected to coordinate positions (B_xo_i, B_yo_i) on the plane where the target viewpoint image is located; B_xo_min and B_xo_max are respectively the minimum and maximum values of the abscissas of the eight (B_xo_i, B_yo_i), and B_yo_min and B_yo_max are respectively the minimum and maximum values of the ordinates; if any of the following conditions is satisfied, it is judged that the processing unit has no area overlap with the target image:
(1) B_xo_max < 0
(2) B_xo_min > width_o
(3) B_yo_max < 0
(4) B_yo_min > height_o.
For example, with B_xo_max = -2, or B_xo_min = 2058, or B_yo_max = -30, or B_yo_min = 3000, at least one condition is satisfied, so it is determined that processing unit B has no area overlap with the target image, and the processing unit is not used for subsequently rendering the target image;
for at least one sub-block image in the multi-view sub-block stitched image C, acquiring the information of the sub-block image from the code stream, including: its width = 128 (for example), its height = 64 (for example), the position information (xp, yp), such as (256,64), of the upper-left pixel of the sub-block image in the multi-view sub-block stitched image, the position information (xs, ys), such as (0,0), of the upper-left pixel of the sub-block image in the source viewpoint image, and the camera parameters and depth parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters include camera orientation, camera position coordinates and camera intrinsic parameters, and the depth parameters include the nearest depth z_near of the image, such as 0.1 meter, and the farthest depth z_far, such as 0.2 meter;
acquiring the multi-viewpoint sub-block spliced image from the code stream, and, according to the information of the sub-block image, extracting from position (xp, yp) of the multi-viewpoint sub-block spliced image a sub-block image of width width and height height, which corresponds to the sub-block image of width width and height height starting from position (xs, ys) in the source viewpoint image;
for a target image, acquiring its relevant information, including: width information width_o = 2048 (for example), height information height_o = 2048 (for example), and the camera parameters of the target image. For the sub-block image, determining the depth coverage of its four boundary vertexes, including the representative depth parameters of the sub-block image: a nearest depth z_near_new, such as 0.12 meters, and a farthest depth z_far_new, such as 0.19 meters, wherein the relationship between the nearest and farthest depth values of the sub-block image depth coverage and the source viewpoint image depth parameters z_near and z_far is z_near <= z_near_new <= z_far_new <= z_far. Each representative point of the sub-block image, combined with the nearest depth value z_near_new and the farthest depth value z_far_new of the depth coverage, yields two space representative points, so the four boundary vertexes correspond to eight space representative points (x_i, y_i, z_i), i being an integer from 0 to 7. The eight space representative points are projected to the target image position by using the camera parameter relation between the source viewpoint image and the target image, the coordinates projected to the target image position being (xo_i, yo_i), i being an integer from 0 to 7, wherein the origin (0,0) of the target image position coordinates is the upper left corner of the image. Whether the sub-block image is used for generating the target image is pre-decided, the decision conditions being as follows:
the eight space representative points corresponding to the sub-block image are projected to coordinate positions (xo_i, yo_i) on the plane where the target viewpoint image is located; xo_min and xo_max are respectively the minimum and maximum values of the abscissas of the eight (xo_i, yo_i), and yo_min and yo_max are respectively the minimum and maximum values of the ordinates; if any of the following conditions is satisfied, it is determined that the sub-block image has no area overlap with the target image:
(1) xo_max < 0
(2) xo_min > width_o
(3) yo_max < 0
(4) yo_min > height_o.
For example, with xo_max = -2, or xo_min = 2050, or yo_max = -30, or yo_min = 3000, at least one condition is satisfied, so the sub-block image is not used for subsequently rendering the target image; otherwise, the sub-block image is used for subsequently rendering the target image.
Example 27
A method for screening subblock images specifically comprises the following steps:
for at least one sub-block image in the multi-viewpoint sub-block spliced image, acquiring the information of the sub-block image from the code stream, including: its width = 128 (for example), its height = 64 (for example), the position information (xp, yp), such as (256,64), of the upper-left pixel of the sub-block image in the multi-viewpoint sub-block spliced image, the position information (xs, ys), such as (0,0), of the upper-left pixel of the sub-block image in the source viewpoint image, and the camera parameters and depth parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters include camera orientation, camera position coordinates and camera intrinsic parameters, and the depth parameters include the nearest depth z_near of the image, such as 0.1 meter, and the farthest depth z_far, such as 0.2 meter;
acquiring the multi-viewpoint sub-block spliced image from the code stream, and, according to the information of the sub-block image, extracting from position (xp, yp) of the multi-viewpoint sub-block spliced image a sub-block image of width width and height height, which corresponds to the sub-block image of width width and height height starting from position (xs, ys) in the source viewpoint image;
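The extraction step above is a plain rectangular crop of the stitched image; a minimal sketch, representing the stitched image as a row-major list of rows (an illustrative representation, not from the patent):

```python
def extract_subblock(stitched, xp, yp, width, height):
    """Extract the width x height sub-block whose upper-left pixel is
    at (xp, yp) in the multi-view sub-block stitched image; 'stitched'
    is a row-major list of pixel rows (e.g. one sample plane)."""
    return [row[xp:xp + width] for row in stitched[yp:yp + height]]
```

The extracted block then carries, via (xs, ys), its correspondence back to the source viewpoint image.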
for a target image, acquiring its relevant information, including: width information width_o = 2048 (for example), height information height_o = 2048 (for example), and the camera parameters of the target image. For the sub-block image, determining the depth coverage of its four boundary vertexes, including the representative depth parameters of the sub-block image: a nearest depth z_near_new, such as 0.13 meters, and a farthest depth z_far_new, such as 0.19 meters, wherein the relationship between the nearest and farthest depth values of the sub-block image depth coverage and the nearest depth z_near and farthest depth z_far in the source viewpoint image depth parameters is z_near <= z_near_new <= z_far_new <= z_far. Each representative point of the sub-block image, combined with the nearest depth value z_near_new, the farthest depth value z_far_new and the centre depth value z_mid = 0.5 * (z_near_new + z_far_new) = 0.16 meters, yields three space representative points.
Thus, the four boundary vertexes (a_j, b_j), j being an integer from 0 to 3, are as follows:
(a0,b0)=(xs,ys)=(0,0),
(a1,b1)=(xs+width,ys)=(128,0),
(a2,b2)=(xs,ys+height)=(0,64),
(a3,b3)=(xs+width,ys+height)=(128,64).
corresponding to 12 space representative points (x_i, y_i, z_i), i being an integer from 0 to 11, as follows:
(x0,y0,z0)=(a0,b0,z_near_new)=(0,0,0.13),
(x1,y1,z1)=(a1,b1,z_near_new)=(128,0,0.13),
(x2,y2,z2)=(a2,b2,z_near_new)=(0,64,0.13),
(x3,y3,z3)=(a3,b3,z_near_new)=(128,64,0.13),
(x4,y4,z4)=(a0,b0,z_far_new)=(0,0,0.19),
(x5,y5,z5)=(a1,b1,z_far_new)=(128,0,0.19),
(x6,y6,z6)=(a2,b2,z_far_new)=(0,64,0.19),
(x7,y7,z7)=(a3,b3,z_far_new)=(128,64,0.19),
(x8,y8,z8)=(a0,b0,z_mid)=(0,0,0.16),
(x9,y9,z9)=(a1,b1,z_mid)=(128,0,0.16),
(x10,y10,z10)=(a2,b2,z_mid)=(0,64,0.16),
(x11,y11,z11)=(a3,b3,z_mid)=(128,64,0.16),
projecting the 12 space representative points to the target image position by using the camera parameter relation between the source viewpoint image and the target image, the coordinates projected to the target image position being (xo_i, yo_i), i being an integer from 0 to 11, wherein the origin (0,0) of the target image position coordinates is the upper left corner of the image; whether the sub-block image is used for generating the target image is pre-decided, the decision conditions being as follows:
the 12 space representative points corresponding to the sub-block image are projected to coordinate positions (xo_i, yo_i) on the plane where the target viewpoint image is located; xo_min and xo_max are respectively the minimum and maximum values of the abscissas of the 12 (xo_i, yo_i), and yo_min and yo_max are respectively the minimum and maximum values of the ordinates; if any of the following conditions is satisfied, it is determined that the sub-block image has no area overlap with the target image:
(1) xo_max < 0
(2) xo_min > width_o
(3) yo_max < 0
(4) yo_min > height_o.
For example, with xo_max = -2, or xo_min = 2050, or yo_max = -30, or yo_min = 3000, at least one condition is satisfied, so the sub-block image is not used for subsequently rendering the target image; otherwise, the sub-block image is used for subsequently rendering the target image.
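The construction of the space representative points used throughout these examples (N = 8 from the four boundary vertexes and two depth values, N = 12 when the centre depth z_mid is added as in Example 27) can be sketched as follows; the function name is illustrative:

```python
def space_representative_points(xs, ys, width, height,
                                z_near_new, z_far_new, use_mid=False):
    """Combine the four boundary vertexes of a sub-block image (or
    processing unit) with its nearest/farthest (and optionally centre)
    depth values to form the space representative points."""
    vertices = [(xs, ys), (xs + width, ys),
                (xs, ys + height), (xs + width, ys + height)]
    depths = [z_near_new, z_far_new]
    if use_mid:
        depths.append(0.5 * (z_near_new + z_far_new))  # z_mid
    return [(a, b, z) for z in depths for (a, b) in vertices]
```

With (xs, ys) = (0, 0), width = 128, height = 64 and depths 0.13/0.19 meters, use_mid=True reproduces the 12 points of Example 27.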

Claims (18)

1. A method for screening subblock images is characterized by comprising the following steps:
for at least one sub-block image in the multi-viewpoint sub-block spliced image, acquiring width information width and height information height of the sub-block image, position information of the sub-block image in the multi-viewpoint sub-block spliced image, position information of the sub-block image in a source viewpoint image and camera parameters of the source viewpoint image to which the sub-block image belongs from a code stream, wherein the camera parameters comprise camera orientation, camera position coordinates and camera internal parameters;
acquiring width information width_o, height information height_o and camera parameters of a target image;
obtaining two depth parameters of the sub-block image: z_near_new and z_far_new, wherein z_near_new is less than or equal to z_far_new;
combining the four boundary vertices of the sub-block image with z_near_new and z_far_new to obtain N spatial representative points (x_i, y_i, z_i), and projecting them, using the camera parameters of the source viewpoint image and the camera parameters of the target image, to obtain N representative points in the target image; wherein N is the number of spatial representative points, and i is an integer from 0 to N-1;
pre-deciding, according to the position coordinates (xo_i, yo_i) of the N representative points in the target image, whether the sub-block image and the target image have region overlap;
and if the regions overlap, rendering with pixels in the sub-block image to obtain part of the target image; otherwise, not using the sub-block image for rendering.
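The point construction and projection steps of claim 1 can be sketched as follows. The pinhole camera model, the matrix layout and all names here are illustrative assumptions; the patent itself only requires that the camera parameters of the source and target viewpoints relate the two image planes:

```python
def mat_vec(M, v):
    """3x3 matrix times 3-vector, plain Python."""
    return [sum(M[r][c] * v[c] for c in range(3)) for r in range(3)]

def representative_points(x0, y0, width, height, z_near_new, z_far_new):
    """Combine the 4 boundary vertices of a sub-block (top-left corner
    (x0, y0) in the source viewpoint image) with the two depth
    parameters, giving N = 8 spatial representative points."""
    corners = [(x0, y0), (x0 + width, y0),
               (x0, y0 + height), (x0 + width, y0 + height)]
    return [(x, y, z) for (x, y) in corners for z in (z_near_new, z_far_new)]

def project_to_target(points, K_src_inv, R, t, K_tgt):
    """Unproject each (x, y, z) through the source camera and reproject
    it into the target camera (assumed pinhole model; R, t map source
    camera space into target camera space)."""
    out = []
    for x, y, z in points:
        xs, ys, zs = [z * c for c in mat_vec(K_src_inv, [x, y, 1.0])]
        px, py, pz = mat_vec(R, [xs, ys, zs])
        px, py, pz = px + t[0], py + t[1], pz + t[2]
        u, v, w = mat_vec(K_tgt, [px, py, pz])
        out.append((u / w, v / w))  # (xo_i, yo_i) in the target image
    return out
```

With identical source and target cameras (identity intrinsics, zero motion) each vertex projects back to its own image position, which is a convenient sanity check for an implementation.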
2. The method of claim 1, wherein if any one of the following conditions is satisfied, it is pre-decided that the sub-block image does not overlap with the target image:
(1) the N representative points projected to the target image are all positioned on the left side of the left boundary of the target image;
(2) the N representative points projected to the target image are all positioned on the right side of the right boundary of the target image;
(3) the N representative points projected to the target image are all positioned on the upper side of the upper boundary of the target image;
(4) the N representative points projected to the target image are all below the lower boundary of the target image.
3. The method of screening subblock images according to claim 1 or 2, wherein N is 8.
4. The method of claim 1 or 2, wherein the two depth parameters z_near_new and z_far_new of the sub-block image are determined by one of the following methods:
(1) decoding the code stream to obtain a nearest depth value z_near of the source view image and a farthest depth value z_far of the source view image, wherein z_near_new of the sub-block image is equal to z_near, and z_far_new of the sub-block image is equal to z_far;
(2) z_near_new of the sub-block image is equal to the nearest depth value of the sub-block image, and z_far_new of the sub-block image is equal to the farthest depth value of the sub-block image.
5. The method of claim 4, wherein the nearest depth value of the sub-block image and the farthest depth value of the sub-block image are determined by one of the following methods:
(1) directly decoding from the code stream to obtain the nearest depth value and the farthest depth value of the subblock image;
(2) in the decoded and reconstructed depth image, among all pixels of the sub-block image, the depth value of the pixel closest to the source viewpoint to which the sub-block belongs is the nearest depth value of the sub-block, and the depth value of the pixel farthest from that source viewpoint is the farthest depth value of the sub-block.
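Option (2) of claim 5 is a min/max scan over the decoded depth image restricted to the sub-block region. A minimal sketch, assuming depth is stored per pixel as distance from the source viewpoint (function and argument names are illustrative):

```python
def subblock_depth_range(depth_image, x0, y0, width, height):
    """Scan the decoded depth image over the sub-block region whose
    top-left corner is (x0, y0) and take the nearest/farthest depths.

    depth_image: 2D structure indexed [row][col] with per-pixel depth
    values (distance from the source viewpoint).
    Returns (z_near_new, z_far_new).
    """
    depths = [depth_image[y][x]
              for y in range(y0, y0 + height)
              for x in range(x0, x0 + width)]
    z_near_new = min(depths)  # pixel closest to the source viewpoint
    z_far_new = max(depths)   # pixel farthest from the source viewpoint
    return z_near_new, z_far_new
```

Per-sub-block depth ranges (option 2) give a tighter bounding volume than the per-view z_near/z_far of option 1, so more sub-blocks can be rejected early, at the cost of either signalling the values or scanning the depth image.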
6. An apparatus for screening a subblock image, comprising:
the subblock image information extraction module inputs a multi-view subblock spliced image code stream and outputs at least one subblock image information, and the information comprises: the width information width and the height information height of the sub-block image, the position information of the sub-block image in the multi-viewpoint sub-block spliced image, the position information of the sub-block image in the source viewpoint image and the camera parameters of the source viewpoint image to which the sub-block image belongs, wherein the camera parameters comprise the camera orientation, the camera position coordinates and the camera internal parameters;
the target image related information acquisition module is used for acquiring width information width_o, height information height_o and camera parameters of the target image;
the sub-block image depth parameter acquisition module is used for acquiring two depth parameters z_near_new and z_far_new, wherein z_near_new is less than or equal to z_far_new;
a sub-block image judging module, configured to combine the four boundary vertices of a sub-block image with the depth information to obtain N spatial representative points (x_i, y_i, z_i) and project them, using the camera parameters of the source viewpoint image and the camera parameters of the target image, to obtain N representative points in the target image; wherein N is the number of spatial representative points, and i is an integer from 0 to N-1; pre-deciding, according to the position coordinates (xo_i, yo_i) of the N representative points in the target image, whether the sub-block image and the target image have region overlap; and if the regions overlap, rendering with pixels in the sub-block image to obtain part of the target image; otherwise, not using the sub-block image for rendering.
7. The apparatus of claim 6, wherein if any one of the following conditions is satisfied, it is pre-decided that the sub-block image does not overlap with the target image:
(1) the N representative points projected to the target image are all positioned on the left side of the left boundary of the target image;
(2) the N representative points projected to the target image are all positioned on the right side of the right boundary of the target image;
(3) the N representative points projected to the target image are all positioned on the upper side of the upper boundary of the target image;
(4) the N representative points projected to the target image are all below the lower boundary of the target image.
8. The apparatus of claim 6 or 7, wherein N is 8.
9. The apparatus of claim 6 or 7, wherein the two depth parameters z_near_new and z_far_new of the sub-block image are obtained by one of the following methods:
(1) decoding the code stream to obtain a nearest depth value z_near of the source view image and a farthest depth value z_far of the source view image, wherein z_near_new of the sub-block image is equal to z_near, and z_far_new of the sub-block image is equal to z_far;
(2) z_near_new of the sub-block image is equal to the nearest depth value of the sub-block image, and z_far_new of the sub-block image is equal to the farthest depth value of the sub-block image.
10. The apparatus of claim 9, wherein the nearest depth value of the sub-block image and the farthest depth value of the sub-block image are obtained by one of:
(1) directly decoding from the code stream to obtain the nearest depth value and the farthest depth value of the subblock image;
(2) in the decoded and reconstructed depth image, among all pixels of the sub-block image, the depth value of the pixel closest to the source viewpoint to which the sub-block belongs is the nearest depth value of the sub-block, and the depth value of the pixel farthest from that source viewpoint is the farthest depth value of the sub-block.
11. A method of screening processing units, comprising:
for at least one processing unit in the multi-viewpoint sub-block spliced image, calculating the width information width and height information height of the processing unit according to the width W and height H of the information transmission unit corresponding to the processing unit in the code stream, wherein the method comprises the following steps:
width = min{w0, W - Δw};
height = min{h0, H - Δh};
wherein {w0, h0} are the default width and height of the processing unit, and {Δw, Δh} is the offset of the position of the processing unit relative to the position of the information transmission unit;
acquiring the position information of the information transmission unit in the multi-viewpoint sub-block spliced image and the position information of the information transmission unit in the source viewpoint image from the code stream;
calculating the position information of the processing unit in the multi-viewpoint sub-block stitched image and the position information of the processing unit in the source viewpoint image according to the position information of the information transmission unit in the multi-viewpoint sub-block stitched image, the position information of the information transmission unit in the source viewpoint image and the offset {Δw, Δh} of the position of the processing unit relative to the position of the information transmission unit;
acquiring camera parameters of a source viewpoint image to which the processing unit belongs, wherein the camera parameters comprise camera orientation, camera position coordinates and camera internal parameters;
acquiring width information width _ o, height information height _ o and camera parameters of a target image;
obtaining two depth parameters of the processing unit: z_near_new and z_far_new, wherein z_near_new is less than or equal to z_far_new;
combining the four boundary vertices of the processing unit with z_near_new and z_far_new to obtain N spatial representative points (x_i, y_i, z_i), and projecting them, using the camera parameters of the source viewpoint image and the camera parameters of the target image, to obtain N representative points in the target image; wherein N is the number of spatial representative points, and i is an integer from 0 to N-1;
pre-deciding, according to the position coordinates (xo_i, yo_i) of the N representative points in the target image, whether the processing unit and the target image have region overlap;
and if the regions overlap, rendering with pixels in the processing unit to obtain part of the target image; otherwise, not using the processing unit for rendering.
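The size computation of claim 11 clips the default processing-unit size {w0, h0} so that a unit offset inside its information transmission unit never extends past that unit's W x H extent. A one-line sketch (function name is illustrative):

```python
def processing_unit_size(w0, h0, W, H, dw, dh):
    """Compute the effective size of a processing unit located at
    offset (dw, dh) inside a W x H information transmission unit:
    width = min{w0, W - dw}, height = min{h0, H - dh}.
    Units on the right/bottom edge of the transmission unit are
    truncated; interior units keep the default size."""
    return min(w0, W - dw), min(h0, H - dh)
```

For example, a 64x64 default unit placed 224 pixels into a 256-pixel-wide transmission unit is truncated to 32 pixels wide, while an interior unit keeps its full 64x64 size.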
12. The method of claim 11, wherein if any one of the following conditions is satisfied, it is pre-decided that the processing unit does not overlap with the target image:
(1) the N representative points projected to the target image are all positioned on the left side of the left boundary of the target image;
(2) the N representative points projected to the target image are all positioned on the right side of the right boundary of the target image;
(3) the N representative points projected to the target image are all positioned on the upper side of the upper boundary of the target image;
(4) the N representative points projected to the target image are all below the lower boundary of the target image.
13. A method of screening processing units according to claim 11 or 12, wherein the two depth parameters z_near_new and z_far_new of the processing unit are determined by one of the following methods:
(1) decoding the code stream to obtain a nearest depth value z_near of the source view image and a farthest depth value z_far of the source view image, wherein z_near_new of the processing unit is equal to z_near, and z_far_new of the processing unit is equal to z_far;
(2) z_near_new of the processing unit is equal to the nearest depth value of the processing unit, and z_far_new of the processing unit is equal to the farthest depth value of the processing unit.
14. The method of claim 13, wherein the nearest depth value of the processing unit and the farthest depth value of the processing unit are determined by one of:
(1) directly decoding from the code stream to obtain the nearest depth value and the farthest depth value of the processing unit;
(2) in the decoded and reconstructed depth image, among all pixels of the processing unit, the depth value of the pixel closest to the source viewpoint to which the processing unit belongs is the nearest depth value of the processing unit, and the depth value of the pixel farthest from that source viewpoint is the farthest depth value of the processing unit.
15. An apparatus for screening processing units, comprising:
the information transmission unit information extraction module, whose input is the multi-viewpoint sub-block stitched image code stream and whose outputs are the width W and height H of the information transmission unit, the position information of the information transmission unit in the multi-viewpoint sub-block stitched image and the position information of the information transmission unit in the source viewpoint image;
a processing unit information extraction module, whose inputs are the default width and height {w0, h0} of the processing unit, the position offset {Δw, Δh} of the processing unit relative to the information transmission unit, and the information transmission unit information, and whose output is the processing unit information, including: the width and height of the processing unit, the position information of the processing unit in the multi-view sub-block stitched image, the position information of the processing unit in the source view image and the camera parameters of the source view image to which the processing unit belongs, wherein the camera parameters comprise camera orientation, camera position coordinates and camera internal parameters; the processing unit information extraction module calculates the processing unit information from the information transmission unit information, {w0, h0} and {Δw, Δh} according to:
width = min{w0, W - Δw};
height = min{h0, H - Δh};
the target image related information acquisition module is used for acquiring width information width_o, height information height_o and camera parameters of the target image;
the processing unit depth parameter acquisition module is used for acquiring two depth parameters z_near_new and z_far_new, wherein z_near_new is less than or equal to z_far_new;
a processing unit judging module, configured to combine the four boundary vertices of a processing unit with the depth information to obtain N spatial representative points (x_i, y_i, z_i) and project them, using the camera parameters of the source viewpoint image and the camera parameters of the target image, to obtain N representative points in the target image; wherein N is the number of spatial representative points, and i is an integer from 0 to N-1; pre-deciding, according to the position coordinates (xo_i, yo_i) of the N representative points in the target image, whether the processing unit and the target image have region overlap; and if the regions overlap, rendering with pixels in the processing unit to obtain part of the target image; otherwise, not using the processing unit for rendering.
16. The apparatus of claim 15, wherein if any one of the following conditions is satisfied, it is pre-decided that the processing unit does not overlap with the target image:
(1) the N representative points projected to the target image are all positioned on the left side of the left boundary of the target image;
(2) the N representative points projected to the target image are all positioned on the right side of the right boundary of the target image;
(3) the N representative points projected to the target image are all positioned on the upper side of the upper boundary of the target image;
(4) the N representative points projected to the target image are all below the lower boundary of the target image.
17. The apparatus of claim 15 or 16, wherein the two depth parameters z_near_new and z_far_new of the processing unit are obtained by one of the following methods:
(1) decoding the code stream to obtain a nearest depth value z_near of the source view image and a farthest depth value z_far of the source view image, wherein z_near_new of the processing unit is equal to z_near, and z_far_new of the processing unit is equal to z_far;
(2) z_near_new of the processing unit is equal to the nearest depth value of the processing unit, and z_far_new of the processing unit is equal to the farthest depth value of the processing unit.
18. The apparatus of claim 17, further characterized in that the nearest depth value of the processing unit and the farthest depth value of the processing unit are obtained by one of:
(1) directly decoding from the code stream to obtain the nearest depth value and the farthest depth value of the processing unit;
(2) in the decoded and reconstructed depth image, among all pixels of the processing unit, the depth value of the pixel closest to the source viewpoint to which the processing unit belongs is the nearest depth value of the processing unit, and the depth value of the pixel farthest from that source viewpoint is the farthest depth value of the processing unit.
CN202010019369.2A 2019-10-01 2020-01-08 Method and device for screening subblock images and processing units Active CN112598572B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910946071 2019-10-01
CN2019109460713 2019-10-01

Publications (2)

Publication Number Publication Date
CN112598572A CN112598572A (en) 2021-04-02
CN112598572B true CN112598572B (en) 2022-04-15

Family

ID=75180210

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010019369.2A Active CN112598572B (en) 2019-10-01 2020-01-08 Method and device for screening subblock images and processing units

Country Status (1)

Country Link
CN (1) CN112598572B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114727084B (en) * 2021-01-04 2023-10-03 浙江大学 Method and device for screening images
CN113347403B (en) * 2021-04-19 2023-10-27 浙江大学 Image processing method and device
WO2023050432A1 (en) * 2021-09-30 2023-04-06 浙江大学 Encoding and decoding methods, encoder, decoder and storage medium
WO2023142127A1 (en) * 2022-01-30 2023-08-03 浙江大学 Coding and decoding methods and apparatuses, device, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103716629A (en) * 2012-09-29 2014-04-09 华为技术有限公司 Image processing method, device, coder and decoder
CN106791331A (en) * 2017-01-13 2017-05-31 成都微晶景泰科技有限公司 Image processing method, device and imaging system based on lens array imaging
CN107808383A (en) * 2017-10-13 2018-03-16 上海无线电设备研究所 SAR image target quick determination method under a kind of strong sea clutter

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130063487A1 (en) * 2011-09-12 2013-03-14 MyChic Systems Ltd. Method and system of using augmented reality for applications

Also Published As

Publication number Publication date
CN112598572A (en) 2021-04-02

Similar Documents

Publication Publication Date Title
CN112598572B (en) Method and device for screening subblock images and processing units
US11509933B2 (en) Method, an apparatus and a computer program product for volumetric video
ES2664746T3 (en) Effective prediction using partition coding
EP2382791B1 (en) Depth and video co-processing
JP6005731B2 (en) Scale independent map
ES2676055T3 (en) Effective image receiver for multiple views
Salahieh et al. Test model for immersive video
US11069026B2 (en) Method for processing projection-based frame that includes projection faces packed in cube-based projection layout with padding
JP2022528540A (en) Point cloud processing
JP7344988B2 (en) Methods, apparatus, and computer program products for volumetric video encoding and decoding
US11948268B2 (en) Immersive video bitstream processing
CN111936929A (en) Method for reconstructed sample adaptive offset filtering of projection-based frames using projection layout of 360 ° virtual reality projection
CN114727084B (en) Method and device for screening images
EP2839437B1 (en) View synthesis using low resolution depth maps
CN113347403B (en) Image processing method and device
WO2021136372A1 (en) Video decoding method for decoding bitstream to generate projection-based frame with guard band type specified by syntax element signaling
Guo et al. A three-to-dense view conversion system based on adaptive joint view optimization
WO2023227582A1 (en) Decoder side depth and color alignment with the assistance of metadata for the transcoding of volumetric video
CN109547772A (en) The method for promoting naked eye three-dimensional mosaic screen three dimensional mass

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant