CN110602398A - Ultrahigh-definition video display method and device - Google Patents

Ultrahigh-definition video display method and device

Info

Publication number
CN110602398A
Authority
CN
China
Prior art keywords
coding
path
video data
picture
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910874253.4A
Other languages
Chinese (zh)
Inventor
赵月峰
袁潮
温建伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhuohe Technology Co ltd
Original Assignee
Beijing Zhuohe Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhuohe Technology Co Ltd filed Critical Beijing Zhuohe Technology Co Ltd
Priority to CN201910874253.4A priority Critical patent/CN110602398A/en
Publication of CN110602398A publication Critical patent/CN110602398A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146Data rate or code amount at the encoder output
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/015High-definition television systems

Abstract

The invention discloses an ultra-high-definition video display method and device, applied to the video display process of an array camera. The array camera comprises a plurality of telephoto cameras that shoot videos of different areas, and the multiple paths of video pictures shot by the telephoto cameras are spliced into a global picture. In the decoding stage, the coding blocks contained in each path of video data are received, each path of video data having been divided in the coding stage into a plurality of coding blocks with different coding information; a first area range of a target playing picture area relative to the global picture is acquired; and the coding blocks that overlap the first area range are determined, decoded, spliced, fused and displayed. The load of the decoder is thereby greatly reduced, and spliced pictures of more paths of video can be decoded simultaneously under hardware conditions with the same decoding capability.

Description

Ultrahigh-definition video display method and device
Technical Field
The invention relates to the technical field of image processing, in particular to an ultrahigh-definition video display method and device.
Background
On 23 August 2012, the International Telecommunication Union (ITU) published the international standard for ultra-high-definition television, ITU-R Recommendation BT.2020, which defines the parameters of ultra-high-definition video display in the fields of television broadcasting and consumer electronics and specifies its resolution, color space, frame rate, color coding and so on. Ultra-high-definition video display has two levels, 4K and 8K: the physical resolution of 4K is 3840 × 2160 and that of 8K is 7680 × 4320, the display aspect ratio is 16:9, and nine frame rates are supported: 120 fps, 60 fps, 59.94 fps, 50 fps, 30 fps, 29.97 fps, 25 fps, 24 fps and 23.976 fps.
With the ever-increasing requirements on the definition, that is the resolution, of images and videos, ultra-high-definition graphics and video display functions are being introduced in more and more fields, such as public transportation management centers, combat command centers and railway management centers.
One conventional way of displaying ultra-high-definition video is to form an array camera from a wide-angle lens and a plurality of telephoto lenses, where the wide-angle lens captures the panoramic picture and the telephoto lenses capture high-resolution detail video of specific areas. When a high-resolution panoramic video, or a high-resolution local video covering several areas, needs to be displayed, the multiple paths of high-resolution video shot by the telephoto lenses have to be spliced, output and displayed, and the multiple paths of ultra-high-definition video must be decoded simultaneously before display. However, according to relevant test results, even the highest-performance consumer graphics card on the market can decode and display at most about 25 frames of 4K video simultaneously. If there are more video streams than this upper limit, or the frame rate is higher, the display visibly stutters, so at most a certain number of ultra-high-definition videos can be shown at the same time and further content cannot be displayed.
At present, the common encoding and decoding method is to encode and decode the entire video picture collected by a telephoto lens as a whole. The problem with such a codec is that, even if only a small part of the picture is shown during video display, the entire video picture has to be decoded, which greatly wastes decoder resources and makes the decoder inefficient. For example, suppose one graphics card can decode at most 6 paths of video and the array camera contains 9 telephoto lenses in a 3 × 3 array, whose videos can be spliced into a picture as shown in fig. 1. In the prior art, when a user wants to show the ultra-high-definition picture of area A in fig. 1, the video captured by all 9 telephoto lenses has to be decoded in full. Since the graphics card can decode only 6 paths of video at most, when 9 paths of ultra-high-definition video need to be decoded, some areas inevitably cannot be displayed or stuttering occurs, which degrades the user experience.
Disclosure of Invention
In order to solve the technical problem, the invention provides an ultrahigh-definition video display method and device.
The ultrahigh-definition video display method provided by the invention is applied to the video display process of an array camera, wherein the array camera comprises a plurality of telephoto cameras for shooting videos of different areas, and the multiple paths of video pictures shot by the telephoto cameras are spliced into a global picture. The method comprises an encoding stage and a decoding stage, and the encoding stage of the method comprises the following steps:
determining a segmentation method of each path of video data;
according to the dividing method, dividing each path of video data into a plurality of coding blocks with different coding information;
respectively encoding each encoding block in each path of video data;
the decoding phase of the method comprises the steps of:
receiving a coding block contained in each path of video data, wherein each path of video data is divided into a plurality of coding blocks with different coding information in a coding stage;
acquiring a first area range of a target playing picture area relative to the global picture;
determining a coding block having a superposition part with the first area range, and decoding the coding block;
and splicing and fusing the decoded coding blocks, and displaying.
The method also has the following characteristics: the receiving of the coding block included in each path of video data includes:
receiving coding information of each coding block contained in each path of video data, and determining a second area range of a video picture corresponding to the coding block relative to the global picture according to the coding information;
the determining of the coding block having the overlapping part with the first region range, and the decoding of the coding block includes:
and judging whether the second area range of each coding block has a superposition part with the first area range, and if so, decoding the coding block.
The method also has the following characteristics: the receiving the coding information of each coding block contained in each path of video data, and determining a second area range of a video picture corresponding to the coding block relative to the global picture according to the coding information includes:
acquiring the area range of the picture corresponding to each path of video data relative to the global picture as conversion information;
and determining a second area range of the video picture corresponding to the coding block relative to the global picture according to the conversion information and the coding information.
The method also has the following characteristics: the determining of the coding block having the overlapping part with the first region range, and the decoding of the coding block includes:
and determining the coding blocks passed by the boundary of the first area range and the coding blocks which are all positioned in the first area range, and decoding the coding blocks.
The method also has the following characteristics: the method for determining the segmentation method of each path of video data comprises the following steps:
and determining the segmentation method according to the complexity of pictures in the video pictures corresponding to each path of video data.
The invention also provides an ultra-high-definition video display device, applied to the video display process of an array camera, wherein the array camera comprises a plurality of telephoto cameras for shooting videos of different areas, and the multiple paths of video pictures shot by the telephoto cameras are spliced into a global picture. The device comprises an encoding module and a decoding module, and the encoding module of the device comprises:
an encoding determination unit, configured to determine a segmentation method of each path of video data;
a dividing unit, for dividing each path of video data into a plurality of coding blocks with different coding information according to the dividing method;
the coding unit is used for coding each coding block in each path of video data respectively;
the decoding module of the apparatus comprises:
a receiving unit, configured to receive a coding block included in each path of video data, where each path of video data is divided into multiple coding blocks with different coding information in a coding stage;
the first acquisition unit is used for acquiring a first area range of a target playing picture area relative to the global picture;
a decoding unit, configured to determine a coding block having a portion overlapping with the first region range, and decode the coding block;
and the splicing and fusing unit is used for splicing and fusing the decoded coding blocks and displaying the coding blocks.
The device also has the following characteristics: the receiving unit further includes:
a receiving subunit, configured to receive coding information of each coding block included in each path of video data;
the processing subunit is configured to determine, according to the coding information, a second area range of the video picture corresponding to the coding block relative to the global picture;
the decoding unit includes:
a judging subunit, configured to judge whether the second area range of each of the coding blocks has an overlapping portion with the first area range;
a first decoding subunit, configured to decode the coding block when the second region range of the coding block has a coinciding portion with the first region range.
The device also has the following characteristics: the receiving subunit is configured to acquire, as conversion information, an area range of a picture corresponding to each path of video data relative to the global picture;
and the processing subunit is configured to determine, according to the conversion information and the coding information, a second region range of the video picture corresponding to the coding block relative to the global picture.
The device also has the following characteristics: the decoding unit includes:
a decoding determining unit, configured to determine the coding blocks that the boundary of the first region range passes through, and the coding blocks that are all located within the first region range;
and the second decoding subunit is used for decoding the coding blocks passed by the boundary of the first area range and the coding blocks which are all positioned in the first area range.
The device also has the following characteristics: the encoding determination unit is further configured to determine the segmentation method according to the complexity of pictures in the video pictures corresponding to each path of video data.
In the invention, during the encoding stage, each path of video data among the multiple paths of video pictures shot by the array camera is encoded in a preset manner; when the ultra-high-definition video is displayed, only the coding blocks that overlap the target playing picture region are decoded and displayed in the decoding stage. This greatly reduces the load of the decoder, so that spliced pictures of more paths of video can be decoded simultaneously on hardware with the same decoding capability, and the multiple paths of video pictures are spliced and fused for display after decoding. A better display effect is thus obtained while decoding resources are saved, making the method suitable for large-scale popularization and use.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of the encoding phase in an embodiment;
FIG. 2 is a flow diagram of the decoding phase in an embodiment;
FIG. 3 is a flowchart of a video display process in the embodiment;
FIG. 4 is a display screen diagram in the embodiment;
FIG. 5 is a block diagram of an encoding module in an embodiment;
FIG. 6 is a block diagram of a decoding module in an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
The application provides an ultra-high-definition video display method applied to the video display process of an array camera, wherein the array camera comprises a wide-angle camera for shooting a panoramic picture and a plurality of telephoto cameras arranged in an array at certain positions for shooting videos of different areas within the panoramic picture. The images shot by the wide-angle camera are relatively low in definition, while the images shot by the telephoto cameras are high in definition; the high-definition images shot by the telephoto cameras can be arranged and spliced into the panoramic picture according to their positions, so that the panoramic picture is displayed in high definition.
As shown in fig. 1, when displaying ultra high definition video shot by an array camera, the display method includes an encoding stage in which the ultra high definition video shot by each camera of the array camera is divided and encoded. Specifically, the encoding stage includes:
and S01, determining the segmentation method of each path of video data.
In this step, the video picture taken by each camera in the array camera is taken as one path of video data, that is, one camera produces one path of video data. When dividing each path of video data, the same segmentation scheme may be used for every path, for example a 4 × 4 scheme for all cameras in the array; such a scheme is regular and convenient to encode and decode. Alternatively, different paths of video may adopt different segmentation schemes, for example 2 rows × 2 columns for video captured by some cameras and 3 rows × 3 columns for video captured by others; such schemes can be adjusted as needed and are more flexible.
In a preferred embodiment, in order to reduce the load of the decoding stage, the segmentation method is determined according to the complexity of the picture in the video picture corresponding to each path of video data. That is, when the picture content of a path of video data is relatively complex, that path is divided into a larger number of coding blocks; when the picture content is relatively simple, that path is divided into a smaller number of coding blocks. For example, when one path of video shows an open sky or an empty grassland, the colors of the sky and the grassland are relatively uniform and contain few other objects, so that path occupies few decoding resources during decoding; even if the whole path is encoded as a single coding block, not much decoding resource is consumed in the decoding stage, so the path can be divided into a small number of coding blocks in the encoding stage, which improves encoding efficiency. When the content of a video picture is complex, such as a street full of people or a road with complicated traffic, the picture contains so much content that if the picture to be displayed only covers a small part of it and the whole path still has to be decoded, that path occupies a large share of decoding resources and takes a long time to decode. Therefore, a video picture with complicated content is divided into a larger number of coding blocks to reduce the amount of data that has to be decoded. Whether the picture content is complex can be judged by counting pixels with the same value: when more than a preset number of all the pixels share the same value, the picture content is considered simple; otherwise, it is considered complex. Of course, it is understood that other methods capable of judging picture content in the prior art can also be applied to the method of the invention.
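As a minimal sketch of this idea (in Python; the uniformity threshold and the candidate grid sizes are assumptions chosen for illustration, since the text does not fix concrete values), the fraction of pixels sharing the most frequent value can be used to switch between a coarse and a fine grid:

```python
import numpy as np

def choose_grid(frame: np.ndarray, uniform_ratio: float = 0.6):
    """Pick a segmentation grid (rows, cols) for one path of video.

    frame         : a representative frame, H x W (grayscale) or H x W x C (color)
    uniform_ratio : assumed threshold; if the most frequent pixel value covers more
                    than this fraction of the frame, the content is treated as simple
                    (e.g. open sky or grassland) and a coarse grid is chosen
    """
    flat = frame.reshape(-1) if frame.ndim == 2 else frame.reshape(-1, frame.shape[-1])
    if flat.ndim == 1:
        _, counts = np.unique(flat, return_counts=True)
    else:
        _, counts = np.unique(flat, axis=0, return_counts=True)
    dominant_share = counts.max() / flat.shape[0]
    # Simple content -> few coding blocks; complex content -> many coding blocks.
    return (2, 2) if dominant_share > uniform_ratio else (4, 4)
```

Any other complexity measure (edge density, number of distinct colors, and so on) could be substituted without changing the rest of the pipeline.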
Further, the encoding stage further comprises:
S02: according to the segmentation method, dividing each path of video data into a plurality of coding blocks with different coding information;
S03: encoding each coding block in each path of video data respectively.
In S02, when dividing each path of video data, a division into 4 rows × 4 columns, that is into 16 coding blocks in total, is preferably adopted. Generally, the picture presented by each path of video data is rectangular, and coding information is assigned to each coding block according to its position on that rectangular picture; because the coding information of every coding block is different, the coding blocks corresponding to selected coding information can conveniently be decoded later. In a specific embodiment, the array camera includes four telephoto cameras arranged in 2 rows × 2 columns, as shown in fig. 4, so that the global picture is spliced from the pictures of four paths of video data, namely area A, area B, area C and area D. Each area corresponds to one path of video data, and each path of video data is further divided into 16 coding blocks of 4 rows × 4 columns. The coding information of the 16 coding blocks in area A is A11, A12, A13, A14 in the first row, A21, A22, A23, A24 in the second row, A31, A32, A33, A34 in the third row, and A41, A42, A43, A44 in the fourth row. The coding information of the 16 coding blocks in area B is B11, B12, B13, B14 in the first row, B21, B22, B23, B24 in the second row, B31, B32, B33, B34 in the third row, and B41, B42, B43, B44 in the fourth row. The coding information of the 16 coding blocks in area C is C11, C12, C13, C14 in the first row, C21, C22, C23, C24 in the second row, C31, C32, C33, C34 in the third row, and C41, C42, C43, C44 in the fourth row. The coding information of the 16 coding blocks in area D is D11, D12, D13, D14 in the first row, D21, D22, D23, D24 in the second row, D31, D32, D33, D34 in the third row, and D41, D42, D43, D44 in the fourth row.
It should be noted that although the division into coding blocks appears, in its overall effect, to be by rows and columns, it is essentially performed according to coordinates. Each coding block corresponds to a rectangle, the lines connecting the coordinates of whose four vertices enclose the video picture of that coding block, and the coding information of each coding block corresponds one-to-one to the relative area coordinates of that block within the whole picture of its path of video data. When a coding block is decoded, its relative area coordinates within the whole picture of the path of video data can be determined quickly from its coding information, and its position can then be determined accurately, which guarantees decoding accuracy.
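This mapping between coding information and relative area coordinates can be captured by a small data structure. The sketch below is an illustration only: the names `CodingBlock` and `split_path` and the A/B/C/D labelling are ours rather than the patent's, and a uniform grid is assumed.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodingBlock:
    info: str          # coding information, e.g. "A23"
    rel_rect: tuple    # (x0, y0, x1, y1) relative to the whole picture of this path

def split_path(label: str, width: float, height: float, rows: int = 4, cols: int = 4):
    """Divide one path's whole picture into rows x cols coding blocks.

    Row 1 is the top row, as in the embodiment, so block A11 of a 4-unit-high
    picture has relative vertex coordinates (0, 3), (0, 4), (1, 4), (1, 3).
    """
    blocks = []
    bw, bh = width / cols, height / rows
    for r in range(1, rows + 1):
        for c in range(1, cols + 1):
            x0, x1 = (c - 1) * bw, c * bw
            y1 = height - (r - 1) * bh      # top edge of row r
            y0 = y1 - bh
            blocks.append(CodingBlock(f"{label}{r}{c}", (x0, y0, x1, y1)))
    return blocks

# Example: the 16 coding blocks of area A, a 4 x 4 whole picture.
area_a_blocks = split_path("A", 4, 4)
```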
In S03, the manner in which each coding block is encoded is not particularly limited; any relatively mature coding method in the prior art may be used. Correspondingly, during decoding, each coding block is decoded with the decoding method matching its coding method.
After encoding is finished, the data transmission unit of the encoding module transmits the coding blocks contained in each path of video data to the decoding module of the device that displays the video pictures.
As shown in fig. 2, it is a decoding stage of the ultra high definition video display method provided by the present invention. The decoding stage comprises the following steps:
S11: receiving the coding blocks contained in each path of video data, wherein each path of video data is divided into a plurality of coding blocks with different coding information in the coding stage;
S12: acquiring a first area range of the target playing picture area relative to the global picture;
S13: determining the coding blocks that have an overlapping part with the first area range, and decoding those coding blocks.
In S11, each received coding block carries its own unique coding information, and the coding information of each coding block corresponds one-to-one to the relative area coordinates of that block's picture within the whole picture of the path of video data to which it belongs.
In S12, the display device showing the global picture and the global picture itself are generally both rectangular, and the target playing picture may be the global picture or a partial picture selected from it. When the playing picture is the global picture, the first area range is the range enclosed by the absolute coordinates of the region defined by taking one corner of the global picture as the origin of coordinates together with the other three vertices. Obviously, when the first area range is the global picture, all coding blocks contained in the received multi-path video data need to be decoded in order to view the global picture. When the target playing picture is a part selected from the global picture, that part is generally also rectangular, and the first area range is the area enclosed by the lines connecting the absolute coordinates of the four corners of the rectangle selected by the user. It should be noted here that if the coordinates of a point are expressed in the coordinate system of the global picture, they are absolute coordinates; if they are expressed relative to a whole picture (that is, the picture corresponding to one path of video data), they are relative coordinates. In the above description, the coordinates corresponding to the coding information are relative coordinates, and the coordinates corresponding to the first area range are absolute coordinates.
In a specific embodiment, receiving the coding blocks included in each path of video data in S11 includes: receiving the coding information of each coding block included in each path of video data, and determining, according to the coding information, the second area range of the video picture corresponding to the coding block relative to the global picture. Further, this receiving and determining includes:
acquiring the area range of the picture corresponding to each path of video data relative to the global picture as conversion information;
and determining a second area range of the video picture corresponding to the coding block relative to the global picture according to the conversion information and the coding information.
Further, step S13 is specifically: judging whether the second area range of each coding block has an overlapping part with the first area range, and if so, decoding the coding block.
In this embodiment, coordinates relative to the global picture are called absolute coordinates, and coordinates relative to the whole picture of one path of video data are called relative coordinates. Since the first area range is the area of the target playing picture relative to the global picture, and the second area range is the area of the video picture of a coding block relative to the global picture, the coordinates of both the first and the second area range are absolute coordinates. The coding information of the coding blocks is received; the coding information corresponds one-to-one to the coding blocks, a plurality of coding blocks arranged at preset positions form the whole picture of one path of video data, and the coordinates of each coding block relative to that whole picture are its relative area coordinates. Meanwhile, the absolute coordinates of the whole picture of each path of video data relative to the global picture are used as conversion information; because the conversion information is expressed relative to the global picture, it consists of absolute coordinates. Knowing the absolute coordinates of the whole picture of a path of video data relative to the global picture and the relative area coordinates of a coding block's picture within that whole picture, a coordinate conversion yields the absolute coordinates of the coding block's picture relative to the global picture, and the range they enclose is the second area range of the video picture of that coding block relative to the global picture. Of course, it can be understood that, in order to locate a coding block quickly, its coding information is determined first — for example, after obtaining coding information such as A11 or C32, the relative coordinates of the block within its whole picture are determined — and the coordinate conversion is then performed to determine the second area range of the block relative to the global picture.
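When the whole picture is placed into the global picture without scaling, this conversion is just an offset by the origin given in the conversion information. The snippet below continues the earlier sketch; the function name and the no-scaling assumption are ours.

```python
def second_area_range(rel_rect, conversion_rect):
    """Map a coding block's relative rectangle to its second area range (absolute).

    rel_rect        : (x0, y0, x1, y1) of the block relative to its path's whole picture
    conversion_rect : (X0, Y0, X1, Y1) of that whole picture relative to the global picture
    """
    ox, oy = conversion_rect[0], conversion_rect[1]   # origin of the whole picture
    x0, y0, x1, y1 = rel_rect
    return (x0 + ox, y0 + oy, x1 + ox, y1 + oy)

# Example derived from the embodiment of fig. 4: block C32 has relative rect (1, 1, 2, 2)
# and area C occupies (4, 4)-(8, 8) of the global picture, so its second area range
# is (5, 5, 6, 6).
```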
Then, in S13, it is judged for each received coding block whether its second area range has an overlapping part with the first area range, and if so, the coding block is decoded. The essence of the judgment is a comparison of the coordinates of the first area range and the second area range. In this embodiment the coding blocks are taken as the judged subject and the first area range as the comparison subject: each coding block is compared against the single, unchanging first area range. A specific way of judging overlap in this embodiment is to check whether the coordinates of the second area range of a coding block and the coordinates of the first area range share the same absolute coordinate values; if they do, the first and second area ranges have an overlapping part and the coding block corresponding to that second area range needs to be decoded. If the coordinates of the first area range and the second area range are completely different, the two ranges have no overlapping part and the coding block corresponding to that second area range can be ignored.
In another specific embodiment, in S13, determining the coding blocks that overlap the first area range includes: determining the coding blocks through which the boundary of the first area range passes and the coding blocks located entirely within the first area range, and decoding those coding blocks. In this embodiment the essence of the judgment is still a comparison of the coordinates of the first and second area ranges, but unlike the previous embodiment, the first area range is taken as the judging subject and the coding blocks as the comparison subject, the second area range of each coding block remaining unchanged. It is judged whether the boundary of the first area range passes through the second area range of a coding block and whether the coding block falls entirely within the first area range; if either condition is satisfied, the first and second area ranges are judged to overlap, and the coding block concerned needs to be decoded.
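Both embodiments amount to comparing two axis-aligned rectangles in the global picture's coordinate system. The sketch below treats "the boundary passes through the block, or the block lies entirely inside the first area range" as a standard rectangle-intersection test; this reduction is our simplification, not wording from the patent.

```python
def overlaps(first_range, second_range, touch_counts=False):
    """Return True if a block's second area range coincides in part with the first area range.

    Both arguments are absolute rectangles (x0, y0, x1, y1) in the global picture's
    coordinate system. With touch_counts=False, blocks that merely share an edge
    with the first area range are not selected for decoding.
    """
    ax0, ay0, ax1, ay1 = first_range
    bx0, by0, bx1, by1 = second_range
    if touch_counts:
        return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1
```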
In addition, the decoding stage of the display method of the present application further includes: splicing and fusing the decoded coding blocks, and displaying them. Since the coding blocks are many scattered data packets belonging to several different paths of video data, once decoding is completed the video pictures corresponding to the decoded coding blocks need to be spliced and fused to complete the picture display.
As shown in fig. 3 and fig. 4, the ultra-high-definition video display method of the present application is now described through a complete embodiment. Four telephoto cameras are arranged in 2 rows × 2 columns and their pictures are spliced into the global picture of fig. 4. Based on the orientation in fig. 4, the lower-left corner of area A is taken as the origin of absolute coordinates and a rectangular coordinate system is established. The coordinates of the four vertices of area A are (0, 0), (0, 4), (4, 4) and (4, 0); the coordinates of the four vertices of area B are (0, 4), (0, 8), (4, 8) and (4, 4); the coordinates of the four vertices of area C are (4, 4), (4, 8), (8, 8) and (8, 4); and the coordinates of the four vertices of area D are (4, 0), (4, 4), (8, 4) and (8, 0).
First, the segmentation scheme of each of the four paths of video data is determined; in this embodiment a 4 rows × 4 columns scheme is adopted.
Next, each path of video data is divided into 16 coding blocks by the 4 rows × 4 columns scheme. Taking the vertex at the lower-left corner of area A as the origin of coordinates, the coding information of the coding blocks of the video data corresponding to area A is, counting rows from the top, A11, A12, A13 and A14 in the first row; the vertex coordinates of the picture corresponding to coding block A11, relative to the whole picture, are (0, 3), (0, 4), (1, 4) and (1, 3), those of coding block A14 are (3, 3), (3, 4), (4, 4) and (4, 3), and the vertex coordinates of the pictures corresponding to the other coding blocks follow by analogy from the same rule and are not repeated in detail. The second row contains A21, A22, A23 and A24, the third row A31, A32, A33 and A34, and the fourth row A41, A42, A43 and A44. The coding information of the 16 coding blocks in area B is B11, B12, B13 and B14 in the first row, B21, B22, B23 and B24 in the second row, B31, B32, B33 and B34 in the third row, and B41, B42, B43 and B44 in the fourth row; in area C it is C11, C12, C13 and C14 in the first row, C21, C22, C23 and C24 in the second row, C31, C32, C33 and C34 in the third row, and C41, C42, C43 and C44 in the fourth row; and in area D it is D11, D12, D13 and D14 in the first row, D21, D22, D23 and D24 in the second row, D31, D32, D33 and D34 in the third row, and D41, D42, D43 and D44 in the fourth row.
After the division into coding blocks is finished, the coding blocks are encoded and then transmitted to the display device for display.
All coding blocks contained in each path of video data are received, and the first area range of the target playing picture is acquired; this first area range is the area shown as area E in fig. 4. The coding blocks that overlap area E are determined to be A12, A13, A14, A22, A23, A24, A32, A33, A34, B32, B33, B34, B42, B43, B44, C31, C32, C41, C42, D11, D12, D21, D22, D31 and D32, while the remaining coding blocks do not overlap area E. Therefore only the coding blocks listed above need to be decoded, and they are displayed after splicing and fusion; the remaining coding blocks need not be decoded. This reduces the load of the decoder and saves decoder resources. Because the coding information of each coding block contains the relative coordinates of that block within its whole picture, the absolute coordinates of the block relative to the global picture can be obtained quickly, so the blocks can be spliced and fused quickly and the display effect is better.
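Putting the earlier sketches together reproduces this decode set. Area E is only shown graphically in fig. 4, so the range (1, 1)-(6, 6) used below is our reading of the figure, chosen to be consistent with the 25 blocks listed above; the layout dictionary is likewise illustrative.

```python
# Four paths A-D, each a 4 x 4-unit whole picture, placed in the global picture as in fig. 4.
conversion = {"A": (0, 0, 4, 4), "B": (0, 4, 4, 8), "C": (4, 4, 8, 8), "D": (4, 0, 8, 4)}

blocks = {b.info: second_area_range(b.rel_rect, conversion[label])
          for label in conversion
          for b in split_path(label, 4, 4)}

area_e = (1, 1, 6, 6)        # assumed first area range of the target playing picture
to_decode = sorted(info for info, rect in blocks.items() if overlaps(area_e, rect))
print(len(to_decode), to_decode)   # 25 coding blocks, matching the list above
```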
Suppose the graphics card of the device that displays the video pictures can decode at most two paths of video, that is, its maximum processing capacity is the data of 32 coding blocks decoded at the same time. With the display method of the prior art, to display the target playing picture area E shown in fig. 4, the data of all 64 coding blocks of the four paths of video would have to be decoded before the picture could be displayed completely. However, since the graphics card can process at most the data of 32 coding blocks, the picture of the four paths of video cannot be displayed completely, and incomplete display or stuttering occurs. Moreover, many video pictures that do not need to be displayed are decoded, which greatly wastes decoder resources and leaves the decoder short of computing power. With the display method of the present application, each path of video is divided into 16 coding blocks; when area E is displayed, the four paths of video data do not need to be decoded in full, and only the 25 coding blocks that overlap area E need to be decoded, which is less than the maximum data volume the graphics card can process, so the four paths of video data are effectively decoded at the same time while the display effect is guaranteed.
With this display method, when the user adjusts the target playing picture area, for example by moving area E up, down, left or right, the coding blocks that overlap the target playing picture can be determined quickly and the relevant coding blocks decoded quickly; and if a coding block that originally overlapped the target playing picture no longer overlaps it, decoding of that coding block is stopped.
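Continuing the same sketch, this adjustment amounts to recomputing the decode set for the new first area range and diffing it against the old one; blocks entering the set start being decoded and blocks leaving it stop. The helper below is our illustration of that behaviour, not code from the patent.

```python
def update_decode_set(blocks, old_range, new_range):
    """Return (start, stop): coding blocks to begin decoding and to stop decoding
    when the target playing picture moves from old_range to new_range."""
    old_set = {info for info, rect in blocks.items() if overlaps(old_range, rect)}
    new_set = {info for info, rect in blocks.items() if overlaps(new_range, rect)}
    return new_set - old_set, old_set - new_set

# Example: shifting area E one unit to the right.
start, stop = update_decode_set(blocks, (1, 1, 6, 6), (2, 1, 7, 6))
```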
The application also provides an ultra-high-definition video display device applied to the video display process of an array camera, wherein the array camera comprises a plurality of telephoto cameras for shooting videos of different areas, and the multiple paths of video pictures shot by the telephoto cameras are spliced into a global picture. The ultra-high-definition video display device is used together with the ultra-high-definition video display method described above in order to implement that method.
The device comprises an encoding module and a decoding module. As shown in fig. 5, the encoding module comprises an encoding determination unit for determining the segmentation method of each path of video data; further, when determining the segmentation method, the encoding determination unit determines it according to the complexity of the picture in the video picture corresponding to each path of video data. That is, when the picture content of a path of video data is relatively complex, that path is divided into a larger number of coding blocks; when the picture content is relatively simple, that path is divided into a smaller number of coding blocks. For example, when one path of video shows an open sky or an empty grassland, the colors of the sky and the grassland are relatively uniform and contain few other objects, so that path occupies few decoding resources during decoding; even if the whole path is encoded as a single coding block, not much decoding resource is consumed in the decoding stage, so the path can be divided into a small number of coding blocks in the encoding stage, which improves encoding efficiency. When the content of a video picture is complex, such as a street full of people or a road with complicated traffic, the picture contains so much content that if the picture to be displayed only covers a small part of it and the whole path still has to be decoded, that path occupies a large share of decoding resources and takes a long time to decode. Therefore, a video picture with complicated content is divided into a larger number of coding blocks to reduce the amount of data that has to be decoded. Whether the picture content is complex can be judged by counting pixels with the same value: when more than a preset number of all the pixels share the same value, the picture content is considered simple; otherwise, it is considered complex. Of course, it is understood that other methods capable of judging picture content in the prior art can also be applied to the method of the invention.
The encoding module further comprises a dividing unit for dividing each path of video data into a plurality of coding blocks with different coding information according to the segmentation method. Although the division into coding blocks appears, in its overall effect, to be by rows and columns, it is essentially performed according to coordinates: each coding block corresponds to a rectangle, the lines connecting the coordinates of whose four vertices enclose the video picture of that coding block, and the coding information of each coding block corresponds one-to-one to the relative area coordinates of that block within the whole picture of its path of video data. When a coding block is decoded, its relative area coordinates within the whole picture of the path of video data can be determined quickly from its coding information, and its position can then be determined accurately, which guarantees decoding accuracy. In addition, it can be understood that the picture corresponding to each path of video data does not necessarily have to be divided uniformly; it may also be divided unevenly, for example into a larger central area containing important content and smaller peripheral areas without important content.
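As a hedged sketch of such an uneven division (the split positions are assumptions chosen for illustration), the blocks can simply be listed as explicit rectangles instead of being generated from a uniform grid; the coding information, the conversion to absolute coordinates and the overlap test downstream stay the same.

```python
# A non-uniform split of one 4 x 4 whole picture: a large central block
# surrounded by four peripheral strips (split positions 1 and 3 are illustrative).
uneven_blocks = [
    CodingBlock("A-center", (1, 1, 3, 3)),
    CodingBlock("A-top",    (1, 3, 3, 4)),
    CodingBlock("A-bottom", (1, 0, 3, 1)),
    CodingBlock("A-left",   (0, 0, 1, 4)),
    CodingBlock("A-right",  (3, 0, 4, 4)),
]
```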
The encoding module further comprises a coding unit for coding each coding block in each path of video data respectively. The way each coding block is coded is not particularly limited; any relatively mature coding method in the prior art may be used. Correspondingly, during decoding, each coding block is decoded with the decoding method matching its coding method.
The encoding module further comprises a data transmission unit for transmitting, after encoding is completed, the coding blocks contained in each path of video data to the decoding module of the device that displays the video pictures for decoding.
Further, as shown in fig. 6, the decoding module of the apparatus includes a receiving unit, configured to receive an encoded block included in each path of video data, where each path of video data is divided into multiple encoded blocks with different encoding information in an encoding stage; the first acquisition unit is used for acquiring a first area range of a target playing picture area relative to the global picture; and the decoding unit is used for determining the coding block with the overlapped part with the first area range and decoding the coding block.
According to differences in the specific display method, in one specific embodiment the receiving unit includes a receiving subunit and a processing subunit: the receiving subunit is configured to receive the coding information of each coding block included in each path of video data, and the processing subunit is configured to determine, according to the coding information, the second area range of the video picture corresponding to the coding block relative to the global picture. The decoding unit comprises a judging subunit, configured to judge whether the second area range of each coding block has an overlapping part with the first area range, and a first decoding subunit, configured to decode a coding block when its second area range has an overlapping part with the first area range. Further, the receiving subunit is further configured to acquire, as conversion information, the area range of the picture corresponding to each path of video data relative to the global picture; and the processing subunit is further configured to determine, according to the conversion information and the coding information, the second area range of the video picture corresponding to the coding block relative to the global picture.
In another specific embodiment, the receiving unit comprises a receiving sub-unit and a processing sub-unit, but in this embodiment, the decoding unit comprises:
a decoding determining unit, configured to determine the coding blocks that the boundary of the first region range passes through, and the coding blocks that are all located within the first region range;
and the second decoding subunit is used for decoding the coding blocks passed by the boundary of the first area range and the coding blocks which are all positioned in the first area range.
And the decoding module further comprises a splicing and fusing unit for splicing and fusing the decoded coding blocks and displaying the coding blocks.
In addition, the present application also provides a computer-readable storage medium, in which a computer program is stored, and when the program is executed by a processor, the computer program is used for implementing the ultra high definition video display method as described above.
The above-described aspects may be implemented individually or in various combinations, and such variations are within the scope of the present invention.
It will be understood by those skilled in the art that all or part of the steps of the above methods may be implemented by instructing the relevant hardware through a program, and the program may be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, and the like. Alternatively, all or part of the steps of the foregoing embodiments may also be implemented by using one or more integrated circuits, and accordingly, each module/unit in the foregoing embodiments may be implemented in the form of hardware, and may also be implemented in the form of a software functional module. The present invention is not limited to any specific form of combination of hardware and software.
It is to be noted that, in this document, the terms "comprises", "comprising" or any other variation thereof are intended to cover a non-exclusive inclusion, so that an article or apparatus including a series of elements includes not only those elements but also other elements not explicitly listed or inherent to such article or apparatus. Without further limitation, an element defined by the phrase "comprising … …" does not exclude the presence of additional like elements in the article or device comprising the element.
The above embodiments are merely to illustrate the technical solutions of the present invention and not to limit the present invention, and the present invention has been described in detail with reference to the preferred embodiments. It will be understood by those skilled in the art that various modifications and equivalent arrangements may be made without departing from the spirit and scope of the present invention and it should be understood that the present invention is to be covered by the appended claims.

Claims (10)

1. An ultra-high definition video display method is applied to an array camera video display process, wherein the array camera comprises a plurality of tele cameras for shooting videos in different areas, and multiple paths of video pictures shot by the tele cameras are spliced into a global picture, the method is characterized by comprising an encoding stage and a decoding stage, and the encoding stage of the method comprises the following steps:
determining a segmentation method of each path of video data;
according to the dividing method, dividing each path of video data into a plurality of coding blocks with different coding information;
respectively encoding each encoding block in each path of video data;
the decoding phase of the method comprises the steps of:
receiving a coding block contained in each path of video data, wherein each path of video data is divided into a plurality of coding blocks with different coding information in a coding stage;
acquiring a first area range of a target playing picture area relative to the global picture;
determining a coding block having a superposition part with the first area range, and decoding the coding block;
and splicing and fusing the decoded coding blocks, and displaying.
2. The method according to claim 1, wherein the receiving the coding blocks included in each path of video data comprises:
receiving coding information of each coding block contained in each path of video data, and determining a second area range of a video picture corresponding to the coding block relative to the global picture according to the coding information;
the determining of the coding block having the overlapping part with the first region range, and the decoding of the coding block includes:
and judging whether the second area range of each coding block has a superposition part with the first area range, and if so, decoding the coding block.
3. The ultra high definition video display method according to claim 2, wherein the receiving coding information of each coding block included in each path of video data, and determining a second area range of a video picture corresponding to the coding block relative to the global picture according to the coding information comprises:
acquiring the area range of the picture corresponding to each path of video data relative to the global picture as conversion information;
and determining a second area range of the video picture corresponding to the coding block relative to the global picture according to the conversion information and the coding information.
4. The method according to claim 1, wherein the determining a coding block having a coincidence with the first region range, and the decoding the coding block comprises:
and determining the coding blocks passed by the boundary of the first area range and the coding blocks which are all positioned in the first area range, and decoding the coding blocks.
5. The ultra high definition video display method according to claim 1, wherein the method of determining the segmentation method of each path of video data comprises:
and determining the segmentation method according to the complexity of pictures in the video pictures corresponding to each path of video data.
6. An ultra-high-definition video display device applied to an array camera video display process, wherein the array camera comprises a plurality of tele cameras for shooting videos of different areas, and multiple paths of video pictures shot by the plurality of tele cameras are spliced into a global picture, the device comprises an encoding module and a decoding module, and the encoding module of the device comprises:
an encoding determination unit, configured to determine a segmentation method of each path of video data;
a dividing unit, for dividing each path of video data into a plurality of coding blocks with different coding information according to the dividing method;
the coding unit is used for coding each coding block in each path of video data respectively;
the decoding module of the apparatus comprises:
a receiving unit, configured to receive a coding block included in each path of video data, where each path of video data is divided into multiple coding blocks with different coding information in a coding stage;
the first acquisition unit is used for acquiring a first area range of a target playing picture area relative to the global picture;
a decoding unit, configured to determine a coding block having a portion overlapping with the first region range, and decode the coding block;
and the splicing and fusing unit is used for splicing and fusing the decoded coding blocks and displaying the coding blocks.
7. The ultra high definition video display apparatus of claim 6, wherein the receiving unit further comprises:
a receiving subunit, configured to receive coding information of each coding block included in each path of video data;
the processing subunit is configured to determine, according to the coding information, a second area range of the video picture corresponding to the coding block relative to the global picture;
the decoding unit includes:
a judging subunit, configured to judge whether the second area range of each of the coding blocks has an overlapping portion with the first area range;
a first decoding subunit, configured to decode the coding block when the second region range of the coding block has a coinciding portion with the first region range.
8. The ultra high definition video display apparatus according to claim 7,
the receiving subunit is configured to acquire, as conversion information, an area range of a picture corresponding to each path of video data relative to the global picture;
and the processing subunit is configured to determine, according to the conversion information and the coding information, a second region range of the video picture corresponding to the coding block relative to the global picture.
9. The ultra high definition video display apparatus of claim 6, wherein the decoding unit comprises:
a decoding determining unit, configured to determine the coding blocks that the boundary of the first region range passes through, and the coding blocks that are all located within the first region range;
and the second decoding subunit is used for decoding the coding blocks passed by the boundary of the first area range and the coding blocks which are all positioned in the first area range.
10. The apparatus according to claim 6, wherein the encoding determining unit is further configured to determine the segmentation method according to complexity of pictures in the video pictures corresponding to each path of video data.
CN201910874253.4A 2019-09-17 2019-09-17 Ultrahigh-definition video display method and device Pending CN110602398A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910874253.4A CN110602398A (en) 2019-09-17 2019-09-17 Ultrahigh-definition video display method and device

Publications (1)

Publication Number Publication Date
CN110602398A true CN110602398A (en) 2019-12-20

Family

ID=68860323

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910874253.4A Pending CN110602398A (en) 2019-09-17 2019-09-17 Ultrahigh-definition video display method and device

Country Status (1)

Country Link
CN (1) CN110602398A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101453639A (en) * 2007-11-29 2009-06-10 展讯通信(上海)有限公司 Encoding, decoding method and system for supporting multi-path video stream of ROI region
CN102811347A (en) * 2011-05-30 2012-12-05 索尼公司 Image processing device, image processing method, and program
CN103905741A (en) * 2014-03-19 2014-07-02 合肥安达电子有限责任公司 Ultra-high-definition panoramic video real-time generation and multi-channel synchronous play system
CN104410857A (en) * 2014-12-26 2015-03-11 广东威创视讯科技股份有限公司 Image display control method and related device
US20170024920A1 (en) * 2014-05-09 2017-01-26 Huawei Technologies Co., Ltd. Method and Related Apparatus for Capturing and Processing Image Data
CN109168032A (en) * 2018-11-12 2019-01-08 广州酷狗计算机科技有限公司 Processing method, terminal, server and the storage medium of video data

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211110

Address after: 518000 409, Yuanhua complex building, 51 Liyuan Road, merchants street, Nanshan District, Shenzhen, Guangdong

Applicant after: Shenzhen zhuohe Technology Co.,Ltd.

Address before: No. 2501-1, 25 / F, block D, Tsinghua Tongfang science and technology building, No. 1 courtyard, Wangzhuang Road, Haidian District, Beijing 100083

Applicant before: Beijing Zhuohe Technology Co.,Ltd.

TA01 Transfer of patent application right
RJ01 Rejection of invention patent application after publication

Application publication date: 20191220

RJ01 Rejection of invention patent application after publication