CN113395564B - Image display method, device and equipment - Google Patents


Info

Publication number
CN113395564B
Authority
CN
China
Prior art keywords: image, sub-image, target sub-image, displayed, output node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110157083.5A
Other languages
Chinese (zh)
Other versions
CN113395564A
Inventor
雷日勇
黄玮
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Publication of CN113395564A
Application granted
Publication of CN113395564B
Legal status: Active


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1446Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display display composed of modules, e.g. video walls
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The application provides an image display method, device and equipment, wherein the method comprises the following steps: acquiring attribute information of a target sub-image, the attribute information comprising image display position information and a sub-image identifier of the target sub-image; acquiring the target sub-image corresponding to the sub-image identifier, wherein the target sub-image is a sub-image of an image to be displayed and the image to be displayed is divided into M×N sub-images; and displaying the target sub-image according to the image display position information. With this technical scheme, network bandwidth and the processing resources of the output node can be saved in high-resolution/ultra-high-resolution scenarios.

Description

Image display method, device and equipment
Technical Field
The application relates to the field of video surveillance, and in particular to an image display method, device and equipment.
Background
With the rapid development of video technology, image resolution keeps increasing, and a tiled display system may be used to display high-resolution images. A tiled display system is a display system implemented by hardware and software that is capable of displaying high-resolution/ultra-high-resolution images and is typically composed of a plurality of display units. As a modern video tool, tiled display systems have been widely used in fields such as broadcast television systems and communication network management systems.
To display a high-resolution image, the image is transmitted to each display unit of the tiled display system, and each display unit displays sub-images of the high-resolution image. For example, display unit a displays sub-image 1 and sub-image 2 of the high-resolution image on the display device, and display unit b displays sub-image 3 and sub-image 4. Through the cooperation of display unit a and display unit b, the complete high-resolution image is displayed on the display device.
In the above manner, each display unit displays only some of the sub-images, yet the entire high-resolution image is transmitted to every display unit. Network bandwidth and the processing resources of the display units are therefore wasted.
Disclosure of Invention
In view of this, the present application provides an image display method, the method comprising:
acquiring attribute information of a target sub-image, wherein the attribute information comprises image display position information and a sub-image identifier of the target sub-image;
acquiring the target sub-image corresponding to the sub-image identifier, wherein the target sub-image is a sub-image of an image to be displayed, the image to be displayed is divided into M×N sub-images, and M and N are positive integers;
and displaying the target sub-image according to the image display position information.
The application provides an image display method, comprising:
obtaining a segmentation mode of an image to be displayed, wherein the segmentation mode indicates that the image to be displayed is segmented into M×N sub-images, M being a positive integer and N being a positive integer;
for a target sub-image in the image to be displayed, determining image display position information of the target sub-image and an output node for processing the target sub-image according to the segmentation mode;
and transmitting attribute information of the target sub-image to the output node, the attribute information comprising the image display position information and a sub-image identifier of the target sub-image, so that the output node displays the target sub-image corresponding to the sub-image identifier according to the image display position information.
The application provides an image display method, comprising:
acquiring an image to be displayed and a segmentation mode of the image to be displayed;
dividing the image to be displayed into M×N sub-images according to the segmentation mode;
sending the segmentation mode to a master control device, so that the master control device determines image display position information of a target sub-image and an output node for processing the target sub-image according to the segmentation mode, and sends attribute information of the target sub-image to the output node, the attribute information comprising the image display position information and a sub-image identifier of the target sub-image;
and transmitting an encoded bitstream of the target sub-image to the output node, so that the output node obtains the target sub-image corresponding to the sub-image identifier from the encoded bitstream and displays the target sub-image according to the image display position information.
The present application provides an image display apparatus, the apparatus comprising:
an acquisition module, configured to acquire attribute information of a target sub-image, wherein the attribute information comprises image display position information and a sub-image identifier of the target sub-image, and to acquire the target sub-image corresponding to the sub-image identifier, wherein the target sub-image is a sub-image of an image to be displayed, the image to be displayed is divided into M×N sub-images, and M and N are positive integers;
and a display module, configured to display the target sub-image according to the image display position information.
The present application provides an image display apparatus, the apparatus comprising:
an acquisition module, configured to obtain a segmentation mode of an image to be displayed, wherein the segmentation mode indicates that the image to be displayed is segmented into M×N sub-images, M being a positive integer and N being a positive integer;
a determining module, configured to determine, for a target sub-image in the image to be displayed, image display position information of the target sub-image and an output node for processing the target sub-image according to the segmentation mode;
and a sending module, configured to send attribute information of the target sub-image to the output node, wherein the attribute information comprises the image display position information and the sub-image identifier of the target sub-image, so that the output node displays the target sub-image corresponding to the sub-image identifier according to the image display position information.
The present application provides an output node, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute the machine-executable instructions to perform the following steps:
acquiring attribute information of a target sub-image, wherein the attribute information comprises image display position information and a sub-image identifier of the target sub-image;
acquiring the target sub-image corresponding to the sub-image identifier, wherein the target sub-image is a sub-image of an image to be displayed, the image to be displayed is divided into M×N sub-images, and M and N are positive integers;
and displaying the target sub-image according to the image display position information.
The application provides a master control device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute the machine-executable instructions to perform the following steps:
obtaining a segmentation mode of an image to be displayed, wherein the segmentation mode indicates that the image to be displayed is segmented into M×N sub-images, M being a positive integer and N being a positive integer;
for a target sub-image in the image to be displayed, determining image display position information of the target sub-image and an output node for processing the target sub-image according to the segmentation mode;
and transmitting attribute information of the target sub-image to the output node, the attribute information comprising the image display position information and a sub-image identifier of the target sub-image, so that the output node displays the target sub-image corresponding to the sub-image identifier according to the image display position information.
As can be seen from the above technical solutions, in the embodiments of the present application, the output node obtains and displays a target sub-image of the image to be displayed, so that in high-resolution/ultra-high-resolution scenarios both network bandwidth and the processing resources of the output node are saved. For example, only the target sub-image of the image to be displayed is transmitted to the output node, rather than the image to be displayed (i.e., the high-resolution image) itself, which conserves network bandwidth; and the output node decodes only the target sub-image rather than the entire image, which saves its processing (i.e., decoding) resources.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required by the embodiments are briefly described below. Obviously, the drawings described below are only some embodiments of the present application, and a person of ordinary skill in the art may obtain other drawings from them.
FIG. 1 is a flow chart of an image display method in one embodiment of the present application;
FIGS. 2A-2C are schematic illustrations of image segmentation in one embodiment of the present application;
FIGS. 2D-2F are schematic diagrams of window partitioning in one embodiment of the present application;
FIG. 3 is a flow chart of an image display method in another embodiment of the present application;
FIG. 4 is a flow chart of an image display method in another embodiment of the present application;
FIG. 5A is a schematic view of an application scenario of a multi-track local source in one embodiment of the present application;
FIG. 5B is a flowchart of a method for displaying images of a multi-track local source in one embodiment of the present application;
FIG. 6A is a schematic illustration of an application scenario of a multi-track network source in one embodiment of the present application;
FIG. 6B is a flow chart of a method of displaying images of a multi-track network source in one embodiment of the present application;
FIGS. 7A and 7B are block diagrams of an image display device in an embodiment of the present application;
FIG. 8A is a block diagram of an output node in one embodiment of the present application;
FIG. 8B is a block diagram of a master device in one embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this application and the claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the present application. Furthermore, depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining".
An embodiment of the present application provides an image display method, which may be applied to a master control device. Referring to fig. 1, a schematic flow chart of the image display method, the method may include:
Step 101: obtain a segmentation mode of the image to be displayed, where the segmentation mode indicates that the image to be displayed is segmented into M×N sub-images, M and N each being a positive integer.
M×N sub-images may refer to: dividing the image to be displayed into M sub-images in its width direction and into N sub-images in its height direction; or dividing it into M sub-images in the height direction and N sub-images in the width direction. For convenience of description, in the following embodiments the image to be displayed is divided into M sub-images in the width direction and N sub-images in the height direction.
For example, the image to be displayed may be a high-resolution/ultra-high-resolution image, and when it is displayed, its sub-images are displayed. On this basis, the image to be displayed may be divided into M×N sub-images, and the segmentation mode indicates how the image is divided into the M×N sub-images, where M and N are both positive integers and may be the same or different.
For example, when the segmentation mode indicates that the image to be displayed is divided into 3×3 sub-images, as shown in fig. 2A, the image may be divided into 9 sub-images, sub-image a1 to sub-image a9.
For another example, when the segmentation mode indicates that the image is divided into 2×4 sub-images, as shown in fig. 2B, the image may be divided into 8 sub-images, sub-image a1 to sub-image a8.
For another example, when the segmentation mode indicates that the image is divided into 4×2 sub-images, as shown in fig. 2C, the image may be divided into 8 sub-images, sub-image a1 to sub-image a8.
Of course, the above are only a few examples of the segmentation mode, which is not limited thereto.
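The M×N split described above can be sketched as follows. This is an illustrative reconstruction, not code from the patent: the names `split_image` and `SubImage` are assumptions, and equal-sized tiles with dimensions divisible by M and N are assumed.

```python
# Illustrative sketch of the M x N segmentation mode (names are assumptions).
from dataclasses import dataclass

@dataclass
class SubImage:
    ident: int  # sub-image identifier, modeled as an index (a1 -> 1, a2 -> 2, ...)
    x: int      # left edge of the sub-image within the full image
    y: int      # top edge of the sub-image within the full image
    w: int      # sub-image width
    h: int      # sub-image height

def split_image(width: int, height: int, m: int, n: int) -> list:
    """Divide a width x height image into m sub-images along the width
    direction and n sub-images along the height direction (m * n total)."""
    sub_w, sub_h = width // m, height // n
    subs, ident = [], 1
    for row in range(n):
        for col in range(m):
            subs.append(SubImage(ident, col * sub_w, row * sub_h, sub_w, sub_h))
            ident += 1
    return subs

# A 3x3 split of a 3840x2160 image yields sub-images a1..a9, each 1280x720.
subs = split_image(3840, 2160, 3, 3)
```

The same function covers the 2×4 and 4×2 splits of figs. 2B and 2C by swapping the `m` and `n` arguments.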
In one possible implementation, for a local source, the master device may obtain the segmentation mode of the image to be displayed from the input node. For example, in an application scenario with a local source (introduced in subsequent embodiments), the input node acquires the segmentation mode of the image to be displayed and splits the image into M×N sub-images accordingly. On this basis, the master device may send a query message to the input node, and the input node returns the segmentation mode after receiving the query message.
In another possible implementation, for a network source, the master device may obtain the segmentation mode of the image to be displayed from the remote device. For example, in an application scenario with a network source (introduced in subsequent embodiments), the remote device acquires the segmentation mode and encodes the image to be displayed accordingly to obtain an encoded bitstream. If the segmentation mode indicates that the image is divided into M×N sub-images, the encoded bitstream contains content corresponding to each of the M×N sub-images. On this basis, the master device may send a query message to the remote device, and the remote device returns the segmentation mode after receiving the query message.
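The query exchange described above might look like the following minimal sketch. The patent does not specify a wire protocol, so a line-delimited JSON message over TCP and the message field names are all assumptions.

```python
# Hypothetical query exchange: the master device asks the input node (local
# source) or remote device (network source) for the segmentation mode.
# Line-delimited JSON over TCP is an assumption, not the patent's protocol.
import json
import socket

def query_segmentation_mode(host: str, port: int, timeout: float = 2.0):
    """Send a query message and return (M, N) parsed from the reply."""
    with socket.create_connection((host, port), timeout=timeout) as conn:
        conn.sendall(json.dumps({"type": "query_segmentation"}).encode() + b"\n")
        reply = conn.makefile("r").readline()
    mode = json.loads(reply)
    return mode["m"], mode["n"]
```

The same request works against either peer; only the destination address differs between the local-source and network-source scenarios.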
Step 102: for a target sub-image in the image to be displayed, determine image display position information of the target sub-image and an output node for processing the target sub-image according to the segmentation mode. The target sub-image may be any sub-image of the image to be displayed; that is, for each sub-image, its image display position information and the output node that processes it may be determined according to the segmentation mode.
In one possible implementation, the display area of the target sub-image may be determined according to the segmentation mode. If an overlapping area exists between the display area of a display unit and the display area of the target sub-image, the output node corresponding to that display unit is determined as an output node for processing the target sub-image, and the image display position information of the target sub-image is determined according to the overlapping area. Further, after the overlapping area is determined, the division information of the target sub-image may also be determined according to it.
For example, the image display window of the image to be displayed may be divided into M×N display areas according to the segmentation mode. Taking fig. 2A as an example, where the segmentation mode divides the image into 3×3 sub-images, the image display window may be divided into 3×3 display areas. Referring to fig. 2D, the master device may obtain the coordinate information of the image display window and divide it into 3×3 display areas; the display areas correspond one-to-one with the sub-images, and each sub-image has the same size as its display area. For example, display area 1 in fig. 2D corresponds to sub-image a1 of fig. 2A, display area 2 corresponds to sub-image a2, and so on, up to display area 9 corresponding to sub-image a9.
In summary, the display area of each target sub-image can be determined; for example, if the target sub-image is sub-image a1, its display area is display area 1, and so on.
For example, a plurality of display units and a plurality of output nodes may be deployed, with the display units connected to the output nodes. One output node may be connected to a single display unit, or to two or more display units; this is not limited. Referring to fig. 2D, taking the deployment of display units a, b, c, and d as an example, and assuming each output node connects two display units: display units a and b may be connected to output node 1 and display units c and d to output node 2; or display units a and c to output node 1 and display units b and d to output node 2; or display units b and c to output node 1 and display units a and d to output node 2. The connection manner is not limited.
Referring to fig. 2D, each display unit has its own corresponding display area, i.e., each display unit displays images within its own display area. For example, the display area of display unit a is display area A, that is, display unit a displays images in display area A; the display area of display unit b is display area B; the display area of display unit c is display area C; and the display area of display unit d is display area D.
Since the master device knows the display area of the target sub-image and the display area of each display unit, it can determine whether an overlapping area exists between them. Referring to fig. 2E, an overlapping area s1 exists between display area A of display unit a and display area 1 of sub-image a1; therefore, the output node corresponding to display unit a is determined as an output node for processing sub-image a1. Likewise, since an overlapping area s2 exists between display area C of display unit c and display area 1 of sub-image a1, the output node corresponding to display unit c is also determined as an output node for processing sub-image a1. The other sub-images are handled similarly to sub-image a1.
Illustratively, since the overlapping area s1 exists between display area A of display unit a and display area 1 of sub-image a1, the image display position information of sub-image a1 may be determined from the overlapping area s1 and may be the position information of s1: for example, the upper-left corner coordinates (or upper-right, lower-left, or lower-right corner coordinates) of s1 together with its width and height; or the upper-left, upper-right, lower-left, and lower-right corner coordinates of s1. Of course, these are merely examples of image display position information, which is not limited thereto.
Since the overlapping area s2 exists between display area C of display unit c and display area 1 of sub-image a1, the image display position information of sub-image a1 may likewise be determined from s2 and may be the position information of s2; the position information is not limited.
For example, since the overlapping area s1 exists between display area A of display unit a and display area 1 of sub-image a1, the division information of sub-image a1 is determined from s1; as shown in fig. 2F, the division information may be the coordinates of point K and point Q, meaning that sub-image a1 is divided along the line between K and Q. Similarly, since the overlapping area s2 exists between display area C of display unit c and display area 1 of sub-image a1, the division information of sub-image a1 determined from s2 may also be the coordinates of points K and Q.
In summary, for each target sub-image, its image display position information, the output node(s) for processing it, and its division information can be obtained.
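The overlap test of step 102 can be sketched as follows: display-unit areas and sub-image display areas are treated as axis-aligned rectangles, an output node is selected for a sub-image whenever its display unit's area intersects the sub-image's display area, and the intersection rectangle serves as the image display position information. Function and variable names are illustrative assumptions.

```python
# Illustrative sketch of the overlap test in step 102 (names are assumptions).
# Rectangles are (x, y, w, h) tuples in image-display-window coordinates.

def overlap(rect_a, rect_b):
    """Return the overlapping rectangle of two rects, or None if disjoint."""
    ax, ay, aw, ah = rect_a
    bx, by, bw, bh = rect_b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    if x2 <= x1 or y2 <= y1:
        return None
    return (x1, y1, x2 - x1, y2 - y1)

def assign_output_nodes(sub_area, display_units):
    """display_units maps a unit name to (display_area, output_node).
    Returns [(output_node, overlap_rect)] for every unit whose display area
    overlaps the sub-image's display area; each overlap rectangle serves as
    the image display position information sent to that output node."""
    assignments = []
    for area, node in display_units.values():
        s = overlap(area, sub_area)
        if s is not None:
            assignments.append((node, s))
    return assignments
```

With display area 1 of sub-image a1 spanning both display unit a and display unit c, as in fig. 2E, `assign_output_nodes` returns one (node, overlap) pair per unit, corresponding to the overlapping areas s1 and s2.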
Step 103: transmit attribute information of the target sub-image to the output node, the attribute information comprising the image display position information and the sub-image identifier of the target sub-image, so that the output node displays the target sub-image corresponding to the sub-image identifier according to the image display position information.
For example, referring to fig. 2F, since the overlapping area s1 exists between display area A of display unit a and display area 1 of sub-image a1, the attribute information of sub-image a1, comprising its image display position information (i.e., the position information of overlapping area s1) and its sub-image identifier, may be transmitted to the output node corresponding to display unit a. After receiving the attribute information, the output node displays the sub-image a1 corresponding to the sub-image identifier according to the position information of overlapping area s1; the display process of sub-image a1 is described in subsequent embodiments and is not repeated here.
When the attribute information of sub-image a1 is transmitted to the output node corresponding to display unit a, it may further include the division information of sub-image a1; for example, when the division information is determined from overlapping area s1, it may be the coordinates of points K and Q.
Likewise, referring to fig. 2F, since the overlapping area s2 exists between display area C of display unit c and display area 1 of sub-image a1, the attribute information of sub-image a1, comprising its image display position information (i.e., the position information of overlapping area s2) and its sub-image identifier, may be transmitted to the output node corresponding to display unit c. After receiving the attribute information, that output node displays the sub-image a1 corresponding to the sub-image identifier according to the position information of overlapping area s2.
When the attribute information of sub-image a1 is transmitted to the output node corresponding to display unit c, it may further include the division information of sub-image a1; for example, when the division information is determined from overlapping area s2, it may be the coordinates of points K and Q.
For example, the attribute information of the sub-image may not include the division information of the sub-image. For example, for the sub-image a4, the display area 4 of the sub-image a4 is located entirely within the display area C of the display unit C, so when the attribute information of the sub-image a4 is sent to the output node corresponding to the display unit C, the attribute information of the sub-image a4 does not include division information; that is, the sub-image a4 does not need to be divided, and the whole sub-image a4 is displayed.
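The overlap computation described above (a sub-image's display area intersected with a display unit's area) can be sketched as a rectangle intersection. The data layout here is an illustrative assumption: rectangles are (x, y, width, height) tuples, and the segmentation information is reduced to the overlap rectangle itself, standing in for the K-point/Q-point coordinates the patent refers to.

```python
def overlap(rect_a, rect_b):
    """Intersection of two rectangles given as (x, y, w, h); None if disjoint."""
    ax, ay, aw, ah = rect_a
    bx, by, bw, bh = rect_b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    if x2 <= x1 or y2 <= y1:
        return None
    return (x1, y1, x2 - x1, y2 - y1)

def build_attribute_info(sub_image_id, sub_area, unit_area):
    """Attribute info the master device would send to one display unit's output node."""
    s = overlap(unit_area, sub_area)
    if s is None:
        return None  # this display unit does not show any part of the sub-image
    info = {"sub_image_id": sub_image_id, "display_position": s}
    if s != sub_area:  # sub-image spans display units -> segmentation info needed
        info["segmentation"] = s  # stand-in for the K/Q point coordinates
    return info
```

With these conventions, a sub-image wholly inside one display unit (like a4) yields attribute info without a `segmentation` entry, matching the behavior described above.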
As can be seen from the above technical solutions, in the embodiments of the present application, the output node may obtain a target sub-image of an image to be displayed, and display the target sub-image of the image to be displayed, so that in a high resolution/ultra-high resolution scenario, network bandwidth can be saved, and processing resources of the output node can be saved. For example, the target sub-image of the image to be displayed is transmitted to the output node, instead of the image to be displayed (i.e., the high resolution image) itself, thereby conserving network bandwidth. The output node decodes the target sub-image of the image to be displayed instead of the image to be displayed itself, thereby saving processing resources (i.e., decoding resources) of the output node.
An embodiment of the present application provides an image display method, which may be applied to an output node, and is shown in fig. 3, which is a schematic flow chart of the image display method, and the method may include:
step 301, obtaining attribute information of a target sub-image; wherein the attribute information of the target sub-image may include, but is not limited to, image display position information and sub-image identification of the target sub-image.
Referring to the above embodiment, the main control device may transmit the attribute information of the target sub-image to the output node, and thus the output node may acquire the attribute information of the target sub-image from the main control device.
For example, assuming that the output node corresponding to the display unit a is the output node 1, referring to the above embodiment, the main control device may transmit attribute information of the sub-image a1 to the output node 1, and the output node 1 may acquire the attribute information of the sub-image a1, where the attribute information includes image display position information of the sub-image a1 (i.e., position information of the overlapping region s 1) and sub-image identification of the sub-image a1. Assuming that the output node corresponding to the display unit c is the output node 2, the main control device may send the attribute information of the sub-image a1 to the output node 2, and the output node 2 may acquire the attribute information of the sub-image a1, where the attribute information includes the image display position information of the sub-image a1 (i.e., the position information of the overlapping region s 2) and the sub-image identifier of the sub-image a1.
Step 302, obtaining a target sub-image corresponding to the sub-image identification; the target sub-image is a sub-image in the image to be displayed, the image to be displayed is divided into M x N sub-images, and M and N are positive integers.
For example, since the attribute information of the sub-image a1 includes the sub-image identification of the sub-image a1, the output node 1/output node 2 can acquire the target sub-image corresponding to the sub-image identification, that is, the sub-image a1.
In a possible embodiment, the output node may obtain a first encoded bit stream for the sub-image identification from the input node, and decode the first encoded bit stream to obtain a target sub-image corresponding to the sub-image identification, i.e. the target sub-image of the image to be displayed. For example, the output node sends the sub-picture identification to the input node such that the input node sends the first encoded bit stream for the sub-picture identification to the output node, so that the output node can decode the first encoded bit stream to obtain the target sub-picture.
For example, referring to fig. 2A, the input node divides the image to be displayed into 9 sub-images, namely sub-image a1 to sub-image a9. The input node may encode the sub-image a1 to obtain a first encoded bit stream 1, encode the sub-image a2 to obtain a first encoded bit stream 2, and so on.
After obtaining the attribute information of the sub-image a1, the output node 1 transmits the sub-image identification of the sub-image a1 to the input node, since the attribute information includes the sub-image identification of the sub-image a1. The input node may send the first coded bit stream 1 corresponding to the sub-picture identification to the output node 1. After receiving the first coded bit stream 1, the output node 1 decodes the first coded bit stream 1 to obtain a sub-image a1.
Similarly, after obtaining the attribute information of the sub-image a1, the output node 2 transmits the sub-image identifier of the sub-image a1 to the input node, because the attribute information includes the sub-image identifier of the sub-image a1. The input node may send the first coded bit stream 1 corresponding to the sub-picture identification to the output node 2. After receiving the first coded bit stream 1, the output node 2 decodes the first coded bit stream 1 to obtain a sub-image a1.
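A minimal sketch of this first embodiment, with all class and method names hypothetical and encoding/decoding reduced to string stubs: the input node keeps one independently encoded stream per sub-image, and the output node requests only the stream whose sub-image identifier it received, which is why bandwidth and decoding work stay proportional to one sub-image rather than the whole image.

```python
# Hypothetical in-memory stand-ins; real nodes would communicate over the
# network switch, and "encoding"/"decoding" here are string stubs.
class InputNode:
    def __init__(self, sub_image_ids):
        # One independently encoded bit stream per sub-image
        # (first encoded bit stream 1, 2, ... in the patent's terms).
        self.streams = {sid: "bitstream-for-" + sid for sid in sub_image_ids}

    def get_stream(self, sub_image_id):
        # Serve only the stream the output node asked for.
        return self.streams[sub_image_id]

class OutputNode:
    def __init__(self, input_node):
        self.input_node = input_node

    def fetch_and_decode(self, sub_image_id):
        stream = self.input_node.get_stream(sub_image_id)    # request by ID
        return stream.replace("bitstream-for-", "decoded-")  # stub decode
```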
In another possible embodiment, the output node obtains a second encoded bitstream for the image to be displayed from the remote device, decodes the content in the second encoded bitstream corresponding to the sub-image identifier (i.e. decodes part of the content in the second encoded bitstream, instead of decoding all the content in the second encoded bitstream), resulting in a target sub-image corresponding to the sub-image identifier, i.e. the target sub-image of the image to be displayed.
For example, referring to fig. 2A, assuming that the image to be displayed needs to be divided into 9 sub-images, namely sub-image a1 to sub-image a9, the remote device may encode the image to be displayed (note that here the image to be displayed is encoded as a whole, instead of encoding each sub-image separately), thereby obtaining a second encoded bitstream. The second encoded bit stream may include the content corresponding to sub-image a1, the content corresponding to sub-image a2, and so on, up to the content corresponding to sub-image a9.
The output node 1 may send a request message to the remote device, which upon receiving the request message may send the second encoded bit stream to the output node 1. After obtaining the second encoded bitstream, the output node 1 may decode the content corresponding to the sub-image identifier in the second encoded bitstream to obtain the sub-image a1, because the attribute information of the sub-image a1 includes the sub-image identifier of the sub-image a1.
Similarly, the output node 2 may send a request message to the remote device, which may send the second encoded bit stream to the output node 2 after receiving the request message. After obtaining the second encoded bitstream, the output node 2 may decode the content corresponding to the sub-image identifier in the second encoded bitstream to obtain the sub-image a1, because the attribute information of the sub-image a1 includes the sub-image identifier of the sub-image a1.
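A sketch of this second embodiment, under the simplifying assumption that the second encoded bitstream is addressable by sub-image identifier; in a real multi-track stream each sub-image's content would be an independently decodable sub-stream located by parsing the bitstream.

```python
def decode_target_sub_image(second_bitstream, sub_image_id):
    """Decode only the content matching sub_image_id rather than the whole
    bitstream; the bitstream is modelled as a dict and decoding is stubbed."""
    encoded = second_bitstream[sub_image_id]  # locate the matching content
    return encoded.upper()                    # stand-in for real decoding
```

The point of the sketch is that only one entry of the bitstream is touched, mirroring how the output node decodes part of the second encoded bitstream instead of all of it.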
And step 303, displaying the target sub-image according to the image display position information.
In one possible embodiment, the attribute information of the target sub-image may further include segmentation information of the target sub-image. Based on the above, the output node may divide the target sub-image into at least two data blocks according to the division information of the target sub-image, and select a target data block to be displayed from the at least two data blocks. Then, the output node displays the target data block according to the image display position information.
For example, in practical applications, if the target sub-image does not span multiple display units, i.e., the display area of the target sub-image is located in the display area of only one display unit (such as sub-image a4), the attribute information of the target sub-image may not include the segmentation information of the target sub-image. In this case, the target sub-image does not need to be divided into at least two data blocks; instead, the entire target sub-image is displayed directly according to the image display position information.
Referring to the above embodiment, referring to fig. 2F, the output node 1 may acquire attribute information of the sub-image a1, which includes image display position information of the sub-image a1 (i.e., position information of the overlapping region s 1), sub-image identification of the sub-image a1, and division information (e.g., K-point coordinates and Q-point coordinates) of the sub-image a1. Based on the sub-image identification of sub-image a1, output node 1 acquires sub-image a1. Based on the division information of the sub-image a1, the output node 1 divides the sub-image a1 into two data blocks, namely, a data block 1 corresponding to the overlap region s1 and a data block 2 corresponding to the overlap region s2, and selects the data block 1 corresponding to the image display position information (namely, the position information of the overlap region s 1), namely, the data block 1, from the data block 1 and the data block 2 as the target data block. Then, the output node 1 displays the data block 1 according to the position information of the overlap region s 1.
Similarly, the output node 2 may acquire attribute information of the sub-image a1, which includes image display position information of the sub-image a1 (i.e., position information of the overlapping region s 2), sub-image identification of the sub-image a1, and division information (e.g., K-point coordinates and Q-point coordinates) of the sub-image a1. Based on the sub-image identification of sub-image a1, output node 2 acquires sub-image a1. Based on the division information of the sub-image a1, the output node 2 divides the sub-image a1 into a data block 1 corresponding to the overlap region s1 and a data block 2 corresponding to the overlap region s2, and selects the data block 2 corresponding to the image display position information (i.e., the position information of the overlap region s 2) from the data block 1 and the data block 2. The output node 2 displays the data block 2 based on the position information of the overlap region s 2.
So far, the output node 1 displays the data block 1 in the sub-image a1 in the overlapping area s1, and the output node 2 successfully displays the data block 2 in the sub-image a1 in the overlapping area s2, namely, the display of the sub-image a1 is completed.
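The block selection performed by output node 1 and output node 2 can be sketched as follows, assuming, purely for illustration, that the K/Q segmentation coordinates reduce to a single vertical split column and that a sub-image is a 2D list of pixel rows:

```python
def split_and_select(sub_image, split_x, want_left):
    """Split a sub-image (2D list of pixel rows) at column split_x into two
    data blocks and return the one matching this output node's overlap region:
    the left block (overlap s1) or the right block (overlap s2)."""
    left = [row[:split_x] for row in sub_image]    # data block 1
    right = [row[split_x:] for row in sub_image]   # data block 2
    return left if want_left else right
```

Output node 1 would call this with `want_left=True` and output node 2 with `want_left=False`, each discarding the block that falls outside its own display unit.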
In the above embodiment, the display process of the sub-image a1 is described, and the display processes of the sub-image a2, the sub-image a3, the sub-image a5, and the sub-image a8 are similar to the display process of the sub-image a1, and are not described here again.
Referring to fig. 2D, attribute information of the sub-image a4 is transmitted to the output node 2 corresponding to the display unit C, the attribute information including image display position information of the sub-image a4, the sub-image identification of the sub-image a4, and division information of the sub-image a4, and since the sub-image a4 is located only in the display area C, the division information of the sub-image a4 is empty. The output node 2 acquires the sub-image a4 corresponding to the sub-image identification, and displays the sub-image a4 directly based on the image display position information without dividing the sub-image a4. The display process of the sub-image a6, the sub-image a7 and the sub-image a9 is similar to the display process of the sub-image a4, and will not be described again.
In one possible implementation, displaying the target sub-image according to the image display position information may include, but is not limited to: the output node determines an image display window of the target sub-image according to the image display position information; the target sub-image is displayed in the image display window based on the time stamp of the target sub-image.
For example, the image display position information (taking the position information of the overlapping region s1 as an example) includes the upper-left corner coordinates of the overlapping region s1 and the width and height of the overlapping region s1; based on this position information, the output node 1 can determine the image display window of the sub-image a1. Then, the output node 1 displays the sub-image a1 in the image display window. When displaying the sub-image a1, the output node may also determine the time stamp of the sub-image a1, which indicates at which time the sub-image a1 is to be displayed; therefore, the output node 1 displays the sub-image a1 at that time.
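A sketch of this windowed, timestamped display step; the `render` callback and the monotonic-clock timestamps are illustrative assumptions, since the patent does not specify the rendering interface.

```python
import time

def display_in_window(position, sub_image, timestamp, render, now=time.monotonic):
    """position: (x, y, w, h) of the overlap region on this display unit;
    render: hypothetical draw call of the display unit. Waits until the
    sub-image's timestamp so that every output node presents its block of
    the same frame at the same instant."""
    x, y, w, h = position          # the image display window
    delay = timestamp - now()
    if delay > 0:
        time.sleep(delay)          # hold until the presentation time
    render(sub_image, x, y, w, h)
```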
As can be seen from the above technical solutions, in the embodiments of the present application, the output node may obtain a target sub-image of an image to be displayed, and display the target sub-image of the image to be displayed, so that in a high resolution/ultra-high resolution scenario, network bandwidth can be saved, and processing resources of the output node can be saved. For example, the target sub-image of the image to be displayed is transmitted to the output node, instead of the image to be displayed (i.e., the high resolution image) itself, thereby conserving network bandwidth. The output node decodes the target sub-image of the image to be displayed instead of the image to be displayed itself, thereby saving processing resources (i.e., decoding resources) of the output node.
An embodiment of the present application provides an image display method, which may be applied to an input node, and is shown in fig. 4, which is a schematic flow chart of the image display method, and the method may include:
step 401, obtaining an image to be displayed and a segmentation mode of the image to be displayed.
Step 402, dividing the image to be displayed into m×n sub-images according to the dividing method.
For example, the image to be displayed may be a high-resolution/ultra-high-resolution image, the input node may divide the image to be displayed into m×n sub-images, and the dividing manner is adopted to indicate how to divide the image to be displayed into m×n sub-images, so that the input node may divide the image to be displayed into m×n sub-images according to the dividing manner, where M and N are both positive integers, and M and N may be the same or different.
In one possible implementation, the input node may determine the resolution of the image to be displayed, and if the resolution is greater than a preset resolution threshold (the preset resolution threshold may be empirically configured), the input node may divide the image to be displayed into M×N sub-images according to the dividing manner.
Step 403, the segmentation mode is sent to the main control equipment, so that the main control equipment determines the image display position information of the target sub-image and the output node for processing the target sub-image according to the segmentation mode, and sends the attribute information of the target sub-image to the output node; for example, the attribute information may include image display position information of the target sub-image and sub-image identification of the target sub-image.
And step 404, transmitting the coded bit stream for the target sub-image to the output node, so that the output node obtains the target sub-image corresponding to the sub-image identification according to the coded bit stream, and displaying the target sub-image according to the image display position information.
For example, after the input node sends the splitting manner to the master device, the processing procedure of the master device may refer to the foregoing embodiment, which is not described herein. After the input node sends the encoded bit stream to the output node, the processing procedure of the output node may refer to the above embodiment, which is not described herein.
The above technical solutions of the embodiments of the present application are described below with reference to specific application scenarios. Before describing the technical scheme of the application, the following concepts related to the technical scheme of the application are described:
local source: the input node is used as a local source, the image to be displayed can be sent to the input node, and the input node encodes the image to be displayed to obtain an encoded bit stream. The coded bit stream is transmitted to an output node through a network, and the output node decodes the coded bit stream to obtain an image to be displayed. And the output node performs processing such as splicing display and the like on the image to be displayed. The above process may be referred to as a local source process.
Multi-track local source: the input node is used as a local source, and can send the image to be displayed to the input node, if the resolution of the image to be displayed is greater than a certain threshold, the input node can divide the image to be displayed, for example, divide the image to be displayed into m×n sub-images, and independently encode each sub-image to obtain the encoding bit stream of each sub-image. The output node acquires one or more coding bit streams from all coding bit streams as required, decodes the acquired coding bit streams to obtain sub-images of the images to be displayed, and performs processing such as splicing display on the sub-images. The above process may be referred to as a multi-track local source process.
Network source: remote devices such as IPC (IP Camera), DVR (Network Video Recorder ) and the like output the encoded bitstream as network sources. The output node obtains the coded bit stream from the remote equipment, decodes the coded bit stream to obtain an image to be displayed, and performs processing such as splicing display on the image to be displayed. The above-described process may be referred to as a network source process.
Multi-track network source: the remote device acts as a network source and outputs a coded bit stream comprising sub-streams of a plurality of sub-images (one track per sub-stream of each sub-image), each sub-stream of a sub-image being independently decodable, but the output node is unable to obtain the sub-streams of the sub-images independently. The output node obtains the coded bit stream (i.e. the sub-code streams of a plurality of sub-images) from the remote equipment, and can decode one or more sub-code streams in the coded bit stream as required to obtain the sub-images of the image to be displayed, and perform processing such as splicing display on the sub-images. The above process may be referred to as a multi-track network source process.
Decoding and splicing: when the display window of the image to be displayed spans a plurality of output nodes, the output nodes respectively decode the coded bit stream and ensure that a complete picture with low time delay is displayed at the display end.
Application scenario 1: for an application scenario of a multi-track local source, referring to fig. 5A, a network structure schematic diagram of the application scenario is shown, where a distributed system may include a master control device, a network switch, input nodes and output nodes, where the number of input nodes may be at least one, and the number of output nodes may be multiple. Each output node may be connected to at least one display unit, in the figure 2 display units are shown as an example.
The main control device is responsible for managing all input nodes and all output nodes, and can add/delete input nodes in the distributed system and output nodes in the distributed system. The main control equipment is also responsible for the state management of the input nodes, the state management of the output nodes, the layer information management, the communication interaction with user entries such as clients/web and the like, and the functions of the main control equipment are not limited.
The network switch is used as network equipment for independent configuration management, and provides a network communication foundation for the main control equipment, the input nodes and the output nodes. For example, the communication process between the master control device and the input node can be transferred through the network switch; the communication process between the main control equipment and the output node can be transferred through the network switch; the communication process between the input node and the output node can be transferred through the network switch.
The input node is responsible for encoding a received image to be displayed (i.e. an image to be displayed from a signal source), and may also segment the image to be displayed when the resolution of the image to be displayed is greater than a threshold.
The output node is responsible for decoding the coded bit stream, performing processing such as splicing display and the like, and after decoding the coded bit stream, the output node can obtain an image to be displayed and outputs the image to be displayed to the display unit of the splicing display system so that the display unit displays the image to be displayed.
In the above application scenario, referring to fig. 5B, a flowchart of an image display method is shown.
In step 511, the input node acquires the image to be displayed and the segmentation method of the image to be displayed.
For example, an image to be displayed may be input to the input node, and the image to be displayed may be a high resolution/ultra-high resolution image, such as a 4K resolution image or a higher resolution image, or the like.
For example, a local 4K ultra-high definition signal may be connected to the input node, and on this basis, an image to be displayed with a resolution of 4K may be input to the input node, so that the input node obtains the image to be displayed.
For another example, when an ultra-high-resolution signal (for example, a 32K signal) needs to be accessed, the signal may be divided into a plurality of lower-resolution signals (for example, 4K ultra-high-definition signals), and each divided 4K signal is connected to an input node; on this basis, an image to be displayed with 4K resolution may be input to the input node, so that the input node obtains the image to be displayed. With distributed management of input nodes, an unlimited number of input nodes may be supported; for example, 8 input nodes may be deployed. When a 32K ultra-high-resolution signal needs to be accessed, it can be divided into eight 4K ultra-high-definition signals: the 1st 4K signal is input to the input node 1, the 2nd 4K signal is input to the input node 2, and so on, so that each input node obtains a 4K-resolution image to be displayed.
Of course, the foregoing is merely an example, and the input node is not limited thereto, as long as the input node can obtain the image to be displayed, and the resolution of the image to be displayed may be 4K, may be greater than 4K, or may be less than 4K.
For convenience of description, in this application scenario, a processing procedure of one input node for an image to be displayed is taken as an example, and processing procedures of other input nodes for the image to be displayed are similar, which will not be described in detail later.
For example, the input node may acquire a division manner of the image to be displayed, for example, a division manner may be configured in advance at the input node, where the division manner represents dividing the image to be displayed into m×n sub-images, and on the basis of this, the input node determines the division manner configured in advance as the division manner of the image to be displayed.
In step 512, the input node divides the image to be displayed into m×n sub-images according to the dividing method.
For example, the division manner refers to dividing the image to be displayed into M×N sub-images, where M and N are positive integers, and M and N may be the same or different. Based on this, the input node may divide the image to be displayed into M×N sub-images according to the division manner. For example, the image to be displayed is divided into 3×3 sub-images, as shown in fig. 2A, or into 2×4 sub-images, as shown in fig. 2B, or into 4×2 sub-images, as shown in fig. 2C. Of course, the foregoing is merely a few examples and is not limiting.
In one possible implementation, the input node may further determine the resolution of the image to be displayed, and if the resolution is greater than a preset resolution threshold (e.g., 3K, 3.5K, etc.), the input node may divide the image to be displayed into M×N sub-images according to the dividing manner. If the resolution is not greater than the preset resolution threshold, the input node does not need to divide the image to be displayed. For convenience of description, the following assumes that the input node divides the image to be displayed into M×N sub-images.
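Steps 511-512 amount to tiling the image. A minimal sketch, assuming the image is a 2D list of pixel rows, that M counts sub-image rows and N sub-image columns, and that the dimensions divide evenly:

```python
def divide_image(image, m, n):
    """Split an image (2D list of pixel rows) into m*n sub-images, read here
    as m rows of sub-images by n columns; assumes even divisibility."""
    h, w = len(image), len(image[0])
    sh, sw = h // m, w // n  # height and width of each sub-image
    subs = []
    for i in range(m):
        for j in range(n):
            subs.append([row[j * sw:(j + 1) * sw] for row in image[i * sh:(i + 1) * sh]])
    return subs  # subs[0] plays the role of sub-image a1, subs[1] of a2, ...
```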
In step 513, the input node sends the split to the master device.
For example, the master device may send a query message to the input node, such that the input node sends the split pattern to the master device after receiving the query message.
In step 514, the master device receives the segmentation method and stores the segmentation method.
For example, the division manner of the image to be displayed may also be referred to as a multi-track layout manner of the image to be displayed.
In step 515, the main control device divides the image display window of the image to be displayed into m×n display areas according to the dividing manner, and determines the display area of each sub-image, where the display areas may be in one-to-one correspondence with the sub-images. For example, the display area 1 in fig. 2D corresponds to the sub-image a1 of fig. 2A, the display area 2 corresponds to the sub-image a2, and the like, the display area 9 corresponds to the sub-image a 9.
For example, after receiving the display command for the image to be displayed, the master control device may divide the image display window of the image to be displayed into m×n display areas according to the dividing manner.
For example, when the main control device divides the image display window of the image to be displayed into m×n display areas according to the dividing manner, the m×n display areas may be equal-scale display areas.
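The equal-scale division of the image display window in step 515 can be sketched the same way, with the window as an (x, y, width, height) tuple and the areas returned in row-major order so that area i corresponds to sub-image a(i+1); the tuple layout is an illustrative assumption.

```python
def divide_window(window, m, n):
    """Divide the image display window (x, y, w, h) into m*n equal-scale
    display areas, one per sub-image, in row-major order."""
    x, y, w, h = window
    aw, ah = w / n, h / m  # width and height of each display area
    return [(x + j * aw, y + i * ah, aw, ah)
            for i in range(m) for j in range(n)]
```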
In step 516, for the target sub-image (i.e. any sub-image) of the image to be displayed, if there is an overlapping area between the display area of the display unit and the display area of the target sub-image, the main control device determines the output node corresponding to the display unit as the output node for processing the target sub-image, and determines the image display position information of the target sub-image and the segmentation information of the target sub-image according to the overlapping area.
In step 517, the master device sends attribute information of the target sub-image to the output node (i.e. the output node that processes the target sub-image), where the attribute information includes image display position information of the target sub-image, sub-image identifier of the target sub-image, and segmentation information of the target sub-image.
In step 518, the output node obtains attribute information of the target sub-image, where the attribute information includes image display position information of the target sub-image, sub-image identifier of the target sub-image, and segmentation information of the target sub-image.
The output node obtains the first coded bit stream identified for the sub-picture from the input node, step 519.
For example, referring to fig. 2A, after the input node divides the image to be displayed into 9 sub-images, the sub-image a1 is encoded to obtain a first encoded bit stream 1, the sub-image a2 is encoded to obtain a first encoded bit stream 2, and so on. The output node 1 acquires attribute information of the sub-image a1, the attribute information including a sub-image identification of the sub-image a1, and transmits the sub-image identification of the sub-image a1 to the input node. Based on this, the input node sends the first encoded bit stream 1 corresponding to the sub-image identification of sub-image a1 to the output node 1.
In step 520, the output node decodes the first encoded bit stream to obtain a target sub-image of the image to be displayed.
In step 521, the output node displays the target sub-image according to the image display position information of the target sub-image, based on the time stamp of the target sub-image. For example, a display area of the target sub-image is determined according to the image display position information; the target sub-image is displayed in the display area based on the time stamp of the target sub-image.
For example, the input node may add a time stamp to the first encoded bitstream, and the output node may determine a display time of the target sub-image according to the time stamp, and then perform synchronization splicing at the display time.
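The timestamp-based synchronized splicing can be sketched as a per-output-node priority queue keyed by presentation time: each node releases its decoded sub-images only when their timestamps come due, so blocks of the same frame appear simultaneously across display units. The queue design is an illustrative assumption, not the patent's stated mechanism.

```python
import heapq

class SyncQueue:
    """Per-output-node queue that releases decoded sub-images in timestamp
    order; all nodes releasing at the same timestamps yields a spliced frame."""
    def __init__(self):
        self.heap = []

    def push(self, timestamp, sub_image):
        heapq.heappush(self.heap, (timestamp, sub_image))

    def pop_due(self, now):
        """Return all sub-images whose presentation time has arrived."""
        due = []
        while self.heap and self.heap[0][0] <= now:
            due.append(heapq.heappop(self.heap)[1])
        return due
```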
Application scenario 2: for an application scenario for a multi-track network source, referring to fig. 6A, a network structure schematic diagram of the application scenario is shown, where a distributed system may include a master control device, a network switch, a remote device, and output nodes, and the number of the output nodes may be multiple. The remote device serves as a network source and can be an IPC, a DVR, etc. In the application scene, decoding splicing display of the ultra-high-definition camera is supported, namely the remote equipment can be the ultra-high-definition camera, namely the image acquired by the ultra-high-definition camera is displayed.
In the above application scenario, referring to fig. 6B, a flowchart of an image display method is shown.
In step 611, the master device determines the segmentation method of the image to be displayed.
For example, the remote device may obtain a segmentation method of the image to be displayed, and encode the image to be displayed according to the segmentation method, to obtain an encoded bitstream. For example, if the division manner indicates that the image to be displayed is divided into m×n sub-images, the encoded bitstream of the image to be displayed may include contents corresponding to the m×n sub-images, respectively. On the basis, the main control equipment can acquire a segmentation mode aiming at the image to be displayed from the remote equipment. For example, the master device may send a query message to the remote device to cause the remote device to send the split pattern to the master device after receiving the query message.
In step 612, the master control device divides the image display window of the image to be displayed into M x N display areas according to the segmentation mode and determines the display area of each sub-image, where the display areas may correspond to the sub-images one-to-one. For example, after receiving a display command for the image to be displayed, the master control device may divide the image display window of the image to be displayed into M x N display areas according to the segmentation mode.
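As a minimal sketch of step 612 (the row-major layout and integer arithmetic are assumptions for illustration, not details taken from the embodiments), dividing an image display window into M x N display areas can look like:

```python
def split_window(win_x, win_y, win_w, win_h, m, n):
    """Hypothetical sketch: divide an image display window (x, y, w, h) into
    m columns by n rows of display areas, one per sub-image, in row-major
    order. Returns a list of (x, y, w, h) rectangles."""
    areas = []
    cell_w, cell_h = win_w // m, win_h // n
    for row in range(n):
        for col in range(m):
            areas.append((win_x + col * cell_w,
                          win_y + row * cell_h,
                          cell_w, cell_h))
    return areas
```

A 300x200 window split 3 x 2 yields six 100x100 display areas, one per sub-image.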
In step 613, for a target sub-image (i.e., any sub-image) of the image to be displayed, if there is an overlapping area between the display area of a display unit and the display area of the target sub-image, the master control device determines the output node corresponding to that display unit as the output node for processing the target sub-image, and determines the image display position information and the segmentation information of the target sub-image according to the overlapping area.
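The overlap test in step 613 is ordinary axis-aligned rectangle intersection. The following sketch (function names hypothetical, and using the overlap rectangle itself as a simplified stand-in for the image display position information) shows how a sub-image can be assigned to every output node whose display unit it crosses:

```python
def overlap(a, b):
    """Axis-aligned rectangle intersection; rectangles are (x, y, w, h).
    Returns the overlapping area, or None if the rectangles do not overlap."""
    x = max(a[0], b[0])
    y = max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    if x2 <= x or y2 <= y:
        return None
    return (x, y, x2 - x, y2 - y)

def assign_output_nodes(display_units, sub_area):
    """Hypothetical sketch of step 613: a sub-image is processed by every
    output node whose display unit overlaps its display area; the overlap
    region doubles as simplified image display position information."""
    return [(node, overlap(unit, sub_area))
            for node, unit in display_units.items()
            if overlap(unit, sub_area) is not None]
```

A sub-image that straddles two display units is thus assigned to both of their output nodes, each receiving only its own overlap region.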
In step 614, the master control device sends the attribute information of the target sub-image to the output node (i.e., the output node that processes the target sub-image), where the attribute information includes the image display position information of the target sub-image, the sub-image identification of the target sub-image, and the segmentation information of the target sub-image.
In step 615, the output node obtains attribute information of the target sub-image, where the attribute information includes image display position information of the target sub-image, sub-image identification of the target sub-image, and segmentation information of the target sub-image.
In step 616, the output node obtains a second encoded bitstream for the image to be displayed, and decodes the content corresponding to the sub-image identifier in the second encoded bitstream to obtain the target sub-image of the image to be displayed.
For example, after receiving the attribute information of the target sub-image, the output node may obtain a second encoded bitstream for the image to be displayed from a network source (such as the remote device). Unlike the implementation for the multi-track local source, the second encoded bitstream obtained from the network source is a bitstream for the whole image to be displayed, not a bitstream for a single sub-image. For example, assuming that the image to be displayed is divided into 9 sub-images, sub-image a1 to sub-image a9, the output node acquires a second encoded bitstream for the image to be displayed that includes contents corresponding to sub-images a1 to a9, respectively.
For example, assuming that the output node 1 obtains the attribute information of the sub-image a1, then after obtaining the second encoded bitstream, the output node 1 may decode the content in the second encoded bitstream corresponding to the sub-image identifier of sub-image a1 (obtained from the attribute information), to obtain the sub-image a1 of the image to be displayed.
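The selective decoding in step 616 can be sketched as follows. The container representation (a list of identifier/payload pairs) and the injected `decode` callback are hypothetical simplifications; a real bitstream would use a codec-specific structure such as independently decodable tiles:

```python
def decode_target_sub_image(second_bitstream, sub_image_id, decode):
    """Hypothetical sketch of step 616: the second encoded bitstream carries
    content for every sub-image; an output node decodes only the portion
    matching its assigned sub-image identifier and skips the rest, which is
    how decoding resources are saved."""
    for item_id, payload in second_bitstream:
        if item_id == sub_image_id:
            return decode(payload)  # decode only the assigned sub-image
    raise KeyError(f"sub-image {sub_image_id!r} not found in bitstream")
```

Each output node thus pays the decoding cost of one sub-image rather than of the full ultra-high-definition picture.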
In step 617, the output node displays the target sub-image according to the image display position information of the target sub-image based on the time stamp of the target sub-image. For example, the display area of the target sub-image is determined according to the image display position information, and the target sub-image is displayed in that display area based on the time stamp of the target sub-image.
In the above embodiments, the target sub-image may be displayed on a display window of the display device, for example, the output node sends the target sub-image and/or a data block of the target sub-image to the display unit, so that the display unit displays the target sub-image on the display window of the display device, and the type of the display device is not limited.
According to the above technical solutions, the output node can obtain a sub-image of the image to be displayed and display that sub-image, which saves both network bandwidth and the processing resources of the output node. For example, the sub-image of the image to be displayed is transmitted to the output node, rather than the image to be displayed (i.e., the high-resolution image) itself, so network bandwidth is saved. For another example, the output node decodes the sub-image of the image to be displayed instead of the whole image to be displayed, so the processing resources of the output node are saved. The master control device may determine a target sub-window for displaying the sub-image and transmit the attribute information of the target sub-window to the output node, so that the output node displays the sub-image according to that attribute information, i.e., at the correct position. Moreover, by splicing the signal sources, a complete ultra-high-definition signal picture can be displayed on the television wall. This approach optimizes decoding resource allocation: the multi-track decoding scheme reduces the granularity of the allocated decoding resources, and decoding resources are allocated only for the output nodes spanned by each independent sub-image, so that decoding resources are utilized to the maximum. It also optimizes network bandwidth performance: the single-channel code stream of an ultra-high-definition local source is divided into multiple code streams for transmission, reducing the risk of a single network port becoming a bandwidth bottleneck.
In the above technical solutions, the execution sequence is only an example given for convenience of description; in practical applications, the execution sequence between steps may be changed, and it is not limited. Moreover, in other embodiments, the steps of the corresponding methods need not be performed in the order shown and described herein, and the methods may include more or fewer steps than described herein. Furthermore, an individual step described in this specification may, in other embodiments, be split into multiple steps, and multiple steps described in this specification may, in other embodiments, be combined into a single step.
Based on the same application concept as the above method, an image display apparatus is further provided in this embodiment of the present application, as shown in fig. 7A, which is a structural diagram of the image display apparatus, and includes:
an acquisition module 711 for acquiring attribute information of the target sub-image; wherein the attribute information of the target sub-image comprises image display position information and a sub-image identification of the target sub-image; and acquiring a target sub-image corresponding to the sub-image identification; the target sub-image is a sub-image in an image to be displayed, the image to be displayed is divided into M x N sub-images, and M and N are positive integers;
And the display module 712 is configured to display the target sub-image according to the image display position information.
The acquiring module 711 is specifically configured to, when acquiring the target sub-image corresponding to the sub-image identification: acquire a first encoded bitstream for the sub-image identification from an input node, and decode the first encoded bitstream to obtain the target sub-image corresponding to the sub-image identification; or, obtain a second encoded bitstream for the image to be displayed from a remote device, and decode the content corresponding to the sub-image identification in the second encoded bitstream to obtain the target sub-image corresponding to the sub-image identification.
The attribute information of the target sub-image also comprises segmentation information of the target sub-image; the display module 712 is specifically configured to, when displaying the target sub-image according to the image display position information:
dividing the target sub-image into at least two data blocks according to the segmentation information of the target sub-image, and selecting a target data block to be displayed from the at least two data blocks;
and displaying the target data block according to the image display position information.
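The data-block selection described above can be sketched as follows. The vertical-cut layout and the index-based selection are assumptions for illustration; the embodiments only require that the segmentation information determine the blocks and that each output node display the block falling on its own display unit:

```python
def split_into_blocks(sub_w, sub_h, cuts):
    """Hypothetical sketch: split a sub-image of size (sub_w, sub_h) into
    vertical data blocks at the given x cut positions, standing in for the
    segmentation information of the target sub-image. Returns (x, y, w, h)
    rectangles in sub-image coordinates."""
    edges = [0] + list(cuts) + [sub_w]
    return [(edges[i], 0, edges[i + 1] - edges[i], sub_h)
            for i in range(len(edges) - 1)]

def pick_target_block(blocks, index):
    """Select the data block this output node is responsible for displaying."""
    return blocks[index]
```

For a sub-image that straddles two display units, each unit's output node would display one of the resulting blocks.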
The display module 712 is specifically configured to, when displaying the target sub-image according to the image display position information: determining an image display window of the target sub-image according to the image display position information;
And displaying the target sub-image on the image display window based on the time stamp of the target sub-image.
Based on the same application concept as the above method, an image display apparatus is further provided in this embodiment of the present application, as shown in fig. 7B, which is a structural diagram of the image display apparatus, and includes:
an obtaining module 721, configured to obtain a segmentation mode of an image to be displayed, where the segmentation mode indicates that the image to be displayed is segmented into M x N sub-images, where M and N are positive integers; a determining module 722, configured to determine, for a target sub-image in the image to be displayed, image display position information of the target sub-image and an output node for processing the target sub-image according to the segmentation mode; and a sending module 723, configured to send attribute information of the target sub-image to the output node, where the attribute information includes the image display position information and a sub-image identifier of the target sub-image, so that the output node displays the target sub-image corresponding to the sub-image identifier according to the image display position information.
The acquiring module 721 is specifically configured to:
obtain the segmentation mode of the image to be displayed for a local source from an input node; or,
obtain the segmentation mode of the image to be displayed for a network source from a remote device.
The determining module 722 is specifically configured to, when determining the image display position information of the target sub-image and the output node for processing the target sub-image according to the segmentation mode:
determining a display area of the target sub-image according to the segmentation mode;
if the display area of the display unit and the display area of the target sub-image have an overlapping area, determining an output node corresponding to the display unit as an output node for processing the target sub-image;
and determining the image display position information of the target sub-image according to the overlapping area.
The determining module 722 is further configured to: determining segmentation information of the target sub-image according to the overlapping area; wherein, the attribute information of the target sub-image further comprises segmentation information of the target sub-image.
Based on the same application concept as the above method, an output node is further provided in the embodiments of the present application. From a hardware level, a schematic diagram of the hardware architecture of the output node provided in the embodiments of the present application may be shown in fig. 8A. The output node may include: a processor 811 and a machine-readable storage medium 812, the machine-readable storage medium 812 storing machine-executable instructions executable by the processor 811; the processor 811 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of the present application. For example, the processor 811 is configured to execute machine-executable instructions to implement the following steps:
Acquiring attribute information of a target sub-image; wherein the attribute information of the target sub-image comprises image display position information and a sub-image identification of the target sub-image;
acquiring a target sub-image corresponding to the sub-image identification; the target sub-image is a sub-image in an image to be displayed, the image to be displayed is divided into M x N sub-images, and M and N are positive integers;
and displaying the target sub-image according to the image display position information.
Based on the same application concept as the above method, the embodiment of the present application further provides a machine-readable storage medium, where the machine-readable storage medium stores a number of computer instructions, where the computer instructions can implement the method disclosed in the above example of the present application when executed by a processor.
For example, the computer instructions, when executed by a processor, can implement the steps of:
acquiring attribute information of a target sub-image; wherein the attribute information of the target sub-image comprises image display position information and a sub-image identification of the target sub-image;
acquiring a target sub-image corresponding to the sub-image identification; the target sub-image is a sub-image in an image to be displayed, the image to be displayed is divided into M x N sub-images, and M and N are positive integers;
And displaying the target sub-image according to the image display position information.
Based on the same application concept as the above method, a master control device is further provided in the embodiments of the present application, and from a hardware level, a schematic diagram of a hardware architecture of the master control device may be shown in fig. 8B. The master device may include: a processor 821 and a machine-readable storage medium 822, said machine-readable storage medium 822 storing machine-executable instructions executable by said processor 821; the processor 821 is configured to execute machine-executable instructions to implement the methods disclosed in the above examples of the present application. For example, the processor 821 is configured to execute machine-executable instructions to implement the following steps:
obtaining a segmentation mode of an image to be displayed, wherein the segmentation mode represents that the image to be displayed is segmented into M x N sub-images; wherein M is a positive integer, and N is a positive integer;
determining image display position information of a target sub-image in the image to be displayed and an output node for processing the target sub-image according to the segmentation mode;
and transmitting the attribute information of the target sub-image to the output node, wherein the attribute information comprises the image display position information and a sub-image identifier of the target sub-image, so that the output node displays the target sub-image corresponding to the sub-image identifier according to the image display position information.
Based on the same application concept as the above method, the embodiment of the present application further provides a machine-readable storage medium, where the machine-readable storage medium stores a number of computer instructions, where the computer instructions can implement the method disclosed in the above example of the present application when executed by a processor.
For example, the computer instructions, when executed by a processor, can implement the steps of:
obtaining a segmentation mode of an image to be displayed, wherein the segmentation mode represents that the image to be displayed is segmented into M x N sub-images; wherein M is a positive integer, and N is a positive integer;
determining image display position information of a target sub-image in the image to be displayed and an output node for processing the target sub-image according to the segmentation mode;
and transmitting the attribute information of the target sub-image to the output node, wherein the attribute information comprises the image display position information and a sub-image identifier of the target sub-image, so that the output node displays the target sub-image corresponding to the sub-image identifier according to the image display position information.
By way of example, the machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information, such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid-state drive, any type of storage disk (e.g., an optical disk, a DVD, etc.), a similar storage medium, or a combination thereof.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Moreover, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (12)

1. An image display method, the method comprising:
acquiring attribute information of a target sub-image; wherein the attribute information of the target sub-image comprises image display position information and a sub-image identification of the target sub-image;
acquiring a target sub-image corresponding to the sub-image identification; the target sub-image is a sub-image in an image to be displayed, the image to be displayed is divided into M x N sub-images, and M and N are positive integers;
Displaying the target sub-image according to the image display position information;
the main control equipment determines the image display position information of the target sub-image and an output node for processing the target sub-image according to the segmentation mode of the image to be displayed; the method specifically comprises the following steps: determining a display area of the target sub-image according to the segmentation mode; if the display area of the display unit and the display area of the target sub-image have an overlapping area, determining an output node corresponding to the display unit as an output node for processing the target sub-image; determining image display position information of the target sub-image according to the overlapping area;
the acquiring the target sub-image corresponding to the sub-image identification comprises: and sending the sub-image identification to an input node, acquiring a first coded bit stream aiming at the sub-image identification from the input node, and decoding the first coded bit stream to obtain a target sub-image corresponding to the sub-image identification.
2. The method of claim 1, wherein,
the attribute information of the target sub-image also comprises segmentation information of the target sub-image;
The displaying the target sub-image according to the image display position information includes:
dividing the target sub-image into at least two data blocks according to the dividing information of the target sub-image, and selecting a target data block to be displayed from the at least two data blocks;
and displaying the target data block according to the image display position information.
3. The method of claim 1, wherein,
the displaying the target sub-image according to the image display position information includes:
determining an image display window of the target sub-image according to the image display position information;
and displaying the target sub-image on the image display window based on the time stamp of the target sub-image.
4. An image display method, the method comprising:
obtaining a segmentation mode of an image to be displayed, wherein the segmentation mode represents that the image to be displayed is segmented into M x N sub-images; wherein M is a positive integer, and N is a positive integer;
determining image display position information of a target sub-image in the image to be displayed and an output node for processing the target sub-image according to the segmentation mode; the method specifically comprises the following steps: determining a display area of the target sub-image according to the segmentation mode; if the display area of the display unit and the display area of the target sub-image have an overlapping area, determining an output node corresponding to the display unit as an output node for processing the target sub-image; determining image display position information of the target sub-image according to the overlapping area;
Transmitting attribute information of the target sub-image to the output node, wherein the attribute information comprises the image display position information and a sub-image identifier of the target sub-image, so that the output node displays the target sub-image corresponding to the sub-image identifier according to the image display position information;
the output node sends the sub-image identification to an input node, obtains a first coding bit stream aiming at the sub-image identification from the input node, and decodes the first coding bit stream to obtain a target sub-image corresponding to the sub-image identification.
5. The method of claim 4, wherein,
the method for acquiring the segmentation mode of the image to be displayed comprises the following steps:
obtaining the segmentation mode of the image to be displayed for a local source from an input node; or,
the segmentation mode of the image to be displayed for the network source is acquired from the remote device.
6. The method according to claim 4, wherein the method further comprises:
determining segmentation information of the target sub-image according to the overlapping area;
wherein, the attribute information of the target sub-image further comprises segmentation information of the target sub-image.
7. An image display method, the method comprising:
acquiring an image to be displayed and a segmentation mode of the image to be displayed;
dividing the image to be displayed into M.N sub-images according to the dividing mode;
the segmentation mode is sent to a main control device, so that the main control device determines image display position information of a target sub-image and an output node for processing the target sub-image according to the segmentation mode, and sends attribute information of the target sub-image to the output node, wherein the attribute information comprises the image display position information and a sub-image identifier of the target sub-image; the main control device determines image display position information of a target sub-image and an output node for processing the target sub-image according to the segmentation mode, and comprises: determining a display area of the target sub-image according to the segmentation mode; if the display area of the display unit and the display area of the target sub-image have an overlapping area, determining an output node corresponding to the display unit as an output node for processing the target sub-image; determining image display position information of the target sub-image according to the overlapping area;
Receiving the sub-image identification sent by the output node, sending a coded bit stream for the target sub-image to the output node based on the sub-image identification, so that the output node obtains the target sub-image corresponding to the sub-image identification according to the coded bit stream, and displaying the target sub-image according to the image display position information; the output node decodes the first coded bit stream to obtain a target sub-image corresponding to the sub-image identifier.
8. The method of claim 7, wherein,
the dividing the image to be displayed into m×n sub-images according to the dividing manner includes:
determining the resolution of the image to be displayed; if the resolution is greater than a preset resolution threshold, dividing the image to be displayed into M.N sub-images according to the dividing mode.
9. An image display device, the device comprising:
the acquisition module is used for acquiring attribute information of the target sub-image; wherein the attribute information of the target sub-image comprises image display position information and a sub-image identification of the target sub-image; and acquiring a target sub-image corresponding to the sub-image identification; the target sub-image is a sub-image in an image to be displayed, the image to be displayed is divided into M x N sub-images, and M and N are positive integers;
The display module is used for displaying the target sub-image according to the image display position information;
the main control equipment determines the image display position information of the target sub-image and an output node for processing the target sub-image according to the segmentation mode of the image to be displayed; the method specifically comprises the following steps: determining a display area of the target sub-image according to the segmentation mode; if the display area of the display unit and the display area of the target sub-image have an overlapping area, determining an output node corresponding to the display unit as an output node for processing the target sub-image; determining image display position information of the target sub-image according to the overlapping area;
the acquiring module is specifically configured to, when acquiring the target sub-image corresponding to the sub-image identifier: and sending the sub-image identification to an input node, acquiring a first coded bit stream aiming at the sub-image identification from the input node, and decoding the first coded bit stream to obtain a target sub-image corresponding to the sub-image identification.
10. An image display device, the device comprising:
The image display device comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a segmentation mode of an image to be displayed, wherein the segmentation mode represents that the image to be displayed is segmented into M.N sub-images; wherein M is a positive integer, and N is a positive integer;
the determining module is used for determining the image display position information of the target sub-image and the output node for processing the target sub-image according to the dividing mode aiming at the target sub-image in the image to be displayed; the method specifically comprises the following steps: determining a display area of the target sub-image according to the segmentation mode; if the display area of the display unit and the display area of the target sub-image have an overlapping area, determining an output node corresponding to the display unit as an output node for processing the target sub-image; determining image display position information of the target sub-image according to the overlapping area;
the sending module is used for sending the attribute information of the target sub-image to the output node, wherein the attribute information comprises the image display position information and a sub-image identifier of the target sub-image, so that the output node displays the target sub-image corresponding to the sub-image identifier according to the image display position information;
The output node sends the sub-image identification to an input node, obtains a first coding bit stream aiming at the sub-image identification from the input node, and decodes the first coding bit stream to obtain a target sub-image corresponding to the sub-image identification.
11. An output node, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine-executable instructions to perform the steps of:
acquiring attribute information of a target sub-image; wherein the attribute information of the target sub-image comprises image display position information and a sub-image identification of the target sub-image;
acquiring a target sub-image corresponding to the sub-image identification; the target sub-image is a sub-image in an image to be displayed, the image to be displayed is divided into M x N sub-images, and M and N are positive integers;
displaying the target sub-image according to the image display position information;
wherein the master control apparatus determines the image display position information of the target sub-image and the output node for processing the target sub-image according to the segmentation mode of the image to be displayed; specifically: determining a display area of the target sub-image according to the segmentation mode; if the display area of a display unit and the display area of the target sub-image have an overlapping area, determining the output node corresponding to the display unit as the output node for processing the target sub-image; and determining the image display position information of the target sub-image according to the overlapping area;
the acquiring of the target sub-image corresponding to the sub-image identifier comprises: sending the sub-image identifier to an input node, acquiring a first coded bit stream for the sub-image identifier from the input node, and decoding the first coded bit stream to obtain the target sub-image corresponding to the sub-image identifier.
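The output-node flow in claim 11 — request the coded bit stream by sub-image identifier, decode it, display the result at the given position — can be sketched as follows. Everything here is a stand-in: `InputNode`, the trivial `decode`, and the `screen` dictionary are illustrative placeholders, and a real system would use an actual video decoder (e.g. H.264/H.265) and display hardware:

```python
class InputNode:
    """Stand-in input node holding one encoded bit stream per sub-image ID."""
    def __init__(self, streams):
        self._streams = streams  # sub_image_id -> bytes

    def get_bitstream(self, sub_image_id):
        # Returns the "first coded bit stream" for the requested identifier.
        return self._streams[sub_image_id]

def decode(bitstream):
    # Placeholder decoder: simply interprets the bytes as text.
    return bitstream.decode("utf-8")

def display_target_sub_image(attr, input_node, screen):
    """attr carries the attribute information from the master control apparatus:
    the sub-image identifier and the image display position information."""
    sub_id = attr["sub_image_id"]
    bitstream = input_node.get_bitstream(sub_id)  # ask the input node by ID
    image = decode(bitstream)                     # obtain the target sub-image
    screen[attr["position"]] = image              # display at the given position
    return image
```

The design point the claim captures is that the output node never receives pixels from the master control apparatus, only the identifier and position; the pixel data travels directly from the input node as a coded bit stream.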
12. A master control apparatus, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine-executable instructions to perform the steps of:
obtaining a segmentation mode of an image to be displayed, wherein the segmentation mode indicates that the image to be displayed is segmented into M x N sub-images, where M and N are positive integers;
determining image display position information of a target sub-image in the image to be displayed and an output node for processing the target sub-image according to the segmentation mode; specifically: determining a display area of the target sub-image according to the segmentation mode; if the display area of a display unit and the display area of the target sub-image have an overlapping area, determining the output node corresponding to the display unit as the output node for processing the target sub-image; and determining the image display position information of the target sub-image according to the overlapping area;
sending attribute information of the target sub-image to the output node, wherein the attribute information comprises the image display position information and a sub-image identifier of the target sub-image, so that the output node displays the target sub-image corresponding to the sub-image identifier according to the image display position information;
wherein the output node sends the sub-image identifier to an input node, obtains a first coded bit stream for the sub-image identifier from the input node, and decodes the first coded bit stream to obtain the target sub-image corresponding to the sub-image identifier.
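The M x N segmentation mode obtained in claim 12 determines each sub-image's display area on the overall canvas. A minimal sketch of that grid computation, assuming M counts rows and N counts columns (the patent does not fix which dimension M and N refer to, and the `sub_r_c` identifier scheme is an illustrative assumption):

```python
def split_into_sub_images(width, height, m, n):
    """Divide a width x height image into an m-row, n-column grid.
    Returns sub_image_id -> (x, y, w, h) display area, row-major.
    Assumes width and height divide evenly by n and m respectively."""
    sub_w, sub_h = width // n, height // m
    sub_images = {}
    for r in range(m):
        for c in range(n):
            sub_id = f"sub_{r}_{c}"  # hypothetical identifier scheme
            sub_images[sub_id] = (c * sub_w, r * sub_h, sub_w, sub_h)
    return sub_images
```

For example, a 1920x1080 image with M = N = 2 yields four 960x540 sub-images; each sub-image's rectangle is then matched against the display units' areas to pick the output nodes that will process it.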
CN202110157083.5A 2020-03-12 2021-02-04 Image display method, device and equipment Active CN113395564B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020101709580 2020-03-12
CN202010170958 2020-03-12

Publications (2)

Publication Number Publication Date
CN113395564A CN113395564A (en) 2021-09-14
CN113395564B true CN113395564B (en) 2023-05-26

Family

ID=77616796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110157083.5A Active CN113395564B (en) 2020-03-12 2021-02-04 Image display method, device and equipment

Country Status (1)

Country Link
CN (1) CN113395564B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114170940A (en) * 2021-12-02 2022-03-11 武汉华星光电技术有限公司 Display device, display system and distributed function system
CN115426518B (en) * 2022-08-09 2023-08-25 杭州海康威视数字技术股份有限公司 Display control system, image display method and LED display control system

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN102547205A (en) * 2011-12-21 2012-07-04 广东威创视讯科技股份有限公司 Method and system for displaying ultra-high resolution image
CN109218656B (en) * 2017-06-30 2021-03-26 杭州海康威视数字技术股份有限公司 Image display method, device and system
CN108040245A (en) * 2017-11-08 2018-05-15 深圳康得新智能显示科技有限公司 Methods of exhibiting, system and the device of 3-D view
CN109062525B (en) * 2018-07-11 2021-11-19 深圳市东微智能科技股份有限公司 Data processing method and device of tiled display system and computer equipment
CN109669654A (en) * 2018-12-22 2019-04-23 威创集团股份有限公司 A kind of combination display methods and device
CN109640026B (en) * 2018-12-26 2021-10-08 威创集团股份有限公司 High-resolution signal source spliced wall display method, device and equipment

Also Published As

Publication number Publication date
CN113395564A (en) 2021-09-14

Similar Documents

Publication Publication Date Title
CN113395564B (en) Image display method, device and equipment
US6559846B1 (en) System and process for viewing panoramic video
US20210227236A1 (en) Scalability of multi-directional video streaming
US11089283B2 (en) Generating time slice video
KR20160079357A (en) Method for sending video in region of interest from panoramic-video, server and device
US20200259880A1 (en) Data processing method and apparatus
US10924782B2 (en) Method of providing streaming service based on image segmentation and electronic device supporting the same
CN109218739B (en) Method, device and equipment for switching visual angle of video stream and computer storage medium
CN111818295B (en) Image acquisition method and device
KR100746005B1 (en) Apparatus and method for managing multipurpose video streaming
CN102550035B (en) For the method and apparatus for performing video conference and being encoded to video flowing
US20100186464A1 (en) Laundry refresher unit and laundry treating apparatus having the same
CN113259729B (en) Data switching method, server, system and storage medium
CN107734278B (en) Video playback method and related device
JP2007013697A (en) Image receiver and image receiving method
US11025880B2 (en) ROI-based VR content streaming server and method
CN114157903A (en) Redirection method, redirection device, redirection equipment, storage medium and program product
CN116264619A (en) Resource processing method, device, server, terminal, system and storage medium
KR102218187B1 (en) Apparatus and method for providing video contents
US20170048532A1 (en) Processing encoded bitstreams to improve memory utilization
US9883194B2 (en) Multiple bit rate video decoding
JP6375902B2 (en) Image transmission control device, image transmission control method, and image transmission control program
CN112738056B (en) Encoding and decoding method and system
WO2016088409A1 (en) Image transmission device, image transmission method, image transmission program, image transmission control device, image transmission control method, and image transmission control program
CN116980688A (en) Video processing method, apparatus, computer, readable storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant