CN113395564A - Image display method, device and equipment - Google Patents


Info

Publication number
CN113395564A
CN113395564A
Authority
CN
China
Prior art keywords
image, sub, target sub, displayed, output node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110157083.5A
Other languages
Chinese (zh)
Other versions
CN113395564B (en)
Inventor
雷日勇
黄玮
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Publication of CN113395564A
Application granted
Publication of CN113395564B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1446Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display display composed of modules, e.g. video walls
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The application provides an image display method, apparatus, and device, where the method includes: acquiring attribute information of a target sub-image, the attribute information including image display position information and a sub-image identifier of the target sub-image; acquiring the target sub-image corresponding to the sub-image identifier, where the target sub-image is one sub-image of an image to be displayed and the image to be displayed is divided into M × N sub-images; and displaying the target sub-image according to the image display position information. With this technical solution, both network bandwidth and the processing resources of the output node can be saved in high-resolution/ultra-high-resolution scenarios.

Description

Image display method, device and equipment
Technical Field
The present application relates to the field of video surveillance, and in particular, to a method, an apparatus, and a device for displaying an image.
Background
With the rapid development of video technology, image resolutions continue to increase, and a tiled display system can be adopted to display a high-resolution image. A tiled display system is a display system implemented with hardware and software that is capable of displaying high-resolution/ultra-high-resolution images, and is generally formed by combining a plurality of display units. As a modern video tool, tiled display systems have been widely applied in various fields, such as broadcast television performance systems and communication network management systems.
To display a high-resolution image, the image needs to be sent to each display unit of the tiled display system, and the display units display the sub-images of the high-resolution image. For example, display unit a displays sub-image 1 and sub-image 2 of the high-resolution image on the display device, and display unit b displays sub-image 3 and sub-image 4. Through the cooperation of display unit a and display unit b, the complete high-resolution image can be displayed on the display device.
In this manner, each display unit displays only a portion of the sub-images, yet the entire high-resolution image is transmitted to every display unit, wasting both network bandwidth and the processing resources of the display units.
Disclosure of Invention
In view of the above, the present application provides an image display method, including:
acquiring attribute information of a target sub-image; the attribute information of the target sub-image comprises image display position information and a sub-image identifier of the target sub-image;
acquiring a target sub-image corresponding to the sub-image identifier; the target sub-image is a sub-image in an image to be displayed, the image to be displayed is divided into M × N sub-images, and M and N are positive integers;
and displaying the target sub-image according to the image display position information.
The application provides an image display method, which comprises the following steps:
acquiring a segmentation mode of an image to be displayed, wherein the segmentation mode represents that the image to be displayed is segmented into M × N sub-images; wherein M is a positive integer, and N is a positive integer;
for a target sub-image in the image to be displayed, determining, according to the segmentation mode, image display position information of the target sub-image and an output node for processing the target sub-image;
and sending the attribute information of the target sub-image to the output node, wherein the attribute information comprises the image display position information and the sub-image identifier of the target sub-image, so that the output node displays the target sub-image corresponding to the sub-image identifier according to the image display position information.
The application provides an image display method, which comprises the following steps:
acquiring an image to be displayed and a segmentation mode of the image to be displayed;
dividing the image to be displayed into M x N sub-images according to the division mode;
sending the division mode to a main control device, so that the main control device determines image display position information of a target sub-image and an output node for processing the target sub-image according to the division mode, and sends attribute information of the target sub-image to the output node, wherein the attribute information comprises the image display position information and a sub-image identifier of the target sub-image;
and sending the coded bit stream aiming at the target sub-image to the output node, so that the output node obtains the target sub-image corresponding to the sub-image identifier according to the coded bit stream, and displaying the target sub-image according to the image display position information.
The present application provides an image display apparatus, the apparatus comprising:
the acquisition module is used for acquiring the attribute information of the target sub-image; the attribute information of the target sub-image comprises image display position information and a sub-image identifier of the target sub-image; acquiring a target sub-image corresponding to the sub-image identifier; the target sub-image is a sub-image in an image to be displayed, the image to be displayed is divided into M × N sub-images, and M and N are positive integers;
and the display module is used for displaying the target sub-image according to the image display position information.
The present application provides an image display apparatus, the apparatus comprising:
the image display device comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a segmentation mode of an image to be displayed, and the segmentation mode represents that the image to be displayed is segmented into M × N sub-images; wherein M is a positive integer, and N is a positive integer;
the determining module is used for determining, for a target sub-image in the image to be displayed, image display position information of the target sub-image and an output node for processing the target sub-image according to the segmentation mode;
and the sending module is used for sending the attribute information of the target sub-image to the output node, wherein the attribute information comprises the image display position information and the sub-image identifier of the target sub-image, so that the output node displays the target sub-image corresponding to the sub-image identifier according to the image display position information.
The present application provides an output node comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
acquiring attribute information of a target sub-image; the attribute information of the target sub-image comprises image display position information and a sub-image identifier of the target sub-image;
acquiring a target sub-image corresponding to the sub-image identifier; the target sub-image is a sub-image in an image to be displayed, the image to be displayed is divided into M × N sub-images, and M and N are positive integers;
and displaying the target sub-image according to the image display position information.
The application provides a master control device, includes: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
acquiring a segmentation mode of an image to be displayed, wherein the segmentation mode represents that the image to be displayed is segmented into M × N sub-images; wherein M is a positive integer, and N is a positive integer;
for a target sub-image in the image to be displayed, determining, according to the segmentation mode, image display position information of the target sub-image and an output node for processing the target sub-image;
and sending the attribute information of the target sub-image to the output node, wherein the attribute information comprises the image display position information and the sub-image identifier of the target sub-image, so that the output node displays the target sub-image corresponding to the sub-image identifier according to the image display position information.
According to the above technical solutions, the output node obtains and displays only the target sub-image of the image to be displayed, so that in high-resolution/ultra-high-resolution scenarios both network bandwidth and the processing resources of the output node can be saved. For example, the target sub-image of the image to be displayed, rather than the image to be displayed (i.e., the high-resolution image) itself, is transmitted to the output node, thereby conserving network bandwidth; and the output node decodes only the target sub-image rather than the entire image to be displayed, thereby saving its processing (i.e., decoding) resources.
Drawings
To more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application, and those skilled in the art can obtain other drawings according to these drawings.
FIG. 1 is a flow chart of an image display method in one embodiment of the present application;
FIGS. 2A-2C are schematic diagrams of image segmentation in one embodiment of the present application;
FIGS. 2D-2F are schematic diagrams of window division in one embodiment of the present application;
FIG. 3 is a flow chart of an image display method in another embodiment of the present application;
FIG. 4 is a flow chart of an image display method in another embodiment of the present application;
FIG. 5A is a diagram illustrating an application scenario of a multi-track local source in one embodiment of the present application;
FIG. 5B is a flowchart of an image display method for a multi-track local source in one embodiment of the present application;
FIG. 6A is a schematic diagram of an application scenario of a multi-track network source in one embodiment of the present application;
FIG. 6B is a flowchart of an image display method for a multi-track network source in one embodiment of the present application;
fig. 7A and 7B are structural diagrams of an image display apparatus in an embodiment of the present application;
FIG. 8A is a block diagram of an output node in one embodiment of the present application;
fig. 8B is a block diagram of a master device in an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein is meant to encompass any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Moreover, depending on the context, the word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining".
An embodiment of the present application provides an image display method, which may be applied to a master control device. Referring to fig. 1, which is a schematic flowchart of the image display method, the method may include:
in step 101, a segmentation method of an image to be displayed is obtained, where the segmentation method may represent that the image to be displayed is segmented into M × N sub-images, where, for example, M may be a positive integer, and N may be a positive integer.
The M × N sub-images may refer to: the method comprises the steps of dividing an image to be displayed into M sub-images in the width direction of the image to be displayed, and dividing the image to be displayed into N sub-images in the height direction of the image to be displayed. Or, in the height direction of the image to be displayed, the image to be displayed is divided into M sub-images, and in the width direction of the image to be displayed, the image to be displayed is divided into N sub-images. For convenience of description, in the following embodiments, the image to be displayed is divided into M sub-images in the width direction of the image to be displayed, and the image to be displayed is divided into N sub-images in the height direction of the image to be displayed.
For example, the image to be displayed may be a high-resolution/ultra-high-resolution image, and when the image to be displayed is displayed, a sub-image of the image to be displayed may be displayed. Based on this, the image to be displayed may be divided into M × N sub-images, and how to divide the image to be displayed into M × N sub-images is expressed in a division manner, where M and N are both positive integers, and M and N may be the same or different.
For example, when the division mode indicates that the image to be displayed is divided into 3 × 3 sub-images, referring to fig. 2A, the image to be displayed may be divided into 9 sub-images, i.e., sub-images a1 to a9.
For another example, when the division mode indicates that the image to be displayed is divided into 2 × 4 sub-images, referring to fig. 2B, the image to be displayed may be divided into 8 sub-images, i.e., sub-images a1 to a8.
For another example, when the division mode indicates that the image to be displayed is divided into 4 × 2 sub-images, referring to fig. 2C, the image to be displayed may be divided into 8 sub-images, i.e., sub-images a1 to a8.
Of course, the above are only a few examples of the division method, and the division method is not limited.
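As a purely illustrative sketch (not part of the patent text), the M × N division described above can be expressed as array slicing; this example assumes numpy, row-major ordering of the sub-images as in Fig. 2A, and image dimensions evenly divisible by M and N:

```python
import numpy as np

def split_image(image, m, n):
    """Divide an image array into m x n sub-images.

    m sub-images across the width and n down the height, matching the
    convention adopted in the description. Assumes width % m == 0 and
    height % n == 0 for simplicity.
    """
    h, w = image.shape[:2]
    tile_w, tile_h = w // m, h // n
    subs = []
    for row in range(n):          # height direction: n sub-images
        for col in range(m):      # width direction: m sub-images
            subs.append(image[row * tile_h:(row + 1) * tile_h,
                              col * tile_w:(col + 1) * tile_w])
    return subs  # sub-images a1 .. a(m*n), row-major as in Fig. 2A

# a 3 x 3 division of a 300 x 300 image yields 9 sub-images of 100 x 100
tiles = split_image(np.zeros((300, 300)), 3, 3)
```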
In one possible embodiment, the master control device may obtain the segmentation mode of the image to be displayed for the local source from the input node. For example, in an application scenario of a local source (for the introduction of the local source, refer to the following embodiments), the input node may obtain a segmentation mode of an image to be displayed, and segment the image to be displayed into M × N sub-images according to the segmentation mode, and on this basis, the master control device may send a query message to the input node, so that the input node sends the segmentation mode to the master control device after receiving the query message.
In another possible embodiment, the master device may obtain the segmentation mode of the image to be displayed for the network source from the remote device. For example, in an application scenario of a network source (for introduction of the network source, refer to the following embodiments), the remote device may obtain a segmentation mode of an image to be displayed, and encode the image to be displayed according to the segmentation mode to obtain an encoded bitstream. For example, if the division manner indicates that the image to be displayed is divided into M × N sub-images, the encoded bitstream of the image to be displayed may include contents corresponding to the M × N sub-images. On this basis, the master control device may send a query message to the remote device, so that the remote device sends the split mode to the master control device after receiving the query message.
In step 102, for a target sub-image in the image to be displayed, image display position information of the target sub-image and an output node for processing the target sub-image are determined according to the segmentation mode. For example, the target sub-image may be any sub-image in the image to be displayed; that is, for each sub-image, the image display position information of the sub-image and the output node for processing the sub-image may be determined according to the division manner.
In one possible embodiment, the display area of the target sub-image may be determined according to the division manner. If the display area of a display unit and the display area of the target sub-image have an overlap region, the output node corresponding to that display unit may be determined as the output node for processing the target sub-image, and the image display position information of the target sub-image may be determined according to the overlap region. Further, after the overlap region is determined, the division information of the target sub-image may also be determined according to the overlap region.
For example, the image display window of the image to be displayed may be divided into M × N display regions according to the dividing manner, and as illustrated in fig. 2A, the dividing manner indicates that the image to be displayed is divided into 3 × 3 sub-images, and then the image display window of the image to be displayed may be divided into 3 × 3 display regions. Referring to fig. 2D, the main control device may obtain coordinate information of the image display window, and divide the image display window into 3 × 3 display regions, where the display regions may be in one-to-one correspondence with the sub-images, and the sizes of the sub-images are the same as the sizes of the display regions. For example, display area 1 in fig. 2D corresponds to sub-image a1, display area 2 corresponds to sub-image a2, and so on, and display area 9 corresponds to sub-image a9 of fig. 2A.
In summary, for each target sub-image, the display area of the target sub-image may be determined, for example, if the target sub-image is sub-image a1, the display area of sub-image a1 is display area 1, and so on.
Illustratively, a plurality of display units and a plurality of output nodes may be disposed, and the display units are connected to the output nodes. One output node may be connected to only one display unit, or may be connected to two or more display units, which is not limited thereto. For example, referring to fig. 2D, taking deployment of a display unit a, a display unit b, a display unit c, and a display unit D as an example, assuming that one output node connects two display units, the display unit a and the display unit b are connected to the output node 1, and the display unit c and the display unit D are connected to the output node 2; or the display unit a and the display unit c are connected with the output node 1, and the display unit b and the display unit d are connected with the output node 2; alternatively, the display unit b and the display unit c are connected to the output node 1, and the display unit a and the display unit d are connected to the output node 2, which is not limited to this connection manner.
Referring to fig. 2D, each display unit has its own corresponding display area, i.e., the display unit needs to display an image in the display area corresponding to the display unit. For example, the display area of the display unit a is the display area a, that is, the display unit a displays an image in the display area a. The display area of the display unit B is the display area B, that is, the display unit B displays an image in the display area B. The display area of the display unit C is the display area C, that is, the display unit C displays an image in the display area C. The display area of the display unit D is the display area D, that is, the display unit D displays an image in the display area D.
Since the main control device knows both the display area of the target sub-image and the display area of each display unit, it can determine whether there is an overlapping area between them. Referring to fig. 2E, there is an overlap region s1 between the display region a of the display unit a and the display region 1 of the sub-image a1; thus, the output node corresponding to the display unit a can be determined as an output node for processing the sub-image a1. Similarly, since there is an overlap region s2 between the display region c of the display unit c and the display region 1 of the sub-image a1, the output node corresponding to the display unit c can also be determined as an output node for processing the sub-image a1. The implementation for the other sub-images is similar to that of sub-image a1.
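The overlap test described above amounts to rectangle intersection. A minimal sketch of how the main control device might assign output nodes (function names and the rectangle layout are hypothetical, not from the patent):

```python
def intersect(r1, r2):
    """Intersection of two rectangles given as (x, y, w, h); None if disjoint."""
    x = max(r1[0], r2[0])
    y = max(r1[1], r2[1])
    x2 = min(r1[0] + r1[2], r2[0] + r2[2])
    y2 = min(r1[1] + r1[3], r2[1] + r2[3])
    if x2 <= x or y2 <= y:
        return None
    return (x, y, x2 - x, y2 - y)

def nodes_for_sub_image(sub_area, unit_areas, unit_to_node):
    """Return (output node, overlap region) pairs for one sub-image's area."""
    result = []
    for unit, area in unit_areas.items():
        overlap = intersect(sub_area, area)
        if overlap is not None:
            result.append((unit_to_node[unit], overlap))
    return result

# sub-image a1 occupies (0, 0, 100, 100); display unit a covers the top
# band of the window and unit c the bottom band, so a1 overlaps both,
# giving the two overlap regions s1 and s2 of Fig. 2E
units = {"a": (0, 0, 150, 60), "c": (0, 60, 150, 60)}
nodes = {"a": 1, "c": 2}
assignments = nodes_for_sub_image((0, 0, 100, 100), units, nodes)
```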
Illustratively, since there is an overlap region s1 between the display region a of the display unit a and the display region 1 of the sub-image a1, the image display position information of the sub-image a1 may be determined according to the overlap region s1, and may be the position information of the overlap region s1: for example, the upper-left point coordinate (or the upper-right, lower-left, or lower-right point coordinate) of the overlap region s1 together with the width and height of the overlap region s1; or the upper-left, upper-right, lower-left, and lower-right point coordinates of the overlap region s1. Of course, the above is only an example of the image display position information, and the present application is not limited thereto.
Since there is an overlap region s2 between the display region c of the display unit c and the display region 1 of the sub-image a1, the image display position information of the sub-image a1 may likewise be determined according to the overlap region s2, and may be the position information of the overlap region s2, without limitation.
Illustratively, since there is an overlap region s1 between the display region a of the display unit a and the display region 1 of the sub-image a1, the division information of the sub-image a1 is determined according to the overlap region s1. Referring to fig. 2F, the division information of the sub-image a1 may be the K point coordinate and the Q point coordinate, representing that the sub-image a1 is divided along the line connecting point K and point Q. Since there is an overlap region s2 between the display region c of the display unit c and the display region 1 of the sub-image a1, the division information of the sub-image a1 may likewise be determined according to the overlap region s2 and, as shown in fig. 2F, may be the K point coordinate and the Q point coordinate.
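The K and Q points of Fig. 2F can be derived from the overlap region. The sketch below is an assumption-laden illustration (the patent does not give this computation): it assumes the overlap region shares three edges with the sub-image's display area, so the cut line is simply the remaining overlap edge, extended across the sub-image area; rectangles are (x, y, w, h) and the function name is hypothetical.

```python
def split_line(sub_area, overlap):
    """Cut line (K, Q) dividing a sub-image's display area at the overlap edge.

    Assumes overlap shares three edges with sub_area, so exactly one
    overlap edge lies strictly inside sub_area; returns None when the
    overlap covers the whole area (no division needed, as for a4).
    """
    sx, sy, sw, sh = sub_area
    ox, oy, ow, oh = overlap
    if ox + ow < sx + sw:        # vertical cut at the overlap's right edge
        return (ox + ow, sy), (ox + ow, sy + sh)
    if ox > sx:                  # vertical cut at the overlap's left edge
        return (ox, sy), (ox, sy + sh)
    if oy + oh < sy + sh:        # horizontal cut at the overlap's bottom edge
        return (sx, oy + oh), (sx + sw, oy + oh)
    if oy > sy:                  # horizontal cut at the overlap's top edge
        return (sx, oy), (sx + sw, oy)
    return None                  # overlap == sub_area: display as a whole

# sub-image a1's area spans display units a and c; the unit boundary is
# horizontal, so K and Q are the endpoints of a horizontal line (Fig. 2F)
k, q = split_line((0, 0, 100, 100), (0, 0, 100, 60))
```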
In summary, for each target sub-image, image display position information of the target sub-image, an output node for processing the target sub-image, and segmentation information of the target sub-image may be obtained.
In step 103, the attribute information of the target sub-image is sent to the output node, where the attribute information includes the image display position information and the sub-image identifier of the target sub-image, so that the output node displays the target sub-image corresponding to the sub-image identifier according to the image display position information.
For example, referring to fig. 2F, since there is an overlap area s1 between the display area a of the display unit a and the display area 1 of the sub-image a1, the attribute information of the sub-image a1, which includes the image display position information of the sub-image a1 (i.e., the position information of the overlap area s 1) and the sub-image identification of the sub-image a1, may be transmitted to the output node corresponding to the display unit a. Thus, after receiving the attribute information, the output node may display the sub-image a1 corresponding to the sub-image identifier according to the position information of the overlap region s1, and the display process of the sub-image a1 may refer to the following embodiments, which are not described herein again.
When the attribute information of the sub-image a1 is transmitted to the output node corresponding to the display unit a, the attribute information may further include the division information of the sub-image a1 determined according to the overlap region s1, for example, the K point coordinates and the Q point coordinates.
For example, referring to fig. 2F, since there is an overlap area s2 between the display area C of the display unit C and the display area 1 of the sub-image a1, the attribute information of the sub-image a1, which includes the image display position information of the sub-image a1 (i.e., the position information of the overlap area s 2) and the sub-image identification of the sub-image a1, may be transmitted to the output node corresponding to the display unit C. Thus, after receiving the attribute information, the output node may display the sub-image a1 corresponding to the sub-image identifier according to the position information of the overlap region s2, and the display process of the sub-image a1 may refer to the following embodiments, which are not described herein again.
When the attribute information of the sub-image a1 is transmitted to the output node corresponding to the display unit c, the attribute information may further include the division information of the sub-image a1 determined according to the overlap region s2, for example, the K point coordinates and the Q point coordinates.
For example, for the sub-image a4, the display area 4 of the sub-image a4 is located entirely within the display area C of the display unit C. When the attribute information of the sub-image a4 is sent to the output node corresponding to the display unit C, the attribute information does not include division information; that is, the sub-image a4 does not need to be divided and is displayed as a whole.
According to the above technical solution, the output node obtains and displays only the target sub-image of the image to be displayed, so that in a high-resolution/ultra-high-resolution scenario both network bandwidth and the processing resources of the output node can be saved. For example, the target sub-image of the image to be displayed, rather than the image to be displayed itself (i.e., the high-resolution image), is transmitted to the output node, thereby saving network bandwidth. Likewise, the output node decodes the target sub-image rather than the whole image to be displayed, thereby saving its processing resources (i.e., decoding resources).
An embodiment of the present application provides an image display method, which may be applied to an output node. Fig. 3 is a flowchart of the image display method, and the method may include:
step 301, acquiring attribute information of a target sub-image; the attribute information of the target sub-image may include, but is not limited to, image display position information and a sub-image identifier of the target sub-image.
Referring to the above embodiment, the master control device may send the attribute information of the target sub-image to the output node, and therefore, the output node may obtain the attribute information of the target sub-image from the master control device.
For example, assuming that the output node corresponding to the display unit A is the output node 1, referring to the above embodiment, the master device may transmit the attribute information of the sub-image a1 to the output node 1, and the output node 1 may acquire the attribute information of the sub-image a1, which includes the image display position information of the sub-image a1 (i.e., the position information of the overlap area s1) and the sub-image identifier of the sub-image a1. Assuming that the output node corresponding to the display unit C is the output node 2, the master device may transmit the attribute information of the sub-image a1 to the output node 2, and the output node 2 may acquire the attribute information of the sub-image a1, which includes the image display position information of the sub-image a1 (i.e., the position information of the overlap area s2) and the sub-image identifier of the sub-image a1.
Step 302, acquiring a target sub-image corresponding to the sub-image identifier; the target sub-image is a sub-image in the image to be displayed, the image to be displayed is divided into M × N sub-images, and M and N are positive integers.
For example, since the attribute information of the sub-image a1 includes the sub-image identifier of the sub-image a1, the output node 1/output node 2 may acquire the target sub-image corresponding to the sub-image identifier, i.e., the sub-image a1.
In a possible implementation manner, the output node may obtain a first coded bitstream for the sub-image identifier from the input node, and decode the first coded bitstream to obtain a target sub-image corresponding to the sub-image identifier, that is, a target sub-image of the image to be displayed. For example, the output node sends the sub-image identifier to the input node, so that the input node sends the first encoded bit stream for the sub-image identifier to the output node, and thus, the output node can decode the first encoded bit stream to obtain the target sub-image.
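The request-by-identifier exchange above can be sketched as follows. This is an illustrative model only, with hypothetical class and method names: the input node keeps one independently encoded bitstream per sub-image identifier, and the output node requests and "decodes" only the bitstream it needs.

```python
# Hypothetical sketch of the first-coded-bitstream exchange; the ("encoded", pixels)
# tuple stands in for a real video codec, which the patent does not specify.

class InputNode:
    def __init__(self):
        self.bitstreams = {}  # sub-image identifier -> first coded bitstream

    def encode_sub_image(self, sub_image_id, pixels):
        # Each sub-image is encoded independently, so it can be requested
        # and decoded on its own by any output node.
        self.bitstreams[sub_image_id] = ("encoded", pixels)

    def get_bitstream(self, sub_image_id):
        # The output node sends the sub-image identifier; the input node
        # answers with the corresponding first coded bitstream.
        return self.bitstreams[sub_image_id]

class OutputNode:
    def fetch_and_decode(self, input_node, sub_image_id):
        tag, pixels = input_node.get_bitstream(sub_image_id)  # stand-in "decode"
        return pixels

input_node = InputNode()
input_node.encode_sub_image("a1", [[1, 2], [3, 4]])
target = OutputNode().fetch_and_decode(input_node, "a1")
```

Only the one requested bitstream crosses the network, which is the bandwidth saving the scheme relies on.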
For example, referring to FIG. 2A, the input node divides the image to be displayed into 9 sub-images, sub-image a1 to sub-image a9. The input node may encode the sub-image a1 to obtain the first coded bitstream 1, encode the sub-image a2 to obtain the first coded bitstream 2, and so on.
After the output node 1 obtains the attribute information of the sub-image a1, since the attribute information includes the sub-image identifier of the sub-image a1, the output node 1 sends the sub-image identifier to the input node. The input node may send the first coded bitstream 1 corresponding to the sub-image identifier to the output node 1. After receiving the first coded bitstream 1, the output node 1 decodes it to obtain the sub-image a1.
Similarly, after the output node 2 obtains the attribute information of the sub-image a1, since the attribute information includes the sub-image identifier of the sub-image a1, the output node 2 sends the sub-image identifier to the input node. The input node may send the first coded bitstream 1 corresponding to the sub-image identifier to the output node 2. After receiving the first coded bitstream 1, the output node 2 decodes it to obtain the sub-image a1.
In another possible embodiment, the output node obtains a second encoded bitstream for the image to be displayed from the remote device, and decodes the content in the second encoded bitstream corresponding to the sub-image identifier (i.e., decodes part of the content in the second encoded bitstream, instead of decoding all the content in the second encoded bitstream), so as to obtain a target sub-image corresponding to the sub-image identifier, i.e., a target sub-image of the image to be displayed.
For example, referring to FIG. 2A, assuming that the image to be displayed needs to be divided into 9 sub-images, sub-image a1 to sub-image a9, the remote device may encode the image to be displayed (note that the entire image to be displayed is encoded here, rather than each sub-image separately), resulting in a second encoded bitstream. The second encoded bitstream may include content corresponding to the sub-image a1, the sub-image a2, and so on, up to the sub-image a9.
Output node 1 may send a request message to the remote device, and the remote device may send the second encoded bitstream to the output node 1 upon receiving the request message. After the output node 1 obtains the second encoded bitstream, since the attribute information of the sub-image a1 includes the sub-image identifier of the sub-image a1, the content corresponding to that sub-image identifier in the second encoded bitstream can be decoded to obtain the sub-image a1.
Similarly, the output node 2 may send a request message to the remote device, and the remote device may send the second encoded bitstream to the output node 2 after receiving the request message. After the output node 2 obtains the second encoded bitstream, since the attribute information of the sub-image a1 includes the sub-image identifier of the sub-image a1, the content corresponding to that sub-image identifier in the second encoded bitstream can be decoded to obtain the sub-image a1.
And 303, displaying the target sub-image according to the image display position information.
In a possible implementation, the attribute information of the target sub-image may further include segmentation information of the target sub-image. Based on this, the output node may divide the target sub-image into at least two data blocks according to the division information of the target sub-image, and select a target data block to be displayed from the at least two data blocks. Then, the output node displays the target block of data according to the image display position information.
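The "divide then select" behavior can be sketched as below. This is a minimal illustration under the assumption that the division information amounts to a vertical cut line (the x coordinate shared by the K and Q points mentioned earlier) in the sub-image's coordinate system; all names and the rectangle convention are illustrative, not from the patent.

```python
# Hedged sketch: rectangles are (x, y, w, h); cut_x is assumed to be the
# x coordinate derived from the K/Q point coordinates in the division information.

def split_sub_image(rect, cut_x):
    """Divide rect into a left and a right data block at the cut line."""
    x, y, w, h = rect
    left = (x, y, cut_x - x, h)
    right = (cut_x, y, x + w - cut_x, h)
    return [left, right]

def select_target_block(blocks, display_pos):
    """Pick the data block matching the image display position information."""
    for block in blocks:
        if block == display_pos:
            return block
    return None

sub_image_rect = (0, 0, 100, 60)
blocks = split_sub_image(sub_image_rect, cut_x=40)    # data block 1 and data block 2
target = select_target_block(blocks, (0, 0, 40, 60))  # e.g. overlap region s1
```

Each output node runs the same split but selects a different block, so together they display the whole sub-image exactly once.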
For example, in practical applications, if the target sub-image does not span the display unit, that is, the display area of the target sub-image is located in the display area of only one display unit, such as sub-image a4, the attribute information of the target sub-image may not include the partition information of the target sub-image. Based on this, the target sub-image does not need to be divided into at least two data blocks, but the entirety of the target sub-image is displayed directly in accordance with the image display position information.
Referring to the above-described embodiment and fig. 2F, the output node 1 may acquire the attribute information of the sub-image a1, which includes the image display position information of the sub-image a1 (i.e., the position information of the overlap area s1), the sub-image identifier of the sub-image a1, and the division information of the sub-image a1 (e.g., the K point coordinates and the Q point coordinates). Based on the sub-image identifier, the output node 1 acquires the sub-image a1. Based on the division information, the output node 1 divides the sub-image a1 into two data blocks, i.e., a data block 1 corresponding to the overlap region s1 and a data block 2 corresponding to the overlap region s2, and selects, from these two data blocks, the data block corresponding to the image display position information (i.e., the position information of the overlap region s1), namely the data block 1, as the target data block. Then, the output node 1 displays the data block 1 according to the position information of the overlap area s1.
Similarly, the output node 2 may acquire the attribute information of the sub-image a1, which includes the image display position information of the sub-image a1 (i.e., the position information of the overlap region s2), the sub-image identifier of the sub-image a1, and the division information of the sub-image a1 (e.g., the K point coordinates and the Q point coordinates). Based on the sub-image identifier, the output node 2 acquires the sub-image a1. Based on the division information, the output node 2 divides the sub-image a1 into a data block 1 corresponding to the overlap region s1 and a data block 2 corresponding to the overlap region s2, and selects the data block 2 corresponding to the image display position information (i.e., the position information of the overlap region s2) as the target data block. The output node 2 then displays the data block 2 according to the position information of the overlap area s2.
At this point, the output node 1 displays the data block 1 of the sub-image a1 in the overlap region s1, and the output node 2 displays the data block 2 of the sub-image a1 in the overlap region s2; that is, the display of the sub-image a1 is completed.
In the above embodiment, the display process of the sub-image a1 is described, and the display processes of the sub-image a2, the sub-image a3, the sub-image a5 and the sub-image a8 are similar to the display process of the sub-image a1, and are not described again here.
Referring to fig. 2D, the attribute information of the sub-image a4, including the image display position information of the sub-image a4, the sub-image identifier of the sub-image a4, and the division information of the sub-image a4, is transmitted to the output node 2 corresponding to the display unit C; because the sub-image a4 is located only in the display area C, its division information is empty. The output node 2 acquires the sub-image a4 corresponding to the sub-image identifier and displays the sub-image a4 directly according to the image display position information, without dividing it. The display processes of the sub-image a6, the sub-image a7, and the sub-image a9 are similar to that of the sub-image a4 and thus are not described here again.
In one possible embodiment, displaying the target sub-image according to the image display position information may include, but is not limited to: the output node determines an image display window of the target sub-image according to the image display position information, and displays the target sub-image in the image display window based on the timestamp of the target sub-image.
For example, the image display position information (taking the position information of the overlap region s1 as an example) includes the top-left corner coordinates of the overlap region s1 and the width and height of the overlap region s1, based on which the output node 1 can determine the image display window of the sub-image a1. The output node 1 then displays the sub-image a1 in the image display window. When displaying the sub-image a1, the output node 1 may also determine the timestamp of the sub-image a1, which indicates the time at which the sub-image a1 is to be displayed; the output node 1 therefore displays the sub-image a1 at that time.
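A minimal sketch of this window-plus-timestamp logic, with illustrative names and units (the patent does not fix a timestamp format): the window is derived from the top-left corner plus width and height, and a sub-image is only shown once its timestamp is reached, which keeps output nodes displaying parts of the same frame in step.

```python
# Hypothetical sketch of determining the display window and gating the display
# on the sub-image's timestamp; time values are abstract ticks.

def display_window(position_info):
    (x, y), w, h = position_info         # top-left corner, width, height
    return {"x": x, "y": y, "w": w, "h": h}

def display_in_window(window, sub_image, timestamp, now):
    if now < timestamp:
        return ("waiting", None)         # not this sub-image's display time yet
    return ("shown", (window["x"], window["y"], sub_image))

win = display_window(((0, 0), 40, 60))   # e.g. position info of overlap region s1
early = display_in_window(win, "a1", timestamp=100, now=90)
on_time = display_in_window(win, "a1", timestamp=100, now=100)
```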
According to the above technical solution, the output node obtains and displays only the target sub-image of the image to be displayed, so that in a high-resolution/ultra-high-resolution scenario both network bandwidth and the processing resources of the output node can be saved. For example, the target sub-image of the image to be displayed, rather than the image to be displayed itself (i.e., the high-resolution image), is transmitted to the output node, thereby saving network bandwidth. Likewise, the output node decodes the target sub-image rather than the whole image to be displayed, thereby saving its processing resources (i.e., decoding resources).
An embodiment of the present application provides an image display method, which may be applied to an input node. Fig. 4 is a flowchart of the image display method, and the method may include:
step 401, obtaining an image to be displayed and a segmentation mode of the image to be displayed.
Step 402, the image to be displayed is divided into M × N sub-images according to the division manner.
For example, the image to be displayed may be a high-resolution/ultra-high-resolution image. The division manner indicates how the image to be displayed is divided into M × N sub-images, so the input node may divide the image to be displayed into M × N sub-images according to the division manner, where M and N are positive integers and may be the same or different.
In a possible embodiment, the input node may determine the resolution of the image to be displayed, and if the resolution is greater than a preset resolution threshold (which may be configured according to experience; a resolution greater than the threshold means the image to be displayed is a high-resolution/ultra-high-resolution image), the input node may divide the image to be displayed into M × N sub-images according to the division manner.
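Steps 401-402 plus the threshold check can be sketched as below. The function names, the (x, y, w, h) rectangle convention, and the particular threshold value are assumptions for illustration, not from the patent.

```python
# Hedged sketch: divide the image into M x N equal sub-image rectangles only
# when its pixel count exceeds a configured threshold.

def split_image(width, height, m, n, threshold_pixels):
    if width * height <= threshold_pixels:
        return [(0, 0, width, height)]          # low resolution: no division
    w, h = width // n, height // m              # n columns, m rows
    return [(col * w, row * h, w, h)
            for row in range(m) for col in range(n)]

# A 4K image divided 3 x 3 (as in fig. 2A) yields 9 sub-images.
subs = split_image(3840, 2160, m=3, n=3, threshold_pixels=3000 * 2000)
```

Each tuple in `subs` is the region of one sub-image (a1 through a9 in the running example), which the input node would then encode independently.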
Step 403, sending the partition mode to the main control device, so that the main control device determines image display position information of the target sub-image and an output node for processing the target sub-image according to the partition mode, and sends the attribute information of the target sub-image to the output node; for example, the attribute information may include image display position information of the target sub-image and a sub-image identification of the target sub-image.
Step 404, sending the coded bit stream for the target sub-image to the output node, so that the output node obtains the target sub-image corresponding to the sub-image identifier according to the coded bit stream, and displays the target sub-image according to the image display position information.
For example, after the input node sends the division manner to the master control device, the processing procedure of the master control device may refer to the foregoing embodiments and is not described here again. For the processing procedure of the output node after the input node sends the coded bitstream to the output node, likewise refer to the above embodiments.
The above technical solution of the embodiment of the present application is described below with reference to specific application scenarios. Before the technical solutions of the present application are introduced, the following concepts related to the technical solutions of the present application are introduced:
local sources: the input node is used as a local source and can send the image to be displayed to the input node, and the input node encodes the image to be displayed to obtain an encoded bit stream. And the coded bit stream is transmitted to an output node through a network, and the output node decodes the coded bit stream to obtain an image to be displayed. And the output node performs splicing display and other processing on the image to be displayed. The above process may be referred to as a process of a local source.
Multi-track local sources: the input node is used as a local source, and may send an image to be displayed to the input node, and if the resolution of the image to be displayed is greater than a certain threshold, the input node may segment the image to be displayed, for example, segment the image to be displayed into M × N sub-images, and encode each sub-image independently, so as to obtain an encoded bit stream of each sub-image. And the output node acquires one or more paths of coded bit streams from all the coded bit streams as required, decodes the acquired coded bit streams to obtain sub-images of the image to be displayed, and performs splicing display and other processing on the sub-images. The above process may be referred to as a process of a multi-track local source.
A network source: a remote device (e.g., IPC (IP Camera), DVR (Network Video Recorder), etc.) serves as a Network source, and outputs a coded bitstream. And the output node acquires the coded bit stream from the remote equipment, decodes the coded bit stream to obtain an image to be displayed, and performs splicing display and other processing on the image to be displayed. The above process may be referred to as a process of the network source.
Multi-track network source: the remote device serves as a network source and outputs an encoded bit stream, the encoded bit stream comprises a plurality of subcode streams of the subimages (the subcode stream of each subimage is a track), the subcode stream of each subimage can be decoded independently, but the subcode stream of the subimage cannot be acquired independently by an output node. The output node acquires the coded bit stream (namely the subcode streams of a plurality of subimages) from the remote equipment, and can decode one or more subcode streams in the coded bit stream as required to obtain the subimages of the image to be displayed, and performs splicing display and other processing on the subimages. The above process may be referred to as a process of a multi-track network source.
Decoding and splicing: when the display window of the image to be displayed spans a plurality of output nodes, the output nodes decode the coded bit stream respectively and ensure that a complete and low-delay picture is displayed at the display end.
Application scenario 1: for an application scenario of a multi-track local source, referring to fig. 5A, a network structure diagram of the application scenario is shown, a distributed system may include a main control device, a network switch, at least one input node, and a plurality of output nodes, where the number of the input nodes may be at least one, and the number of the output nodes may be multiple. Each output node can be connected with at least one display unit, and the figure takes the connection of 2 display units as an example.
The master control device is responsible for managing all input nodes and all output nodes, and can add/delete input nodes and output nodes in the distributed system. The main control device is also responsible for state management of the input nodes, state management of the output nodes, layer information management, and communication with user portals such as clients/web; the functions of the main control device are not limited here.
The network switch is used as network equipment for independent configuration management and provides a network communication foundation for the main control equipment, the input node and the output node. For example, the communication process between the master control device and the input node may be transferred through a network switch; the communication process between the main control equipment and the output node can be transferred through a network switch; the communication process between the input node and the output node can be relayed through the network switch.
The input node is responsible for encoding the received image to be displayed (namely the image to be displayed from the signal source), and when the resolution of the image to be displayed is larger than a threshold value, the image to be displayed can be segmented.
The output node is responsible for decoding the coded bit stream and performing processing such as splicing display, and after the output node decodes the coded bit stream, an image to be displayed can be obtained and output to a display unit of a splicing display system, so that the display unit displays the image to be displayed.
In the above application scenario, referring to fig. 5B, a flowchart of the image display method is shown.
Step 511, the input node obtains the image to be displayed and the segmentation mode of the image to be displayed.
For example, the image to be displayed may be input to the input node, and the image to be displayed may be a high/ultra-high resolution image, such as a 4K resolution image or a higher resolution image.
For example, a local 4K ultra high definition signal may be connected to the input node, and on this basis, the image to be displayed with the resolution of 4K may be input to the input node, so that the input node obtains the image to be displayed.
For another example, when an ultra-high-resolution signal (e.g., a 32K signal) needs to be accessed, the signal may be divided into a plurality of smaller ultra-high-definition signals (e.g., 4K ultra-high-definition signals), and each divided 4K ultra-high-definition signal is connected to an input node, so that the input node obtains an image to be displayed with 4K resolution. The input nodes adopt distributed management and an unlimited number of input nodes can be supported; for example, 8 input nodes may be deployed. When a 32K ultra-high-resolution signal needs to be accessed, it may be divided into eight 4K ultra-high-definition signals: the 1st 4K signal is input to the input node 1, the 2nd to the input node 2, and so on, so that each input node obtains an image to be displayed with 4K resolution.
Of course, the above is only an example, and the method is not limited thereto, as long as the input node can obtain the image to be displayed, and the resolution of the image to be displayed may be 4K, may be greater than 4K, and may also be less than 4K.
For convenience of description, in the application scenario, a processing procedure of one input node for an image to be displayed is taken as an example, and processing procedures of other input nodes for the image to be displayed are similar, and are not repeated in the following.
For example, the input node may obtain a segmentation mode of the image to be displayed, for example, a segmentation mode may be configured in advance at the input node, where the segmentation mode indicates that the image to be displayed is segmented into M × N sub-images, and on this basis, the input node determines the segmentation mode configured in advance as the segmentation mode of the image to be displayed.
In step 512, the input node divides the image to be displayed into M × N sub-images according to the division manner.
Illustratively, the division mode means that the image to be displayed is divided into M × N sub-images, M and N are positive integers, and M and N may be the same or different. Based on this, the input node may divide the image to be displayed into M × N sub-images based on the division manner. For example, the image to be displayed is divided into 3 × 3 sub-images according to the division manner, as shown in fig. 2A, or the image to be displayed is divided into 2 × 4 sub-images according to the division manner, as shown in fig. 2B, or the image to be displayed is divided into 4 × 2 sub-images according to the division manner, as shown in fig. 2C. Of course, the above are only a few examples and are not limiting.
In a possible implementation manner, the input node may further determine a resolution of the image to be displayed, and if the resolution is greater than a preset resolution threshold (e.g., 3K, 3.5K, etc.), the input node may divide the image to be displayed into M × N sub-images according to the division manner. If the resolution is not greater than the preset resolution threshold, the input node does not need to divide the image to be displayed into M × N sub-images. For convenience of description, the example that the input node divides the image to be displayed into M × N sub-images is described later.
Step 513, the input node sends the division manner to the master control device.
For example, the master device may send a query message to the input node, so that the input node sends the division manner to the master device after receiving the query message.
In step 514, the master device receives the division manner and stores it.
For example, the dividing manner of the image to be displayed may also be referred to as a multi-track layout manner of the image to be displayed.
Step 515, the main control device divides the image display window of the image to be displayed into M × N display areas according to the dividing manner, and determines the display area of each sub-image, where the display areas and the sub-images may be in one-to-one correspondence. For example, display area 1 in fig. 2D corresponds to sub-image a1, display area 2 corresponds to sub-image a2, and so on, and display area 9 corresponds to sub-image a9 of fig. 2A.
For example, after the master device receives the display command for the image to be displayed, the master device may divide the image display window of the image to be displayed into M × N display regions according to the dividing manner.
For example, when the main control device divides the image display window of the image to be displayed into M × N display regions according to the division manner, the M × N display regions may be display regions with equal proportions.
In step 516, for a target sub-image (i.e., any sub-image) of the image to be displayed, if there is an overlap area between the display area of the display unit and the display area of the target sub-image, the main control device determines an output node corresponding to the display unit as an output node for processing the target sub-image, and determines image display position information of the target sub-image and segmentation information of the target sub-image according to the overlap area.
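The overlap test in step 516 is a rectangle intersection, which can be sketched as below with (x, y, w, h) rectangles; the function name, coordinate convention, and sample values are illustrative, not from the patent.

```python
# Hedged sketch of step 516: the intersection of a display unit's area and a
# sub-image's display area, if non-empty, becomes the image display position
# information sent to that unit's output node.

def overlap_region(unit_rect, sub_rect):
    ux, uy, uw, uh = unit_rect
    sx, sy, sw, sh = sub_rect
    x1, y1 = max(ux, sx), max(uy, sy)
    x2, y2 = min(ux + uw, sx + sw), min(uy + uh, sy + sh)
    if x2 <= x1 or y2 <= y1:
        return None                  # no overlap: this unit does not show the sub-image
    return (x1, y1, x2 - x1, y2 - y1)

# Example: a display unit overlaps the right part of a sub-image's display area.
s = overlap_region((60, 0, 100, 100), (0, 0, 100, 50))
```

The master control device would run this test for every (display unit, sub-image) pair; each non-empty result determines one output node that must process the sub-image.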
Step 517, the main control device sends the attribute information of the target sub-image to the output node (i.e. the output node that processes the target sub-image), where the attribute information includes image display position information of the target sub-image, sub-image identifier of the target sub-image, and partition information of the target sub-image.
Step 518, the output node obtains attribute information of the target sub-image, where the attribute information includes image display position information of the target sub-image, sub-image identification of the target sub-image, and segmentation information of the target sub-image.
The output node retrieves the first coded bit stream identified for the sub-image from the input node, step 519.
For example, referring to fig. 2A, after the input node divides the image to be displayed into 9 sub-images, the sub-image a1 is encoded to obtain the first encoded bitstream 1, the sub-image a2 is encoded to obtain the first encoded bitstream 2, and so on. The output node 1 acquires the attribute information of the sub-image a1, which includes the sub-image identifier of the sub-image a1, and sends that sub-image identifier to the input node. Based on this, the input node sends the first encoded bitstream 1 corresponding to the sub-image identifier of the sub-image a1 to the output node 1.
Step 520, the output node decodes the first encoded bitstream to obtain a target sub-image of the image to be displayed.
And step 521, the output node displays the target sub-image according to the image display position information of the target sub-image, based on the timestamp of the target sub-image. For example, the output node determines the display area of the target sub-image according to the image display position information, and displays the target sub-image in the display area based on the timestamp of the target sub-image.
For example, the input node may add a timestamp in the first encoded bitstream, and the output node may determine the display time of the target sub-image according to the timestamp, and then perform synchronous splicing at the display time.
Application scenario 2: for an application scenario of a multi-track network source, referring to fig. 6A, a network structure diagram of the application scenario is shown, a distributed system may include a master device, a network switch, a remote device, and a plurality of output nodes, where the number of the output nodes may be multiple. The remote device is used as a network source and can be IPC, DVR and the like. In the application scenario, decoding splicing display of the ultra-high-definition camera is supported, namely the remote equipment can be the ultra-high-definition camera, namely the image acquired by the ultra-high-definition camera is displayed.
In the application scenario, referring to fig. 6B, a flowchart of the image display method is shown.
In step 611, the main control device determines the segmentation mode of the image to be displayed.
For example, the remote device may obtain a segmentation mode of the image to be displayed, and encode the image to be displayed according to the segmentation mode to obtain an encoded bitstream. For example, if the division manner indicates that the image to be displayed is divided into M × N sub-images, the encoded bitstream of the image to be displayed may include contents corresponding to the M × N sub-images. On this basis, the master device may obtain the segmentation mode for the image to be displayed from the remote device. For example, the master device may send a query message to the remote device, so that the remote device sends the split mode to the master device after receiving the query message.
Step 612, the main control device divides the image display window of the image to be displayed into M × N display areas according to the dividing manner, and determines the display area of each sub-image, where the display areas and the sub-images may be in one-to-one correspondence. For example, after receiving the display command for the image to be displayed, the master device may divide the image display window of the image to be displayed into M × N display regions according to the dividing manner.
Step 613, for a target sub-image (i.e. any sub-image) of the image to be displayed, if there is an overlap area between the display area of the display unit and the display area of the target sub-image, the main control device determines an output node corresponding to the display unit as an output node for processing the target sub-image, and determines image display position information of the target sub-image and segmentation information of the target sub-image according to the overlap area.
In step 614, the main control device sends the attribute information of the target sub-image to the output node (i.e., the output node that processes the target sub-image), where the attribute information includes the image display position information of the target sub-image, the sub-image identifier of the target sub-image, and the segmentation information of the target sub-image.
Step 615, the output node obtains the attribute information of the target sub-image, where the attribute information includes the image display position information of the target sub-image, the sub-image identifier of the target sub-image, and the segmentation information of the target sub-image.
Step 616, the output node acquires a second encoded bitstream for the image to be displayed, and decodes the content corresponding to the sub-image identifier in the second encoded bitstream to obtain the target sub-image of the image to be displayed.
For example, after receiving the attribute information of the target sub-image, the output node may obtain a second encoded bitstream for the image to be displayed from the network source (e.g., the remote device). Unlike the implementation with a multi-track local source, when the output node obtains the second encoded bitstream from the network source, the second encoded bitstream is a bitstream for the entire image to be displayed, not a bitstream for a single sub-image. For example, assuming that the image to be displayed is segmented into 9 sub-images, sub-image A1 to sub-image A9, the output node acquires a second encoded bitstream for the image to be displayed, and the second encoded bitstream includes the content corresponding to each of sub-image A1 to sub-image A9.
For example, assuming that output node 1 obtains the attribute information of sub-image A1, then after obtaining the second encoded bitstream, output node 1 may decode the content in the second encoded bitstream corresponding to the sub-image identifier of sub-image A1 (known from the attribute information), so as to obtain sub-image A1 of the image to be displayed.
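The selective decoding in step 616 can be illustrated by treating the second encoded bitstream as a container of per-sub-image tracks, of which the output node decodes only the track matching its sub-image identifier. The container layout and the `decode` stub below are hypothetical stand-ins, not an actual codec API:

```python
def decode_target_sub_image(second_bitstream, sub_image_id, decode):
    """second_bitstream: mapping of sub-image identifier -> encoded content
    for the whole image to be displayed (e.g. tracks for A1..A9).

    Only the content matching sub_image_id is decoded, which is how the
    scheme saves the processing resources of the output node.
    """
    encoded = second_bitstream[sub_image_id]
    return decode(encoded)


# Toy bitstream with one encoded track per sub-image A1..A9.
bitstream = {f"A{i}": f"enc(A{i})" for i in range(1, 10)}
fake_decode = lambda payload: payload.replace("enc", "dec")  # stand-in decoder
img = decode_target_sub_image(bitstream, "A1", fake_decode)
```

The point of the sketch is that the cost of `decode` is paid once per assigned sub-image, not once for the full ultra-high-definition image.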
Step 617, the output node displays the target sub-image according to the image display position information of the target sub-image, based on the timestamp of the target sub-image. For example, the output node determines the display area of the target sub-image according to the image display position information, and displays the target sub-image in that display area based on the timestamp of the target sub-image.
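Timestamp-based display in step 617 can be sketched as a small priority queue: decoded sub-images are held and released for display only when the clock reaches their timestamp, so sub-images come out in timestamp order. A minimal sketch under that assumption (class and method names are hypothetical):

```python
import heapq


class TimestampedDisplayQueue:
    """Hold decoded sub-images and release each one for display only when
    the clock reaches its timestamp."""

    def __init__(self):
        self._heap = []  # min-heap ordered by timestamp

    def push(self, timestamp, sub_image):
        heapq.heappush(self._heap, (timestamp, sub_image))

    def pop_due(self, now):
        """Return all sub-images whose timestamp is <= now, in order."""
        due = []
        while self._heap and self._heap[0][0] <= now:
            due.append(heapq.heappop(self._heap)[1])
        return due


q = TimestampedDisplayQueue()
q.push(40, "A1@t40")  # frame timestamped 40 ms
q.push(0, "A1@t0")    # frame timestamped 0 ms, pushed out of order
ready = q.pop_due(0)  # only the frame timestamped 0 is due
```

Releasing on a shared timestamp is also what would keep sub-images of the same frame aligned across multiple output nodes on a television wall.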
In the foregoing technical solutions, the target sub-image may be displayed in a display window of the display device. For example, the output node sends the target sub-image and/or the data blocks of the target sub-image to the display unit, so that the display unit displays the target sub-image in the display window of the display device; the type of the display device is not limited.
According to the above technical solution, the output node can obtain and display a sub-image of the image to be displayed, which saves both network bandwidth and the processing resources of the output node. For example, a sub-image of the image to be displayed, rather than the image to be displayed itself (i.e., the high-resolution image), is transmitted to the output node, thereby saving network bandwidth. For another example, the output node decodes the sub-image of the image to be displayed rather than the image to be displayed itself, thereby saving the processing resources of the output node. The main control device may determine a target sub-window for displaying the sub-image and send the attribute information of the target sub-window to the output node, so that the output node displays the sub-image according to the attribute information of the target sub-window, i.e., at the correct position. By splicing the signal sources, a complete ultra-high-definition picture can be displayed on a television wall. This approach optimizes the decoding resource configuration: the multi-track decoding scheme reduces the granularity of decoding resource allocation, and decoding resources are allocated with only the individual sub-images that cross output nodes taken into account, so decoding resources are utilized to the greatest extent. It also optimizes network bandwidth performance: by dividing the single-path bitstream of an ultra-high-definition local source into multiple bitstreams for transmission, the risk of a bandwidth bottleneck on a single network port is reduced.
In the above technical solutions, the execution sequence is only an example provided for convenience of description, and in practical applications, the execution sequence between the steps may also be changed, and the execution sequence is not limited. Moreover, in other embodiments, the steps of the respective methods do not have to be performed in the order shown and described herein, and the methods may include more or less steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
Based on the same application concept as the method, an image display apparatus is also provided in the embodiment of the present application, as shown in fig. 7A, which is a structural diagram of the image display apparatus, and the apparatus includes:
an obtaining module 711, configured to obtain attribute information of the target sub-image; the attribute information of the target sub-image comprises image display position information and a sub-image identifier of the target sub-image; acquiring a target sub-image corresponding to the sub-image identifier; the target sub-image is a sub-image in an image to be displayed, the image to be displayed is divided into M × N sub-images, and M and N are positive integers;
and a display module 712, configured to display the target sub-image according to the image display position information.
The obtaining module 711 is specifically configured to, when obtaining the target sub-image corresponding to the sub-image identifier: acquire a first encoded bitstream for the sub-image identifier from an input node, and decode the first encoded bitstream to obtain the target sub-image corresponding to the sub-image identifier; or, acquire a second encoded bitstream for the image to be displayed from a remote device, and decode the content corresponding to the sub-image identifier in the second encoded bitstream to obtain the target sub-image corresponding to the sub-image identifier.
The attribute information of the target sub-image further comprises segmentation information of the target sub-image; the display module 712 is specifically configured to, when displaying the target sub-image according to the image display position information:
dividing the target sub-image into at least two data blocks according to the dividing information of the target sub-image, and selecting a target data block to be displayed from the at least two data blocks;
and displaying the target data block according to the image display position information.
The display module 712 is specifically configured to, when displaying the target sub-image according to the image display position information: determining an image display window of the target sub-image according to the image display position information;
and displaying the target sub-image in the image display window based on the time stamp of the target sub-image.
Based on the same application concept as the method, an image display apparatus is also provided in the embodiment of the present application, as shown in fig. 7B, which is a structural diagram of the image display apparatus, and the apparatus includes:
an obtaining module 721, configured to obtain a segmentation manner of an image to be displayed, where the segmentation manner represents that the image to be displayed is segmented into M × N sub-images; wherein M is a positive integer, and N is a positive integer; the determining module 722 is configured to determine, according to the segmentation mode, image display position information of a target sub-image in an image to be displayed and an output node for processing the target sub-image; a sending module 723, configured to send attribute information of the target sub-image to the output node, where the attribute information includes the image display position information and a sub-image identifier of the target sub-image, so that the output node displays the target sub-image corresponding to the sub-image identifier according to the image display position information.
The obtaining module 721 is specifically configured to, when obtaining the segmentation mode of the image to be displayed:
acquiring, from an input node, a segmentation mode of an image to be displayed for a local source; or,
acquiring, from a remote device, a segmentation mode of the image to be displayed for a network source.
The determining module 722, when determining the image display position information of the target sub-image and the output node for processing the target sub-image according to the dividing manner, is specifically configured to:
determining a display area of the target sub-image according to the segmentation mode;
if the display area of the display unit and the display area of the target sub-image have an overlapping area, determining an output node corresponding to the display unit as an output node for processing the target sub-image;
and determining the image display position information of the target sub-image according to the overlapping area.
The determining module 722 is further configured to: determining segmentation information of the target sub-image according to the overlapping area; wherein the attribute information of the target sub-image further includes segmentation information of the target sub-image.
Based on the same application concept as the method, an output node is further provided in the embodiment of the present application, and from a hardware level, a schematic diagram of a hardware architecture of the output node provided in the embodiment of the present application may be as shown in fig. 8A. The output node may include: a processor 811 and a machine-readable storage medium 812, the machine-readable storage medium 812 storing machine-executable instructions executable by the processor 811; the processor 811 is configured to execute machine executable instructions to implement the methods disclosed in the above examples of the present application. For example, the processor 811 is configured to execute machine-executable instructions to perform the following steps:
acquiring attribute information of a target sub-image; the attribute information of the target sub-image comprises image display position information and a sub-image identifier of the target sub-image;
acquiring a target sub-image corresponding to the sub-image identifier; the target sub-image is a sub-image in an image to be displayed, the image to be displayed is divided into M × N sub-images, and M and N are positive integers;
and displaying the target sub-image according to the image display position information.
Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium, where several computer instructions are stored on the machine-readable storage medium, and when the computer instructions are executed by a processor, the method disclosed in the above example of the present application can be implemented.
For example, the computer instructions, when executed by a processor, enable the following steps:
acquiring attribute information of a target sub-image; the attribute information of the target sub-image comprises image display position information and a sub-image identifier of the target sub-image;
acquiring a target sub-image corresponding to the sub-image identifier; the target sub-image is a sub-image in an image to be displayed, the image to be displayed is divided into M × N sub-images, and M and N are positive integers;
and displaying the target sub-image according to the image display position information.
Based on the same application concept as the method described above, a main control device is also provided in the embodiment of the present application, and from a hardware level, a schematic diagram of a hardware architecture of the main control device provided in the embodiment of the present application may be as shown in fig. 8B. The master device may include: a processor 821 and a machine-readable storage medium 822, the machine-readable storage medium 822 storing machine-executable instructions executable by the processor 821; the processor 821 is configured to execute machine-executable instructions to implement the methods disclosed in the above examples of the present application. For example, the processor 821 is used to execute machine-executable instructions to implement the following steps:
acquiring a segmentation mode of an image to be displayed, wherein the segmentation mode represents that the image to be displayed is segmented into M × N sub-images; wherein M is a positive integer, and N is a positive integer;
aiming at a target sub-image in the image to be displayed, determining image display position information of the target sub-image and an output node for processing the target sub-image according to the segmentation mode;
and sending the attribute information of the target sub-image to the output node, wherein the attribute information comprises the image display position information and the sub-image identifier of the target sub-image, so that the output node displays the target sub-image corresponding to the sub-image identifier according to the image display position information.
Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium, where several computer instructions are stored on the machine-readable storage medium, and when the computer instructions are executed by a processor, the method disclosed in the above example of the present application can be implemented.
For example, the computer instructions, when executed by a processor, enable the following steps:
acquiring a segmentation mode of an image to be displayed, wherein the segmentation mode represents that the image to be displayed is segmented into M × N sub-images; wherein M is a positive integer, and N is a positive integer;
aiming at a target sub-image in the image to be displayed, determining image display position information of the target sub-image and an output node for processing the target sub-image according to the segmentation mode;
and sending the attribute information of the target sub-image to the output node, wherein the attribute information comprises the image display position information and the sub-image identifier of the target sub-image, so that the output node displays the target sub-image corresponding to the sub-image identifier according to the image display position information.
The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard disk drive), a solid state drive, any type of storage disk (e.g., an optical disk or a DVD), a similar storage medium, or a combination thereof.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (14)

1. An image display method, characterized in that the method comprises:
acquiring attribute information of a target sub-image; the attribute information of the target sub-image comprises image display position information and a sub-image identifier of the target sub-image;
acquiring a target sub-image corresponding to the sub-image identifier; the target sub-image is a sub-image in an image to be displayed, the image to be displayed is divided into M × N sub-images, and M and N are positive integers;
and displaying the target sub-image according to the image display position information.
2. The method of claim 1,
the acquiring of the target sub-image corresponding to the sub-image identifier includes:
acquiring a first coded bit stream for the sub-image identifier from an input node, and decoding the first coded bit stream to obtain a target sub-image corresponding to the sub-image identifier; or,
and acquiring a second coded bit stream aiming at the image to be displayed from the remote equipment, and decoding the content corresponding to the sub-image identifier in the second coded bit stream to obtain a target sub-image corresponding to the sub-image identifier.
3. The method of claim 1,
the attribute information of the target sub-image further comprises segmentation information of the target sub-image;
the displaying the target sub-image according to the image display position information includes:
dividing the target sub-image into at least two data blocks according to the dividing information of the target sub-image, and selecting a target data block to be displayed from the at least two data blocks;
and displaying the target data block according to the image display position information.
4. The method of claim 1,
the displaying the target sub-image according to the image display position information includes:
determining an image display window of the target sub-image according to the image display position information;
and displaying the target sub-image in the image display window based on the time stamp of the target sub-image.
5. An image display method, characterized in that the method comprises:
acquiring a segmentation mode of an image to be displayed, wherein the segmentation mode represents that the image to be displayed is segmented into M × N sub-images; wherein M is a positive integer, and N is a positive integer;
aiming at a target sub-image in the image to be displayed, determining image display position information of the target sub-image and an output node for processing the target sub-image according to the segmentation mode;
and sending the attribute information of the target sub-image to the output node, wherein the attribute information comprises the image display position information and the sub-image identifier of the target sub-image, so that the output node displays the target sub-image corresponding to the sub-image identifier according to the image display position information.
6. The method of claim 5,
the segmentation method for acquiring the image to be displayed comprises the following steps:
acquiring a segmentation mode of an image to be displayed for a local source from an input node; or,
and acquiring a segmentation mode of the image to be displayed aiming at the network source from the remote equipment.
7. The method of claim 5,
the determining of the image display position information of the target sub-image and the output node for processing the target sub-image according to the segmentation mode comprises the following steps:
determining a display area of the target sub-image according to the segmentation mode;
if the display area of the display unit and the display area of the target sub-image have an overlapping area, determining an output node corresponding to the display unit as an output node for processing the target sub-image;
and determining the image display position information of the target sub-image according to the overlapping area.
8. The method of claim 7, further comprising:
determining segmentation information of the target sub-image according to the overlapping area;
wherein the attribute information of the target sub-image further includes segmentation information of the target sub-image.
9. An image display method, characterized in that the method comprises:
acquiring an image to be displayed and a segmentation mode of the image to be displayed;
dividing the image to be displayed into M x N sub-images according to the division mode;
sending the division mode to a main control device, so that the main control device determines image display position information of a target sub-image and an output node for processing the target sub-image according to the division mode, and sends attribute information of the target sub-image to the output node, wherein the attribute information comprises the image display position information and a sub-image identifier of the target sub-image;
and sending the coded bit stream aiming at the target sub-image to the output node, so that the output node obtains the target sub-image corresponding to the sub-image identifier according to the coded bit stream, and displaying the target sub-image according to the image display position information.
10. The method of claim 9,
the segmenting the image to be displayed into M × N sub-images according to the segmentation mode includes:
determining the resolution of the image to be displayed; and if the resolution is greater than a preset resolution threshold, dividing the image to be displayed into M x N sub-images according to the dividing mode.
11. An image display apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring the attribute information of the target sub-image; the attribute information of the target sub-image comprises image display position information and a sub-image identifier of the target sub-image; acquiring a target sub-image corresponding to the sub-image identifier; the target sub-image is a sub-image in an image to be displayed, the image to be displayed is divided into M × N sub-images, and M and N are positive integers;
and the display module is used for displaying the target sub-image according to the image display position information.
12. An image display apparatus, characterized in that the apparatus comprises:
the image display device comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a segmentation mode of an image to be displayed, and the segmentation mode represents that the image to be displayed is segmented into M × N sub-images; wherein M is a positive integer, and N is a positive integer;
the determining module is used for determining image display position information of a target sub-image and an output node for processing the target sub-image according to the segmentation mode aiming at the target sub-image in the image to be displayed;
and the sending module is used for sending the attribute information of the target sub-image to the output node, wherein the attribute information comprises the image display position information and the sub-image identifier of the target sub-image, so that the output node displays the target sub-image corresponding to the sub-image identifier according to the image display position information.
13. An output node, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
acquiring attribute information of a target sub-image; the attribute information of the target sub-image comprises image display position information and a sub-image identifier of the target sub-image;
acquiring a target sub-image corresponding to the sub-image identifier; the target sub-image is a sub-image in an image to be displayed, the image to be displayed is divided into M × N sub-images, and M and N are positive integers;
and displaying the target sub-image according to the image display position information.
14. A master device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
acquiring a segmentation mode of an image to be displayed, wherein the segmentation mode represents that the image to be displayed is segmented into M × N sub-images; wherein M is a positive integer, and N is a positive integer;
aiming at a target sub-image in the image to be displayed, determining image display position information of the target sub-image and an output node for processing the target sub-image according to the segmentation mode;
and sending the attribute information of the target sub-image to the output node, wherein the attribute information comprises the image display position information and the sub-image identifier of the target sub-image, so that the output node displays the target sub-image corresponding to the sub-image identifier according to the image display position information.
CN202110157083.5A 2020-03-12 2021-02-04 Image display method, device and equipment Active CN113395564B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020101709580 2020-03-12
CN202010170958 2020-03-12

Publications (2)

Publication Number Publication Date
CN113395564A true CN113395564A (en) 2021-09-14
CN113395564B CN113395564B (en) 2023-05-26

Family

ID=77616796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110157083.5A Active CN113395564B (en) 2020-03-12 2021-02-04 Image display method, device and equipment

Country Status (1)

Country Link
CN (1) CN113395564B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114170940A (en) * 2021-12-02 2022-03-11 武汉华星光电技术有限公司 Display device, display system and distributed function system
CN115426518A (en) * 2022-08-09 2022-12-02 杭州海康威视数字技术股份有限公司 Display control system, image display method and LED display control system

Citations (6)

Publication number Priority date Publication date Assignee Title
CN102547205A (en) * 2011-12-21 2012-07-04 广东威创视讯科技股份有限公司 Method and system for displaying ultra-high resolution image
CN108040245A (en) * 2017-11-08 2018-05-15 深圳康得新智能显示科技有限公司 Methods of exhibiting, system and the device of 3-D view
CN109062525A (en) * 2018-07-11 2018-12-21 深圳市东微智能科技股份有限公司 Data processing method, device and the computer equipment of splice displaying system
CN109218656A (en) * 2017-06-30 2019-01-15 杭州海康威视数字技术股份有限公司 Image display method, apparatus and system
CN109640026A (en) * 2018-12-26 2019-04-16 威创集团股份有限公司 A kind of high-resolution signal source spell wall display methods, device and equipment
CN109669654A (en) * 2018-12-22 2019-04-23 威创集团股份有限公司 A kind of combination display methods and device


Cited By (4)

Publication number Priority date Publication date Assignee Title
CN114170940A (en) * 2021-12-02 2022-03-11 武汉华星光电技术有限公司 Display device, display system and distributed function system
WO2023097832A1 (en) * 2021-12-02 2023-06-08 武汉华星光电技术有限公司 Display device, display system, and distributed function system
CN115426518A (en) * 2022-08-09 2022-12-02 杭州海康威视数字技术股份有限公司 Display control system, image display method and LED display control system
CN115426518B (en) * 2022-08-09 2023-08-25 杭州海康威视数字技术股份有限公司 Display control system, image display method and LED display control system

Also Published As

Publication number Publication date
CN113395564B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
US20210104075A1 (en) Predictive Coding For Point Cloud Compression
RU2506715C2 (en) Transmission of variable visual content
US20030063675A1 (en) Image data providing system and method thereof
US20210227236A1 (en) Scalability of multi-directional video streaming
CN113395564B (en) Image display method, device and equipment
JP2001507541A (en) Sprite-based video coding system
US11089283B2 (en) Generating time slice video
KR101882596B1 (en) Bitstream generation and processing methods and devices and system
US20200259880A1 (en) Data processing method and apparatus
JP2016506139A (en) Method and apparatus for reducing digital video image data
CN111818295B (en) Image acquisition method and device
KR100746005B1 (en) Apparatus and method for managing multipurpose video streaming
CN114222166B (en) Multi-channel video code stream real-time processing and on-screen playing method and related system
WO2021093882A1 (en) Video meeting method, meeting terminal, server, and storage medium
CN113259729B (en) Data switching method, server, system and storage medium
JP5172874B2 (en) Video synchronization apparatus, video display apparatus, video synchronization method, and program
CN107734278B (en) Video playback method and related device
JP2007013697A (en) Image receiver and image receiving method
KR102218187B1 (en) Apparatus and method for providing video contents
CN114157903A (en) Redirection method, redirection device, redirection equipment, storage medium and program product
US20170048532A1 (en) Processing encoded bitstreams to improve memory utilization
CN112114760A (en) Image processing method and device
CN116760986B (en) Candidate motion vector generation method, candidate motion vector generation device, computer equipment and storage medium
CN112738056B (en) Encoding and decoding method and system
US20220141469A1 (en) Method and apparatus for constructing motion information list in video encoding and decoding and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant