CN112019853A - Image processing method, system, device, storage medium and processor

Image processing method, system, device, storage medium and processor

Info

Publication number
CN112019853A
CN112019853A (application CN202010904660.8A)
Authority
CN
China
Prior art keywords
data
target
sub
image
coded data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010904660.8A
Other languages
Chinese (zh)
Inventor
Zhang Kai (张凯)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Wanxiang Electronics Technology Co Ltd
Original Assignee
Xian Wanxiang Electronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Wanxiang Electronics Technology Co Ltd filed Critical Xian Wanxiang Electronics Technology Co Ltd
Priority to CN202010904660.8A
Publication of CN112019853A
Legal status: Pending

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/20 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44 - Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/04 - Synchronising

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses an image processing method, system, device, storage medium and processor. The method comprises: acquiring one frame of image from each sending end so as to obtain multiple frames of images, wherein the image acquired from each sending end is collected from the image source device connected to that sending end; encoding the multiple frames of images to obtain target encoded data, wherein the target encoded data comprises a plurality of sub-encoded data in one-to-one correspondence with the multiple frames of images, and each sub-encoded data is provided with a target identifier; and sending the target encoded data to a receiving-end server, wherein the target identifiers enable the receiving-end server to split the corresponding sub-encoded data from the target encoded data, each sub-encoded data is sent to the corresponding receiving end for decoding, and the decoded image is displayed by the display device connected to the receiving end. The invention solves the technical problem that multiple frames of images are difficult to display synchronously.

Description

Image processing method, system, device, storage medium and processor
Technical Field
The present invention relates to the field of image processing, and in particular, to an image processing method, system, apparatus, storage medium, and processor.
Background
Currently, each image source device generally directly transmits each image to a management server, and the management server transmits the received image of each image source device to a corresponding screen, so as to display the whole image composed of each image on a display device composed of a plurality of screens.
However, because of transmission network delay, especially when the network is congested, the images displayed by the display device may fall out of synchronization. For example, some screens of the display device display an image while the other screens display nothing; or some screens display the current frame image while the other screens still display the previous frame image. There is therefore a technical problem that multiple frames of images are difficult to display synchronously.

For the above technical problem that multiple frames of images are difficult to display synchronously, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the invention provide an image processing method, an image processing system, an image processing device, a storage medium and a processor, so as to at least solve the technical problem that multiple frames of images are difficult to display synchronously.
According to an aspect of the embodiments of the present invention, an image processing method is provided. The method may include the following steps: acquiring one frame of image from each transmitting end, so as to obtain multiple frames of images, wherein the image acquired from each transmitting end is collected from the image source device connected to that transmitting end; encoding the multiple frames of images to obtain target encoded data, wherein the target encoded data includes a plurality of sub-encoded data in one-to-one correspondence with the multiple frames of images, and each sub-encoded data is provided with a target identifier; and sending the target encoded data to a receiving-end server, wherein the target identifiers enable the receiving-end server to split the corresponding sub-encoded data from the target encoded data, each sub-encoded data is sent to the corresponding receiving end for decoding, and the decoded image is displayed by the display device connected to the receiving end.
Optionally, encoding the multiple frames of images to obtain the target encoded data includes: converting the multiple frames of images into frame image data, and encoding the frame image data to obtain the target encoded data. Optionally, after converting the multiple frames of images into the frame image data, the method further includes: setting, in the frame image data, a target identifier corresponding to each sub-encoded data.
Optionally, setting, in the frame image data, the target identifier corresponding to each sub-encoded data includes at least one of the following: adding the target identifier of each sub-frame image data to the header information of the frame image data, wherein the target identifiers are arranged in the header information in the same order as the sub-frame image data, a start marker of the current sub-frame image data is added to the head data of each sub-frame image data, and an end marker is set on the tail data of each sub-frame image data; adding the target identifier of each sub-frame image data and the data length information of each sub-frame image data to the header information of the frame image data; setting the corresponding target identifier on the head data of each sub-frame image data; setting the corresponding target identifier on both the head data and the tail data of each sub-frame image data; and setting the corresponding target identifier on every data unit of each sub-frame image data.
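To make the second of the options just listed concrete (carrying each sub-frame's target identifier and data length in the frame header), the following is a minimal sketch in Python; the header layout, field widths and function names are illustrative assumptions of this description and are not prescribed by the embodiment.

```python
import struct

def build_frame_with_header(sub_frames: dict) -> bytes:
    """Pack sub-frame image data into one whole frame of image data.

    The header lists, in order, a (target identifier, data length) pair for
    every sub-frame; the payload is the sub-frame data concatenated in the
    same order. Field widths are illustrative only.
    """
    header = struct.pack(">H", len(sub_frames))              # number of sub-frames
    payload = b""
    for target_id, data in sub_frames.items():
        header += struct.pack(">HI", target_id, len(data))   # identifier + length
        payload += data
    return header + payload

def split_frame_by_header(frame: bytes) -> dict:
    """Recover each sub-frame by cutting the stream at the recorded lengths."""
    (count,) = struct.unpack_from(">H", frame, 0)
    entries, offset = [], 2
    for _ in range(count):
        target_id, length = struct.unpack_from(">HI", frame, offset)
        entries.append((target_id, length))
        offset += struct.calcsize(">HI")
    sub_frames = {}
    for target_id, length in entries:
        sub_frames[target_id] = frame[offset:offset + length]
        offset += length
    return sub_frames
```

For example, split_frame_by_header(build_frame_with_header({1: b"img1", 2: b"img2"})) returns {1: b"img1", 2: b"img2"}, which mirrors how a receiving side could recover each sub-frame from the header alone.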
Optionally, encoding the frame image data to obtain the target encoded data includes: encoding each sub-frame image data in the frame image data to obtain the plurality of sub-encoded data; and generating the target encoded data from the plurality of sub-encoded data, each of which is provided with its target identifier.
According to another aspect of the embodiments of the present invention, another image processing method is also provided. The method may include the following steps: receiving target encoded data, wherein the target encoded data is obtained by an acquisition-end server encoding multiple frames of images and includes a plurality of sub-encoded data in one-to-one correspondence with the multiple frames of images, the multiple frames of images are acquired by the acquisition-end server from each transmitting end, the image acquired from each transmitting end is collected from the image source device connected to that transmitting end, and each sub-encoded data is provided with a target identifier; splitting the corresponding sub-encoded data from the target encoded data based on the target identifiers; and sending each sub-encoded data to the corresponding receiving end for decoding to obtain an image, wherein the image is displayed by the display device connected to the receiving end.
Optionally, before sending the sub-encoded data to the corresponding receiving end for decoding to obtain the image, the method further includes: determining the receiving end corresponding to the target identifier.
Optionally, splitting the corresponding sub-encoded data from the target encoded data based on the target identifier includes: determining the position of the target identifier in the target encoded data; storing the data associated with that position in the target encoded data into a data packet; and determining the data packet as the sub-encoded data.
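As a rough illustration of this position-based splitting, the sketch below assumes that each sub-encoded data begins with a one-byte target identifier drawn from a known, reserved set of values that never appear inside the payload; both the byte-level framing and the function name are assumptions made only for this example.

```python
from typing import Dict, Optional, Set

def split_by_leading_identifier(target_encoded: bytes, known_ids: Set[int]) -> Dict[int, bytes]:
    """Split target encoded data into sub-encoded data packets.

    A new data packet is opened whenever a known target identifier byte is
    found; that byte and the following bytes go into the packet until the
    next identifier appears.
    """
    packets: Dict[int, bytearray] = {}
    current: Optional[bytearray] = None
    for byte in target_encoded:
        if byte in known_ids:
            current = bytearray([byte])        # data carrying the identifier opens the packet
            packets[byte] = current
        elif current is not None:
            current.append(byte)               # subsequent data of the current sub-encoded data
    return {tid: bytes(buf) for tid, buf in packets.items()}
```

For instance, split_by_leading_identifier(bytes([1, 65, 66, 2, 67]), {1, 2}) yields one packet per identifier, each containing the identifier byte followed by its data.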
According to another aspect of the embodiments of the invention, an image processing system is also provided. The system may include: a plurality of image source devices, each used for generating an image, so as to obtain multiple frames of images; a plurality of sending ends connected with the plurality of image source devices in one-to-one correspondence and used for respectively collecting the multiple frames of images; an acquisition-end server connected with the plurality of sending ends and used for encoding the multiple frames of images to obtain target encoded data, wherein the target encoded data includes a plurality of sub-encoded data in one-to-one correspondence with the multiple frames of images and each sub-encoded data is provided with a target identifier; a receiving-end server connected with the acquisition-end server and used for splitting the corresponding sub-encoded data from the target encoded data based on the target identifiers; a plurality of receiving ends connected with the receiving-end server, wherein each receiving end is used for decoding the corresponding sub-encoded data to obtain an image; and a display device connected with each receiving end and used for displaying the images.
According to another aspect of the embodiments of the present invention, an image processing apparatus is also provided. The apparatus may include: an acquisition unit, used for acquiring one frame of image from each sending end so as to obtain multiple frames of images, wherein the image acquired from each sending end is collected from the image source device connected to that sending end; an encoding unit, used for encoding the multiple frames of images to obtain target encoded data, wherein the target encoded data includes a plurality of sub-encoded data in one-to-one correspondence with the multiple frames of images and each sub-encoded data is provided with a target identifier; and a sending unit, used for sending the target encoded data to the receiving-end server, wherein the target identifiers enable the receiving-end server to split the corresponding sub-encoded data from the target encoded data, each sub-encoded data is sent to the corresponding receiving end for decoding, and the decoded image is displayed by the display device connected to the receiving end.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium. The computer readable storage medium includes a stored program, wherein the apparatus in which the computer readable storage medium is located is controlled to execute the image processing method according to the embodiment of the present invention when the program runs.
According to another aspect of the embodiments of the present invention, there is also provided a processor. The processor is used for running a program, wherein the program executes the image processing method of the embodiment of the invention when running.
In the embodiments of the invention, one frame of image is acquired from each sending end so as to obtain multiple frames of images, wherein the image acquired from each sending end is collected from the image source device connected to that sending end; the multiple frames of images are encoded to obtain target encoded data, wherein the target encoded data includes a plurality of sub-encoded data in one-to-one correspondence with the multiple frames of images and each sub-encoded data is provided with a target identifier; and the target encoded data is sent to a receiving-end server, wherein the target identifiers enable the receiving-end server to split the corresponding sub-encoded data from the target encoded data, each sub-encoded data is sent to the corresponding receiving end for decoding, and the decoded image is displayed by the display device connected to the receiving end. That is to say, the present application encodes all the images into one block of target encoded data, so that regardless of the network transmission quality or bandwidth limitations, the receiving-end server receives the target encoded data of all the images; each target identifier in the target encoded data enables the receiving-end server to split the corresponding sub-encoded data from the target encoded data, each sub-encoded data is sent to the corresponding receiving end for decoding, and the decoded images are displayed synchronously by the display devices connected to the receiving ends. Thus, even when the network transmission quality is very poor, the receiving-end server receives the sub-encoded data of all the images, which avoids the problem of multiple images being out of synchronization due to delay or other transmission problems, solves the technical problem that multiple frames of images are difficult to display synchronously, and achieves the technical effect of synchronous display of the multiple frames of images.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of an image processing system according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of image processing according to an embodiment of the present invention;
FIG. 3 is a flow diagram of another image processing method according to an embodiment of the invention;
FIG. 4 is a schematic view of a large screen system according to the related art;
FIG. 5 is a schematic diagram of a large screen system according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of setting target identifiers on the head data and the tail data of sub-frame image data according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of setting a target mark on each data of sub-frame image data according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention; and
FIG. 9 is a schematic diagram of another image processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
According to an embodiment of the present invention, there is provided an image processing system.
FIG. 1 is a schematic diagram of an image processing system according to an embodiment of the present invention. As shown in fig. 1, the image processing system 10 may include: a plurality of image source devices 11, a plurality of transmitting terminals 12, a collecting terminal server 13, a receiving terminal server 14, a receiving terminal 15, and a display device 16.
The image source devices 11 are respectively used for generating images to obtain multi-frame images.
In this embodiment, the images respectively generated by the plurality of image source devices 11 may be desktop images, and the images may be referred to as image data, so that the desktop images may also be referred to as desktop image data. The image source device 11 may be a Personal Computer (PC), a notebook, a tablet, a camera, etc., and is not limited herein.
And the plurality of sending ends 12 are connected with the plurality of image source devices 11 in a one-to-one correspondence manner and are used for respectively collecting the plurality of frames of images.
The image processing system of this embodiment may include a plurality of transmitting terminals 12 (S) connected to the plurality of image source devices 11 in one-to-one correspondence, where each transmitting terminal 12 is configured to acquire an image of the image source device 11 connected to it and transmit the acquired image to the acquisition-end server 13. For example, if the image processing system of this embodiment is provided with 4 image source devices 11, then 4 transmitting terminals S1 to S4 are provided on the image source side, and the transmitting terminals S1 to S4 are respectively configured to acquire the images of the 4 image source devices 11. Optionally, the transmitting terminal of this embodiment may be built into the image source device 11, or may be external to the image source device 11.
It should be noted that the transmitting end of the embodiment is only used for acquiring images.
And the acquisition end server 13 is connected with the plurality of sending ends 12 and is used for coding the multi-frame images to obtain target coded data, wherein the target coded data comprise a plurality of sub-coded data which correspond to the multi-frame images one by one, and the sub-coded data are provided with target identifiers.
In this embodiment, the acquisition-end server 13 may receive the multiple frames of images transmitted by the multiple transmitting terminals 12. The acquisition-end server 13 may splice the data of all the received images into one whole frame of image data, where the splicing order of the images may be arbitrary, and then encode the whole frame of image data so as to generate the target encoded data, where the target encoded data may also be referred to as packed encoded data or large-image encoded data.
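The splicing step can be pictured with the following minimal sketch, which assumes each image is represented as a list of equal-height pixel rows; the representation and the function name are hypothetical and stand in for whatever pixel format the acquisition-end server 13 actually uses.

```python
from typing import List

Image = List[List[int]]   # an image as rows of pixel values; all images assumed the same height

def splice_side_by_side(images: List[Image]) -> Image:
    """Splice several frame images row by row into one whole frame of image data.

    The splicing order is simply the order of the input list, since the
    embodiment allows the splicing manner to be arbitrary; the server only
    needs to remember which region belongs to which source image so that the
    matching target identifier can later be set.
    """
    height = len(images[0])
    whole_frame: Image = []
    for row_index in range(height):
        row: List[int] = []
        for image in images:
            row.extend(image[row_index])       # place this image's row next to the previous ones
        whole_frame.append(row)
    return whole_frame
```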
In this embodiment, a target identifier corresponding to each sub-encoded data may be set in the frame image data, which may be done in any of the following ways.
Optionally, the target identifier of each sub-frame image data is added to the header information of the frame image data, wherein the target identifiers are arranged in the header information in the same order as the sub-frame image data; in addition, a start marker of the current sub-frame image data is added to the head data of each sub-frame image data, and an end marker is set on the tail data of each sub-frame image data.

In this embodiment, the frame image data, that is, the current frame image data, may include header information, and the target identifier of each sub-frame image data may be added to the header information; the target identifiers may be arranged in the header information in a certain order, optionally in the same order as the sub-frame image data. A start marker of the current sub-frame image data may further be added to the head data of each sub-frame image data, and an end marker may be set on the tail data of each sub-frame image data, so as to mark each sub-frame image data, thereby achieving the purpose of setting, in the frame image data, a target identifier corresponding to each sub-encoded data.
Alternatively, the target identifier of each sub-frame image data and the data length information of each sub-frame image data are added to the header information of the frame image data.
With this approach, the target identifier of each sub-frame image data and the data length information of each sub-frame image data are added to the header information of the current frame image data, thereby achieving the purpose of setting, in the frame image data, the target identifier corresponding to each sub-encoded data. The receiving-end server 14 may then first obtain the data length information of each sub-frame image data and cut the code stream according to that length information, so as to obtain each sub-frame image data.
Alternatively, in this embodiment a target identifier may be set on the head data of the sub-frame image data of each image, with different target identifiers set on the head data of different sub-frame image data. It should be noted that, with this method, the sub-frame image data of a single image cannot be split apart, that is, it must remain contiguous.

Optionally, in this embodiment target identifiers may be set on both the head data and the tail data of the sub-frame image data of each image; the head data and the tail data of the same sub-frame image data may carry the same target identifier, while different sub-frame image data carry different target identifiers. It should be noted that, with this method, the sub-frame image data of a single image likewise cannot be split apart.

Optionally, this embodiment may also set a target identifier on every data unit in the sub-frame image data of each image, and the target identifier may be the same for all data units of the same image. It should be noted that, with this method, the sub-frame image data of the multiple frames of images may be interleaved with one another.
Then, the acquisition-end server 13 may encode the spliced whole frame of image data. During encoding, whenever a target identifier is encountered, a target identifier is correspondingly set in the sub-encoded data; the target identifier is a mark (or label), for example an image Identifier (ID). The plurality of sub-encoded data, each provided with its target identifier, are then assembled into the target encoded data. Specifically, if a target identifier is set on the head data of the sub-frame image data of an image, the target identifier is correspondingly set on the head data of the sub-encoded data of that image; if target identifiers are set on the head data and the tail data of the sub-frame image data of an image, target identifiers are correspondingly set on the head data and the tail data of the sub-encoded data of that image; and if a target identifier is set on every data unit of the sub-frame image data of an image, the target identifier is correspondingly set on every data unit of the sub-encoded data of that image.
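The encoding-and-assembly step described above could be sketched as follows; zlib.compress is only a stand-in for the actual image encoder, and the record layout of target identifier plus length plus sub-encoded data is an assumed illustration rather than the embodiment's mandated bitstream.

```python
import struct
import zlib

def encode_to_target_data(sub_frames: dict) -> bytes:
    """Encode each sub-frame image data and assemble the target encoded data.

    Each record carries the target identifier, the length of the sub-encoded
    data and the sub-encoded data itself, so the receiving-end server can
    split the stream again.
    """
    target_encoded = b""
    for target_id, raw_sub_frame in sub_frames.items():
        sub_encoded = zlib.compress(raw_sub_frame)                     # stand-in encoder
        target_encoded += struct.pack(">HI", target_id, len(sub_encoded))
        target_encoded += sub_encoded
    return target_encoded
```

The identifier-plus-length record plays the role of the target identifier "encountered during encoding" in the passage above; a real codec would emit its own bitstream in place of zlib.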
In this embodiment, the target encoded data includes a plurality of sub-encoded data corresponding to the plurality of images; a sub-encoded data may also be referred to as the small-image encoded data of its image, and the sub-encoded data corresponding to each image is provided with a target identifier, which can be used to uniquely identify that sub-encoded data.
It should be noted that, within the target encoded data of this embodiment, the sub-encoded data corresponding to each image can be extracted by means of the target identifiers set in the target encoded data.
And the receiving end server 14 is connected with the acquisition end server 13 and is used for splitting corresponding sub-coded data from the target coded data based on the target identifier.
In this embodiment, the receiving end server 14 is configured to receive the target encoded data sent by the collecting end server 13, extract the target identifier from the target encoded data, and then split the corresponding sub-encoded data from the target encoded data according to the target identifier, that is, split the corresponding sub-encoded data from the target encoded data according to the target identifier of each sub-encoded data, so as to obtain a plurality of sub-encoded data.
In this embodiment, the splitting of the corresponding sub-coded data from the target coded data based on the target identifier by the receiving end server 14 may include the following several ways.
Optionally, in the case where a target identifier is set at the position of the head data of each sub-encoded data: when a target identifier is found in the target encoded data, a new data packet is created, and the data carrying that target identifier together with the subsequent data in the target encoded data is put into the data packet until another target identifier is found; the data packet is then determined as the sub-encoded data corresponding to that target identifier.

Optionally, in the case where target identifiers are set at the positions of both the head data and the tail data of each sub-encoded data: when a new target identifier is found in the target encoded data, a new data packet is created, and the data carrying that target identifier together with the subsequent data in the target encoded data is put into the data packet; when the same target identifier is found again in the target encoded data, the data carrying it is the last data of the current sub-encoded data, so that data is also put into the data packet, the data packet is determined as the corresponding sub-encoded data, and the next new data packet is prepared.

Optionally, in the case where target identifiers are set at the positions of all the data of each sub-encoded data: when a target identifier is found in the target encoded data, it is determined whether a data packet for that target identifier already exists; if so, the data carrying the target identifier is put into that data packet, and if not, a new data packet is created and the data is put into it; this continues until all the data in the target encoded data has been put into the corresponding data packets, each data packet being one sub-encoded data.
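For the third case, in which every data unit carries a target identifier, the demultiplexing at the receiving-end server 14 might look like the sketch below, where the target encoded data is assumed to arrive as (target identifier, data chunk) pairs; that pairing is an assumption introduced only for this example.

```python
from typing import Dict, Iterable, Tuple

def demux_by_per_unit_identifier(units: Iterable[Tuple[int, bytes]]) -> Dict[int, bytes]:
    """Group data units carrying the same target identifier into one packet.

    A data packet is created the first time an identifier is seen; later
    units with the same identifier are appended to it, which is why the
    sub-encoded data of different images may be interleaved in the stream.
    """
    packets: Dict[int, bytearray] = {}
    for target_id, chunk in units:
        packets.setdefault(target_id, bytearray()).extend(chunk)
    return {tid: bytes(buf) for tid, buf in packets.items()}
```

For example, demux_by_per_unit_identifier([(1, b"ab"), (2, b"cd"), (1, b"ef")]) returns {1: b"abef", 2: b"cd"}, reassembling each image's sub-encoded data even though the units arrived interleaved.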
And a plurality of receiving terminals 15 connected to the receiving terminal server 14, wherein each receiving terminal 15 is configured to decode the corresponding sub-coded data to obtain an image.
The receiving end (R)15 of this embodiment may correspond to the above target identifier, so that a plurality of target identifiers may correspond to a plurality of receiving ends 15, and the corresponding relationship between the receiving end and the target identifier may be preset, for example, target identifier 1 corresponds to receiving end R2, target identifier 2 corresponds to receiving end R1, target identifier 3 corresponds to receiving end R3, and target identifier 4 corresponds to receiving end R4.
Each receiving end 15 of this embodiment may be configured to decode the corresponding sub-encoded data, thereby obtaining a corresponding image. Therefore, the receiving end server 14 of this embodiment may send the sub-encoded data identified by the target identifier 1 to the receiving end R2 for decoding to obtain a corresponding image, send the sub-encoded data identified by the target identifier 2 to the receiving end R1 for decoding to obtain a corresponding image, send the sub-encoded data identified by the target identifier 3 to the receiving end R3 for decoding to obtain a corresponding image, and send the sub-encoded data identified by the target identifier 4 to the receiving end R4 for decoding to obtain a corresponding image.
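A minimal sketch of the preset correspondence between target identifiers and receiving ends, and of routing the split sub-encoded data accordingly; ID_TO_RECEIVER, receiver_decode and dispatch are hypothetical names, and zlib again stands in for the real decoder used at the receiving ends.

```python
import zlib
from typing import Dict

# Preset correspondence between target identifiers and receiving ends,
# matching the example above (identifier 1 -> R2, 2 -> R1, 3 -> R3, 4 -> R4).
ID_TO_RECEIVER: Dict[int, str] = {1: "R2", 2: "R1", 3: "R3", 4: "R4"}

def receiver_decode(sub_encoded: bytes) -> bytes:
    """Stand-in for the decoding performed at a receiving end 15 (placeholder codec)."""
    return zlib.decompress(sub_encoded)

def dispatch(packets: Dict[int, bytes]) -> None:
    """Send each split sub-encoded data to the receiving end mapped to its target identifier."""
    for target_id, sub_encoded in packets.items():
        receiver = ID_TO_RECEIVER[target_id]
        image = receiver_decode(sub_encoded)   # in the system this decoding happens at the receiving end itself
        print(f"{receiver}: decoded an image of {len(image)} bytes for its screen")
```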
And a display device 16 connected to each receiving terminal 15 for displaying an image.
The display device 16 of this embodiment may include a plurality of screens, where each screen may correspond to an image corresponding to one piece of sub-encoded data, so that this embodiment may display the corresponding image in each screen in the display device 16.
The image processing system of this embodiment may also be referred to as a large screen system. Because the acquisition-end server encodes all the images into the target encoded data, regardless of the network transmission quality or bandwidth limitations, the receiving-end server receives the target encoded data of all the images; each target identifier in the target encoded data enables the receiving-end server to split the corresponding sub-encoded data from the target encoded data, each sub-encoded data is sent to the corresponding receiving end for decoding, and the decoded images are displayed synchronously by the display device connected to the receiving ends. In other words, no matter how poor the network transmission quality is or how limited the bandwidth is, the receiving-end server either receives the sub-encoded data of all the images or receives none of them, so that the problem of multiple images being out of synchronization due to delay or other transmission problems is avoided. It should be noted that, in a scenario where the encoding manner is progressive, if the transmission quality is poor, the receiving-end server may receive only part of the layers of the encoded data of all the images, in which case all screens of the display device display a blurred image.
Example 2
According to an embodiment of the present invention, an embodiment of an image processing method is provided. It should be noted that the steps shown in the flowcharts of the drawings may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one given here.
The following describes an image processing method according to an embodiment of the present invention from the perspective of the acquisition-end server.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the present invention. As shown in fig. 2, the method may include the steps of:
Step S202, acquiring one frame of image from each sending end respectively, thereby obtaining multiple frames of images.
In the technical solution provided by step S202 of the present invention, the image acquired from each transmitting end is acquired from an image source device connected to the transmitting end, and the image may also be a desktop image, so that the desktop image may also be referred to as desktop image data. The image source device may be a PC, a notebook, a tablet, a camera, etc., and is not limited herein. According to the embodiment, the acquisition end server can acquire one frame of image from each transmitting end respectively, so that multiple frames of images are obtained.
In this embodiment, the plurality of transmitting ends correspond to the plurality of image source devices one to one. The acquisition-end server is connected with the plurality of image source devices through the plurality of transmitting ends, where each transmitting end is used to collect the image of the image source device connected to it and transmit the collected image to the acquisition-end server. For example, in this embodiment, 4 transmitting ends S1 to S4, corresponding one to one to 4 image source devices, are arranged on the image source device side, and the transmitting ends S1 to S4 are respectively used to collect the images of the 4 image source devices, so as to obtain the multiple frames of images.
It should be noted that the sending end of this embodiment is only used for acquiring an image, and the acquiring end server may be used for encoding a received multi-frame image.
Step S204, encoding the multiple frames of images to obtain target encoded data, wherein the target encoded data includes a plurality of sub-encoded data in one-to-one correspondence with the multiple frames of images, and each sub-encoded data is provided with a target identifier.
In the technical solution provided in step S204 of the present invention, after one frame of image is acquired from each sending end so as to obtain multiple frames of images, the multiple frames of images may be encoded to obtain target encoded data, wherein the target encoded data includes a plurality of sub-encoded data in one-to-one correspondence with the multiple frames of images, each sub-encoded data is provided with a target identifier, and a sub-encoded data may also be referred to as the small-image encoded data of its image.
In this embodiment, the acquisition-end server may encode the multiple frames of images as one whole frame of image data, so as to obtain the target encoded data, which may also be referred to as packed encoded data or large-image encoded data. The target encoded data is composed of a plurality of sub-encoded data in one-to-one correspondence with the multiple frames of images, and each sub-encoded data is provided with a corresponding target identifier, which can be used to uniquely label that sub-encoded data.
It should be noted that, within the target encoded data of this embodiment, the sub-encoded data corresponding to each image can be extracted by means of the target identifiers set in the target encoded data.
Step S206, sending the target encoded data to a receiving-end server, wherein the target identifiers enable the receiving-end server to split the corresponding sub-encoded data from the target encoded data, each sub-encoded data is sent to the corresponding receiving end for decoding, and the decoded image is displayed by the display device connected to the receiving end.
In the technical solution provided in step S206 of the present invention, after the multi-frame image is encoded to obtain the target encoded data, the collecting end server may send the target encoded data to the receiving end server, and the target identifier may be extracted from the target encoded data by the receiving end server, and then the receiving end server splits corresponding sub-encoded data from the target encoded data according to the target identifier, that is, splits corresponding sub-encoded data from the target encoded data according to the target identifier of each sub-encoded data, so as to obtain a plurality of sub-encoded data.
In this embodiment, the receiving end corresponding to the target identifier may be determined according to an allocation rule. The distribution rule includes a correspondence between the target identifier and the receiving end, so that a plurality of target identifiers of a plurality of sub-coded data can correspond to a plurality of receiving ends, and the correspondence between the target identifiers and the receiving ends can be preset, for example, target identifier 1 corresponds to receiving end R2, target identifier 2 corresponds to receiving end R1, target identifier 3 corresponds to receiving end R3, and target identifier 4 corresponds to receiving end R4.
The sub-coded data of this embodiment can be decoded by the corresponding receiving end to obtain the corresponding image. Therefore, the sub-coded data identified by the target identifier 1 in this embodiment can be sent from the receiving end server to the receiving end R2 for decoding, so as to obtain a corresponding image, the sub-coded data identified by the target identifier 2 can be sent from the receiving end server to the receiving end R1 for decoding, so as to obtain a corresponding image, the sub-coded data identified by the target identifier 3 can be sent from the receiving end server to the receiving end R3 for decoding, so as to obtain a corresponding image, and the sub-coded data identified by the target identifier 4 can be sent from the receiving end server to the receiving end R4 for decoding, so as to obtain a corresponding image.
The image obtained by decoding the sub-encoded data through the receiving end of the embodiment may be displayed by a display device connected to the receiving end, where the display device may include a plurality of screens, and each screen may correspond to an image corresponding to one sub-encoded data, so that the embodiment may synchronously display corresponding images in each screen in the display device.
Through the above steps S202 to S206, one frame of image is acquired from each transmitting end so as to obtain multiple frames of images, wherein the image acquired from each transmitting end is collected from the image source device connected to that transmitting end; the multiple frames of images are encoded to obtain target encoded data, wherein the target encoded data includes a plurality of sub-encoded data in one-to-one correspondence with the multiple frames of images and each sub-encoded data is provided with a target identifier; and the target encoded data is sent to a receiving-end server, wherein the target identifiers enable the receiving-end server to split the corresponding sub-encoded data from the target encoded data, each sub-encoded data is sent to the corresponding receiving end for decoding, and the decoded image is displayed by the display device connected to the receiving end. That is to say, in this embodiment all the images are encoded into one block of target encoded data, so that regardless of the network transmission quality or bandwidth limitations, the receiving-end server receives the target encoded data of all the images; each target identifier in the target encoded data enables the receiving-end server to split the corresponding sub-encoded data from the target encoded data, each sub-encoded data is sent to the corresponding receiving end for decoding, and the decoded images are displayed synchronously by the display devices connected to the receiving ends. Thus, even when the network transmission quality is very poor, the receiving-end server receives the sub-encoded data of all the images, which avoids the problem of multiple images being out of synchronization due to delay or other transmission problems, solves the technical problem that multiple frames of images are difficult to display synchronously, and achieves the technical effect of synchronous display of the multiple frames of images.
The above-described method of this embodiment is further described below.
As an optional implementation manner, encoding multiple frames of images to obtain target encoded data includes: converting a plurality of frame images into frame image data; and coding the frame image data to obtain target coded data.
In this embodiment, when encoding multiple frames of images is implemented to obtain encoded data, the data of the multiple frames of images may be spliced to obtain frame image data, which is also a whole frame of image data. Optionally, when the data of the multiple frames of images are spliced, the data of the multiple frames of images may be randomly spliced, and then the frame image data is encoded to generate the target encoded data.
As an alternative embodiment, after converting the multiple frames of images into the frame image data, the method further includes: setting, in the frame image data, a target identifier corresponding to each sub-encoded data.
In this embodiment, each piece of sub-coded data has a corresponding target identifier, and this embodiment may set a target identifier corresponding to each piece of sub-coded data in the frame image data, and the setting of a target identifier corresponding to each piece of sub-coded data in the frame image data of this embodiment is further described below.
As an alternative embodiment, setting, in the frame image data, the target identifier corresponding to each sub-encoded data includes at least one of the following: adding the target identifier of each sub-frame image data to the header information of the frame image data, wherein the target identifiers are arranged in the header information in the same order as the sub-frame image data, a start marker of the current sub-frame image data is added to the head data of each sub-frame image data, and an end marker is set on the tail data of each sub-frame image data; adding the target identifier of each sub-frame image data and the data length information of each sub-frame image data to the header information of the frame image data; setting the corresponding target identifier on the head data of each sub-frame image data; setting the corresponding target identifier on both the head data and the tail data of each sub-frame image data; and setting the corresponding target identifier on every data unit of each sub-frame image data.
In this embodiment, the frame image data, that is, the current frame image data, includes header information, and the target identifier of each sub-frame image data may be added to the header information; the target identifiers may be arranged in the header information in a certain order, optionally in the same order as the sub-frame image data. A start marker of the current sub-frame image data may further be added to the head data of each sub-frame image data, and an end marker may be set on the tail data of each sub-frame image data, so as to mark each sub-frame image data, thereby achieving the purpose of setting, in the frame image data, a target identifier corresponding to each sub-encoded data.
In this embodiment, the target identifier of each sub-frame image data and the data length information of each sub-frame image data may be added to the header information of the current frame image data, thereby achieving the purpose of setting, in the frame image data, the target identifier corresponding to each sub-encoded data. The receiving-end server may then first obtain the data length information of each sub-frame image data and cut the code stream according to that length information, so as to obtain each sub-frame image data.
In this embodiment, a corresponding target identifier may be set on the head data of the sub-frame image data of each image, with different target identifiers set on the head data of different sub-frame image data. It should be noted that, with this method, the sub-frame image data of a single image cannot be split apart, that is, it must remain contiguous.

Optionally, in this embodiment target identifiers may be set on both the head data and the tail data of the sub-frame image data of each image; the head data and the tail data of the same sub-frame image data may carry the same target identifier, while different sub-frame image data carry different target identifiers. It should be noted that, with this method, the sub-frame image data of a single image likewise cannot be split apart.

Optionally, this embodiment may also set a target identifier on every data unit in the sub-frame image data of each image, and the target identifier may be the same for all data units of the same image. It should be noted that, with this method, the sub-frame image data of the multiple frames of images may be interleaved with one another.
As an optional implementation, encoding the frame image data to obtain the target encoded data includes: respectively encoding the plurality of sub-frame image data in the frame image data to obtain the plurality of sub-encoded data; and generating the target encoded data from the plurality of sub-encoded data, each of which is provided with its target identifier.
When encoding the frame image data to obtain the target encoded data in this embodiment, the acquisition-end server may encode the spliced frame image data by respectively encoding the plurality of sub-frame image data, so as to obtain the plurality of sub-encoded data. During the encoding process, whenever a target identifier is encountered, a target identifier is set at the corresponding position of the sub-encoded data, and the target encoded data is then generated from the plurality of sub-encoded data, each provided with its target identifier.
Optionally, in this embodiment, if a target identifier is set on the head data of the sub-frame image data of an image, the target identifier is correspondingly set on the head data of the sub-encoded data of that image; if target identifiers are set on the head data and the tail data of the sub-frame image data of an image, target identifiers are correspondingly set on the head data and the tail data of the sub-encoded data of that image; and if a target identifier is set on every data unit of the sub-frame image data of an image, the target identifier is correspondingly set on every data unit of the sub-encoded data of that image.
In this embodiment, because the encoding of all the images is completed at the acquisition-end server side and the acquisition-end server sends the target encoded data to the receiving-end server as a whole, it can be ensured that the receiving-end server receives all the images even if the transmission network is congested, so that all the images can be displayed synchronously on the screens of the display device, which solves the technical problem that the display of multiple desktop images is not synchronized.
The following describes an image processing method according to an embodiment of the present invention from the perspective of the receiving-end server.
Fig. 3 is a flowchart of another image processing method according to an embodiment of the present invention. As shown in fig. 3, the method may include the steps of:
Step S302, receiving target encoded data, wherein the target encoded data is obtained by an acquisition-end server encoding multiple frames of images and includes a plurality of sub-encoded data in one-to-one correspondence with the multiple frames of images, the multiple frames of images are acquired by the acquisition-end server from each transmitting end, the image acquired from each transmitting end is collected from the image source device connected to that transmitting end, and each sub-encoded data is provided with a target identifier.
In the technical solution provided by step S302 of the present invention, the receiving-end server receives the target encoded data, where the target encoded data is obtained by the acquisition-end server encoding multiple frames of images; that is, the acquisition-end server treats the multiple frames of images as one whole frame of image data and then encodes the whole frame of image data, so as to obtain the target encoded data. In this embodiment, the target encoded data is composed of a plurality of sub-encoded data in one-to-one correspondence with the multiple frames of images, and each sub-encoded data is provided with a corresponding target identifier, which can be used to uniquely label that sub-encoded data. The multiple frames of images may be images generated by a plurality of image source devices; optionally, the multiple frames of images are acquired by the acquisition-end server from each transmitting end, where the image acquired from each transmitting end is collected from the image source device connected to that transmitting end.
Step S304, splitting the corresponding sub-encoded data from the target encoded data based on the target identifiers.
In the technical solution provided by step S304 of the present invention, within the target encoded data, the sub-encoded data corresponding to each image can be extracted by means of the target identifiers set in the target encoded data.
The receiving end server of the embodiment extracts a plurality of target identifiers from the target coded data, and then splits corresponding sub-coded data from the target coded data according to the plurality of target identifiers, that is, splits corresponding sub-coded data from the target coded data according to the target identifier of each sub-coded data, thereby obtaining a plurality of sub-coded data.
Step S306, sending each sub-encoded data to the corresponding receiving end for decoding to obtain an image, wherein the image is displayed by the display device connected to the receiving end.
In the technical solution provided in step S306 of the present invention, after splitting the corresponding sub-coded data from the target coded data based on the target identifier, the sub-coded data may be sent to a corresponding receiving end for decoding, so as to obtain an image.
Optionally, in this embodiment, before the sub-coded data is sent to a corresponding receiving end to be decoded to obtain an image, the receiving end corresponding to the target identifier is determined.
In this embodiment, each target identifier may correspond to a receiving end, so that the plurality of target identifiers of the plurality of sub-encoded data may correspond to a plurality of receiving ends, and the correspondence between the target identifiers and the receiving ends may be preset, for example, target identifier 1 corresponds to receiving end R2, target identifier 2 corresponds to receiving end R1, target identifier 3 corresponds to receiving end R3, and target identifier 4 corresponds to receiving end R4.
The sub-coded data of this embodiment can be decoded by the corresponding receiving end to obtain the corresponding image. Therefore, the receiving end server of this embodiment may send the sub-encoded data identified by the target identifier 1 to the receiving end R2 for decoding, to obtain a corresponding image, send the sub-encoded data identified by the target identifier 2 to the receiving end R1 for decoding, to obtain a corresponding image, send the sub-encoded data identified by the target identifier 3 to the receiving end R3 for decoding, to obtain a corresponding image, and send the sub-encoded data identified by the target identifier 4 to the receiving end R4 for decoding, to obtain a corresponding image.
The image obtained by decoding the sub-encoded data through the receiving end of the embodiment may be displayed by a display device connected to the receiving end, where the display device may include a plurality of screens, and each screen may correspond to an image corresponding to one sub-encoded data, so that the embodiment may synchronously display corresponding images in each screen in the display device.
As an optional implementation manner, in step S304, splitting the corresponding sub-encoded data from the target encoded data based on the target identifier includes: determining the position of the target identifier in the target encoded data; storing the data associated with that position in the target encoded data into a data packet; and determining the data packet as the sub-encoded data.
When splitting the corresponding sub-encoded data from the target encoded data based on the target identifiers, this embodiment may first determine the positions of the target identifiers in the target encoded data, for example, positions corresponding to the head of each sub-encoded data, positions corresponding to both the head and the tail of each sub-encoded data, or positions corresponding to every data unit of each sub-encoded data. After the positions of the target identifiers in the target encoded data are determined, the data associated with those positions in the target encoded data may be stored into data packets, where each data packet is one sub-encoded data.
The method for storing the data associated with the position in the target encoded data into the data packet is further exemplified below.
As an alternative example, in the case where a target identifier is set at the position corresponding to the first data of the sub-coded data, when a target identifier is found in the target coded data, a data packet may be newly created. The data at the position of that target identifier in the target coded data, together with the data following that position, are all put into the data packet until another target identifier is found; the data packet is then determined as the sub-coded data corresponding to that target identifier.
Optionally, in the case where target identifiers are set at the positions corresponding to both the first data and the last data of the sub-coded data, when a new target identifier is found at a position in the target coded data, a data packet is newly created, and the data at that position together with the subsequent data in the target coded data are placed into the data packet. When the same target identifier is found again in the target coded data, this indicates that the data at the newly found position is the last data of the current sub-coded data; that data is also put into the data packet, the data packet is determined as the corresponding sub-coded data, and the next new data packet is prepared.
Optionally, in the case where the positions of all data of the sub-coded data are provided with the target identifier, when a position carrying a target identifier is found in the target coded data, it may first be determined whether a data packet for that target identifier already exists. If so, the data at that position in the target coded data is put into the existing data packet; if not, a new data packet is created and the data at that position is put into the newly created data packet. This continues until all data in the target coded data have been put into the corresponding data packets, and each data packet is then one piece of sub-coded data.
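The splitting logic described above can be illustrated with a minimal sketch. Below, the target coded data is modelled as a list of (identifier, payload) items, where the identifier is None for data that carries no target identifier; this in-memory representation and the function names are illustrative assumptions, not the actual bitstream layout.

```python
from collections import OrderedDict
from typing import List, Optional, Tuple

Item = Tuple[Optional[int], bytes]  # (target identifier or None, data item)


def split_by_head_marker(items: List[Item]) -> "OrderedDict[int, List[bytes]]":
    """Scheme 1: only the first data item of each piece of sub-coded data carries
    a target identifier; everything up to the next identifier joins the current packet."""
    packets: "OrderedDict[int, List[bytes]]" = OrderedDict()
    current: Optional[int] = None
    for ident, payload in items:
        if ident is not None:          # a new target identifier opens a new data packet
            current = ident
            packets.setdefault(current, [])
        if current is not None:
            packets[current].append(payload)
    return packets


def split_by_per_item_marker(items: List[Item]) -> "OrderedDict[int, List[bytes]]":
    """Scheme 3: every data item carries its target identifier, so data of different
    images may be interleaved and is simply grouped by identifier."""
    packets: "OrderedDict[int, List[bytes]]" = OrderedDict()
    for ident, payload in items:
        packets.setdefault(ident, []).append(payload)
    return packets


if __name__ == "__main__":
    stream: List[Item] = [(1, b"\x10"), (None, b"\x11"), (2, b"\x20"), (None, b"\x21")]
    print(split_by_head_marker(stream))
    # OrderedDict([(1, [b'\x10', b'\x11']), (2, [b'\x20', b'\x21'])])
```

The head-and-tail scheme can be handled analogously by closing the current packet when its identifier reappears.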
In the image processing method of this embodiment, since all images are encoded together to obtain the target encoded data, the receiving end server receives the target encoded data of all images regardless of the network transmission quality or bandwidth limitations. Each target identifier in the target encoded data enables the receiving end server to split the corresponding sub-encoded data from the target encoded data; each piece of sub-encoded data is sent to the corresponding receiving end for decoding, and each decoded image is displayed synchronously by the display device connected to the receiving ends. That is to say, even when the network transmission quality is very poor, the receiving end server either receives the sub-encoded data of all the images or receives the sub-encoded data of none of them, so the technical problem of multiple images being out of synchronization due to time delay or other transmission problems can be solved, thereby achieving the technical effect of multi-frame image synchronization.
Example 3
The technical solutions of the embodiments of the present invention will be further illustrated below with reference to preferred embodiments.
Fig. 4 is a schematic diagram of a large screen system according to the related art. As shown in fig. 4, the large screen system may include: image source device 41, image source device 42, image source device 43, image source device 44, management server 45, and large screen device 46. The image source device 41 is connected to the transmitting terminal S1', the image source device 42 is connected to the transmitting terminal S2', the image source device 43 is connected to the transmitting terminal S3', and the image source device 44 is connected to the transmitting terminal S4'. The transmitting terminals S1', S2', S3' and S4' transmit the acquired images of their image source devices to the management server 45. According to a preset large screen display mode, the management server 45 transmits the received image of the image source device 41 to the corresponding receiving end R2', where it is decoded and displayed on the connected screen A; transmits the received image of the image source device 42 to the corresponding receiving end R1', where it is decoded and displayed on the connected screen B; transmits the received image of the image source device 43 to the corresponding receiving end R3', where it is decoded and displayed on the connected screen C; and transmits the received image of the image source device 44 to the corresponding receiving end R4', where it is decoded and displayed on the connected screen D. The images displayed on screens A, B, C and D make up the entire image of the large screen device 46.
However, due to the delay of the transmission network, especially when the network is congested, the images displayed on the screens of the large screen device 46 may become unsynchronized, which affects the user experience. For example, some screens of the large screen device 46 display an image while the other screens display nothing; or some screens display the current frame image while the others still display the previous frame image.
In this embodiment, an acquisition end server may be arranged at the image source side. The acquisition end server combines the images of the plurality of image source devices acquired by the respective transmitting ends, encodes them as a whole frame of image, and generates target encoded data to be transmitted to the receiving end server. In the target encoded data, the sub-encoded data corresponding to each image is provided with a target identifier. The receiving end server can split the target encoded data according to the target identifiers to obtain the sub-encoded data of each image, send each sub-encoded data packet to the corresponding receiving end according to a preset allocation rule, decode the sub-encoded data at the receiving end to obtain the corresponding image, and then display the image on the corresponding screen of the large-screen device connected to the receiving end, so that the multiple frames of images are displayed synchronously on the screens of the large-screen device.
Therefore, the acquisition end server of the embodiment combines the images of the image source devices and encodes and transmits the combined images as a whole frame of image, so that even if the transmission network is congested, the receiving end can simultaneously receive all desktop image data, and the images on all screens of the large-screen device can be synchronously displayed.
Fig. 5 is a schematic diagram of a large screen system according to an embodiment of the present invention. As shown in fig. 5, the large screen system may include: image source device 51, image source device 52, image source device 53, image source device 54, a transmitting end S1 connected with the image source device 51, a transmitting end S2 connected with the image source device 52, a transmitting end S3 connected with the image source device 53, a transmitting end S4 connected with the image source device 54, an acquisition end server 55, a receiving end server 56, a receiving end R1, a receiving end R2, a receiving end R3, a receiving end R4, and a large-screen device 57, where the large-screen device comprises 4 screens.
Optionally, the transmitting ends (S1, S2, S3, S4) of this embodiment acquire images of the corresponding image source devices (image source device 51, image source device 52, image source device 53, image source device 54) and transmit the acquired images to the acquisition end server 55. The acquisition end server 55 encodes the received data of the multiple frames of images as a whole frame of image to generate the target encoded data, and labels the sub-encoded data corresponding to each image in the target encoded data with a target identifier. The receiving end server 56 may split the target encoded data into the sub-encoded data corresponding to each image according to the target identifiers, then send the sub-encoded data to the corresponding receiving ends (R1, R2, R3, R4) for decoding, and the decoded images are displayed on the corresponding screens of the large screen device 57.
Since the acquisition end server 55 combines all the images together and encodes them as one frame of image, the receiving end server 56 receives the sub-encoded data of all the desktop images together, regardless of the transmission quality or bandwidth limitations.
That is, even when the network transmission quality is very poor, the receiving end server 56 receives either the sub-encoded data of all the images or the sub-encoded data of none of them.
The above-described method of this embodiment is further described below.
S1, the transmitting ends (S1, S2, S3, S4) collect images of the image source devices (image source device 51, image source device 52, image source device 53, image source device 54), and send the collected images to the acquisition end server 55.
It should be noted that the sending end in the present invention is only used for collecting images, and the acquisition end server 55 encodes the multiple frames of images.
It should be further noted that the sending end may be built into the image source device or externally connected to it. The image source device may be a PC, a notebook, a tablet, a camera, or the like. In general, a sending end is capable of acquiring images of the image source device and may also encode the acquired images; in this embodiment, however, the encoding is performed by the acquisition end server 55.
For example, fig. 5 includes an image source device 51, an image source device 52, an image source device 53, and an image source device 54, which correspond to 4 transmitting terminals S1 to S4, respectively, where the 4 transmitting terminals are used to perform image acquisition on 4 image source devices, respectively.
S2, the acquisition end server 55 combines the data of each image into a whole frame of image data, and encodes the whole frame of image data to generate the target encoded data, wherein in the target encoded data, the sub-encoded data corresponding to each image is provided with a target identifier; the acquisition end server 55 then transmits the target encoded data to the receiving end server 56.
In this step, the acquisition end server 55 first splices the received data of all the images into a whole frame of image data. The splicing order may be arbitrary, and the sub-frame image data of each image in the whole frame of image data is provided with a target identifier.
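For illustration only, the splicing step can be sketched as combining four equally sized desktop images into one larger mosaic before encoding; the use of numpy and the 2x2 layout are assumptions made for the example, since the patent leaves the splicing order arbitrary.

```python
import numpy as np


def splice_whole_frame(images):
    """Splice four equally sized H x W x 3 images into one 2H x 2W x 3 whole frame."""
    top = np.hstack(images[0:2])
    bottom = np.hstack(images[2:4])
    return np.vstack([top, bottom])


# Example with four blank 1080p desktop images.
frames = [np.zeros((1080, 1920, 3), dtype=np.uint8) for _ in range(4)]
whole_frame = splice_whole_frame(frames)
print(whole_frame.shape)  # (2160, 3840, 3)
```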
Optionally, the embodiment may have a plurality of methods for setting the target identifier, which are specifically described as follows:
As an alternative example, a target identifier may be set on the head data of each piece of sub-frame image data. As shown in fig. 5, in the acquisition end server 55, the frame image data corresponding to the large image in the black frame is composed of the sub-frame image data corresponding to the four small images, and the head data of the sub-frame image data corresponding to each small image is provided with a target identifier, for example 1, 2, 3 or 4, so as to distinguish the sub-frame image data corresponding to each small image. It should be noted that with this method the sub-frame image data of the same image cannot be split apart, i.e. the data of one image must remain contiguous in the whole frame of image data.
As another alternative example, a target identifier may be set on both the head data and the tail data of each piece of sub-frame image data, as shown in fig. 6, where fig. 6 is a schematic diagram of setting a target identifier on the head data and the tail data of the sub-frame image data according to an embodiment of the present invention. As shown in fig. 6, the first data and the last data of the first sub-frame image data are provided with target identifier 1, the first data and the last data of the second sub-frame image data are provided with target identifier 2, the first data and the last data of the third sub-frame image data are provided with target identifier 3, and the first data and the last data of the fourth sub-frame image data are provided with target identifier 4. It should be noted that with this method, too, the sub-frame image data of the same image cannot be split apart.
As another alternative example, a target identifier may be set on every data item of the sub-frame image data, as shown in fig. 7. Fig. 7 is a schematic diagram of setting a target identifier on each data item of the sub-frame image data according to an embodiment of the present invention. As shown in fig. 7, each data item of the first sub-frame image data is provided with target identifier 1, each data item of the second sub-frame image data is provided with target identifier 2, each data item of the third sub-frame image data is provided with target identifier 3, and each data item of the fourth sub-frame image data is provided with target identifier 4. It should be noted that with this method the multiple pieces of sub-frame image data may be spliced in an interleaved manner.
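The three marking schemes above can be sketched as follows. The sub-frame image data is modelled as a list of byte chunks and each chunk is paired with an identifier (or None); this representation is an assumption made for illustration, not the on-the-wire format.

```python
def mark_head_only(subframes):
    """Scheme 1: the target identifier is attached only to the first chunk of each sub-frame."""
    out = []
    for ident, chunks in enumerate(subframes, start=1):
        out.append((ident, chunks[0]))
        out.extend((None, c) for c in chunks[1:])
    return out


def mark_head_and_tail(subframes):
    """Scheme 2: the target identifier is attached to the first and the last chunk of each sub-frame."""
    out = []
    for ident, chunks in enumerate(subframes, start=1):
        for i, c in enumerate(chunks):
            tag = ident if i in (0, len(chunks) - 1) else None
            out.append((tag, c))
    return out


def mark_every_chunk(subframes):
    """Scheme 3: the target identifier is attached to every chunk, so sub-frames may be interleaved."""
    out = []
    for ident, chunks in enumerate(subframes, start=1):
        out.extend((ident, c) for c in chunks)
    return out


# Example: two sub-frames, each split into three chunks of raw data.
subframes = [[b"a0", b"a1", b"a2"], [b"b0", b"b1", b"b2"]]
print(mark_head_only(subframes))
# [(1, b'a0'), (None, b'a1'), (None, b'a2'), (2, b'b0'), (None, b'b1'), (None, b'b2')]
```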
Then, the acquisition end server encodes the spliced whole frame of data. During encoding, whenever a target identifier is encountered, a corresponding identifier is set in the encoded data, so that the large-picture encoded data, i.e. the target encoded data, is finally generated.
It should be noted that, in the large-picture encoded data, the small-picture encoded data of each desktop image can be extracted by means of the identifiers set in the data. The target identifier may be, for example, an image ID.
Finally, the acquisition end server 55 transmits the target encoded data to the receiving end server 56.
S3, the receiving end server 56 splits the target encoded data into the sub-encoded data corresponding to the multiple frames of images according to the target identifiers in the target encoded data, and sends each piece of sub-encoded data to the receiving end corresponding to its target identifier according to a preset allocation rule.
The allocation rule may include a correspondence between the target identifiers and the receiving ends.
In this step, the receiving end server 56 may first split the target encoded data into the sub-encoded data corresponding to the multiple frames of images according to the target identifiers in the target encoded data.
Corresponding to the manner in which the sub-frame image data is marked in S2, the processing manner of the receiving end server 56 may include the following.
As an alternative implementation, in the scenario where only the first data of each piece of sub-frame image data (sub-coded data) is provided with a target identifier: in the target encoded data, when target identifier 1 is found, a data packet is newly created, the data where target identifier 1 is located and the subsequent data are placed into the data packet until another target identifier is found, and the data packet is marked as 1.
As another alternative implementation, in the case where both the first data and the last data of each piece of sub-frame image data (sub-coded data) are provided with a target identifier: when a new target identifier 1 is found, a data packet is newly created, and the data where target identifier 1 is located and the following data are put into the data packet; when target identifier 1 is found again, it indicates that the current data is the last data of the current sub-coded data, so the data where target identifier 1 is located is put into the data packet, the data packet is marked as 1, and a new data packet is prepared.
As another alternative implementation, in the scenario where all data of the sub-frame image data (sub-coded data) is provided with a target identifier: when target identifier 1 is found, it is determined whether a data packet for target identifier 1 already exists; if so, the data where target identifier 1 is located is put into that data packet; if not, a data packet is newly created, the data where target identifier 1 is located is put into the newly created data packet, and the data packet is marked as 1. Each data packet so obtained is one piece of sub-coded data.
Then, according to a preset allocation rule, the embodiment transmits the split sub-coded data to a receiving end corresponding to the target identifier.
Optionally, the allocation rule in this embodiment includes a correspondence between the target identifiers and the receiving ends, and the user may set the allocation rule in advance as required. For example, as shown in fig. 5, the allocation rule may include: target identifier 1 corresponds to R2, target identifier 2 corresponds to R1, target identifier 3 corresponds to R3, and target identifier 4 corresponds to R4. In this way, the receiving end server 56 transmits the sub-coded data of target identifier 1 to R2, the sub-coded data of target identifier 2 to R1, the sub-coded data of target identifier 3 to R3, and the sub-coded data of target identifier 4 to R4.
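A minimal sketch of this dispatch step is given below, assuming the allocation rule of fig. 5 and a hypothetical send() callable standing in for the network transport to the receiving ends.

```python
# The allocation rule of fig. 5: target identifier -> receiving end.
ALLOCATION_RULE = {1: "R2", 2: "R1", 3: "R3", 4: "R4"}


def dispatch(packets, send):
    """packets: {target identifier: sub-coded data}; send(receiving_end, data) delivers one packet."""
    for ident, sub_coded_data in packets.items():
        receiving_end = ALLOCATION_RULE[ident]
        send(receiving_end, sub_coded_data)


# Example: print where each packet would go instead of sending it over the network.
dispatch({1: b"...", 2: b"...", 3: b"...", 4: b"..."},
         lambda receiving_end, data: print(receiving_end, len(data)))
```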
And S4, the receiving end decodes the received coded data and sends the image generated by decoding to a connected screen for displaying.
In this step, the receiving end decodes the received encoded data and sends the image generated by decoding to the connected screen for display.
It can be understood that each receiving end of this embodiment receives sub-coded data belonging to the same image, and the receiving end can obtain an image after decoding, and then send the image to the screen of the large-screen device 57 connected to the receiving end for displaying.
The image processing method of this embodiment is a large-screen image processing method directed at image asynchronization. Since all images are encoded together to obtain the target encoded data, the receiving end server receives the target encoded data of all images regardless of the network transmission quality or bandwidth limitations. Each target identifier in the target encoded data enables the receiving end server to split the corresponding sub-encoded data from the target encoded data; each piece of sub-encoded data is sent to the corresponding receiving end for decoding, and each decoded image is displayed synchronously by the display device connected to the receiving ends. That is to say, even when the network transmission quality is very poor, the receiving end server either receives the sub-encoded data of all the images or receives the sub-encoded data of none of them, so the technical problem of multiple images being out of synchronization due to time delay or other transmission problems can be solved, thereby achieving the technical effect of multi-frame image synchronization.
It should be noted that, in a scenario where the encoding mode is progressive, if the transmission quality is poor, the receiving end server may receive only the encoded data of some layers of all the desktop image data, and all screens of the large-screen device then display a lower-quality (blurred) picture of the same frame. Even in this case, the problem of multiple images being out of synchronization due to time delay or other transmission problems is avoided.
Example 4
The embodiment of the invention also provides an image processing device. It should be noted that the image processing apparatus of this embodiment can be used to execute the image processing method shown in fig. 2 in embodiment 2 of the present invention.
Fig. 8 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention. As shown in fig. 8, the image processing apparatus 80 may include: an acquisition unit 81, an encoding unit 82, and a transmission unit 83.
The obtaining unit 81 is configured to obtain one frame of image from each sending end, so as to obtain multiple frames of images, where the image obtained from each sending end is collected from an image source device connected to the sending end.
And the encoding unit 82 is configured to encode multiple frames of images to obtain target encoded data, where the target encoded data includes multiple sub-encoded data in one-to-one correspondence with the multiple frames of images, and the sub-encoded data is provided with a target identifier.
And a sending unit 83, configured to send the target encoded data to the receiving end server, where the target identifier is used to enable the receiving end server to split corresponding sub-encoded data from the target encoded data, and send the sub-encoded data to the corresponding receiving end for decoding, and a decoded image is displayed by a display device connected to the receiving end.
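For illustration only, the cooperation of the obtaining unit, the encoding unit and the sending unit can be sketched as a small class with the three units injected as callables; the callables and their signatures are assumptions made for the example and do not correspond to a concrete implementation in this application.

```python
class ImageProcessingApparatus:
    """Sender-side apparatus of fig. 8 with the three units as injected callables."""

    def __init__(self, acquire, encode, send):
        self.acquire = acquire   # obtaining unit: () -> list of frames, one per sending end
        self.encode = encode     # encoding unit: frames -> target encoded data with identifiers
        self.send = send         # sending unit: pushes the target encoded data to the receiving end server

    def process_once(self):
        frames = self.acquire()
        target_coded_data = self.encode(frames)
        self.send(target_coded_data)
```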
The embodiment of the invention also provides another image processing device. It should be noted that the image processing apparatus of this embodiment can be used to execute the image processing method shown in fig. 3 in embodiment 2 of the present invention.
Fig. 9 is a schematic diagram of another image processing apparatus according to an embodiment of the present invention. As shown in fig. 9, the image processing apparatus 90 may include: a receiving unit 91, a splitting unit 92 and a first sending unit 93.
The receiving unit 91 is configured to receive target encoded data, where the target encoded data are obtained by encoding a multi-frame image by an acquisition end server and include a plurality of sub-encoded data corresponding to the multi-frame image one-to-one, the multi-frame image is obtained by the acquisition end server from each transmitting end, the image obtained from each transmitting end is acquired from an image source device connected to the transmitting end, and the sub-encoded data are provided with a target identifier.
And the splitting unit 92 is configured to split corresponding sub-coded data from the target coded data based on the target identifier.
And a first sending unit 93, configured to send the sub-coded data to a corresponding receiving end for decoding, so as to obtain an image, where the image is displayed by a display device connected to the receiving end.
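A corresponding sketch of the receiver-side apparatus of fig. 9 is given below; receive, split_by_identifier and forward are placeholder callables assumed for illustration, standing in for the receiving unit, the splitting unit and the first sending unit.

```python
class ReceivingSideApparatus:
    """Receiver-side apparatus of fig. 9 with the three units as injected callables."""

    def __init__(self, receive, split_by_identifier, forward):
        self.receive = receive                           # receiving unit: () -> target encoded data
        self.split_by_identifier = split_by_identifier   # splitting unit: data -> {identifier: sub-coded data}
        self.forward = forward                           # first sending unit: (identifier, sub-coded data) -> None

    def process_once(self):
        target_coded_data = self.receive()
        packets = self.split_by_identifier(target_coded_data)
        for ident, sub_coded_data in packets.items():
            self.forward(ident, sub_coded_data)
```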
In the image processing apparatus of this embodiment, since all images are encoded together to obtain the target encoded data, the receiving end server receives the target encoded data of all images regardless of the network transmission quality or bandwidth limitations. Each target identifier in the target encoded data enables the receiving end server to split the corresponding sub-encoded data from the target encoded data; each piece of sub-encoded data is sent to the corresponding receiving end for decoding, and each decoded image is displayed synchronously by the display device connected to the receiving ends. In this way, the problem of multiple images being out of synchronization due to time delay or other transmission problems is avoided, the technical problem that multiple frames of images are difficult to display synchronously is solved, and the technical effect of synchronous display of the multiple frames of images is achieved.
Example 5
According to an embodiment of the present invention, there is also provided a computer-readable storage medium. The computer readable storage medium includes a stored program, wherein the apparatus in which the computer readable storage medium is located is controlled to execute the image processing method according to the embodiment of the present invention when the program runs.
Example 6
According to an embodiment of the present invention, there is also provided a processor configured to execute a program, where the program executes an image processing method according to an embodiment of the present invention.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described apparatus embodiments are merely illustrative; for example, the division of the units may be a logical division, and in actual implementation there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.

Claims (12)

1. An image processing method, comprising:
respectively acquiring a frame of image from each transmitting end so as to obtain a plurality of frames of images, wherein the images acquired from each transmitting end are acquired from image source equipment connected with the transmitting end;
coding the multi-frame images to obtain target coded data, wherein the target coded data comprise a plurality of sub-coded data which correspond to the multi-frame images one by one, and the sub-coded data are provided with target identifiers;
and sending the target coded data to a receiving end server, wherein the target identification is used for enabling the receiving end server to split the corresponding sub-coded data from the target coded data, sending the sub-coded data to the corresponding receiving end for decoding, and displaying the image obtained by decoding by a display device connected with the receiving end.
2. The method according to claim 1, wherein encoding the plurality of frames of images to obtain target encoded data comprises:
converting the multi-frame image into frame image data;
and coding the frame image data to obtain the target coded data.
3. The method according to claim 2, wherein after converting the plurality of frame images into frame image data, the method further comprises:
setting the target identifier corresponding to each of the sub-coded data in the frame image data.
4. The method according to claim 3, wherein the setting of the target identifier corresponding to each sub-coded data in the frame image data comprises at least one of:
adding a target identifier of each subframe image data into the header information of the frame image data, wherein the target identifiers are arranged in the header information according to the arrangement sequence of the subframe image data, meanwhile, a start marker of the current subframe image data is added to the header information of each subframe image data, and an end marker is set on the tail data of each subframe image data;
adding a target mark of each subframe image data and data length information of each subframe image data into header information of the frame image data;
setting the corresponding target mark on the first data of the sub-frame image data;
respectively setting corresponding target marks on the head data and the tail data of the sub-frame image data;
and setting the corresponding target identification on each data of the sub-frame image data.
5. The method of claim 2, wherein encoding the frame image data to obtain the target encoded data comprises:
coding each subframe image data in the frame image data to obtain a plurality of sub-coded data;
and generating the target coded data by using the plurality of sub-coded data respectively provided with the target identifier.
6. An image processing method, comprising:
receiving target coded data, wherein the target coded data are obtained by coding a multi-frame image by an acquisition end server and comprise a plurality of sub-coded data which correspond to the multi-frame image one by one, the multi-frame image is obtained by the acquisition end server from each transmitting end, the image obtained from each transmitting end is acquired from an image source device connected with the transmitting end, and the sub-coded data are provided with target identifiers;
splitting the corresponding sub-coded data from the target coded data based on the target identification;
and sending the sub-coded data to a corresponding receiving end for decoding to obtain the image, wherein the image is displayed by a display device connected with the receiving end.
7. The method of claim 6, wherein before sending the sub-coded data to a corresponding receiving end for decoding, the method further comprises:
and determining the receiving end corresponding to the target identification.
8. The method of claim 6, wherein splitting the corresponding sub-coded data from the target-coded data based on the target identification comprises:
determining the position of the target identification in the target coded data;
storing data associated with the position in the target encoding data into a data packet;
determining the data packet as the sub-coded data.
9. An image processing system, comprising:
the image source devices are respectively used for generating images to obtain multi-frame images;
the transmitting ends are connected with the image source devices in a one-to-one correspondence mode and used for respectively acquiring the multi-frame images;
the acquisition end server is connected with the plurality of sending ends and is used for coding the multi-frame images to obtain target coded data, wherein the target coded data comprise a plurality of sub-coded data which correspond to the multi-frame images one by one, and the sub-coded data are provided with target identifications;
the receiving end server is connected with the acquisition end server and is used for splitting the corresponding sub-coded data from the target coded data based on the target identification;
the receiving terminals are connected with the receiving terminal server, wherein each receiving terminal is used for decoding the corresponding sub-coded data to obtain the image;
and the display equipment is connected with each receiving end and used for displaying the image.
10. An image processing apparatus characterized by comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for respectively acquiring a frame of image from each transmitting end so as to obtain a plurality of frames of images, and the images acquired from each transmitting end are acquired from image source equipment connected with the transmitting end;
the encoding unit is used for encoding the multi-frame images to obtain target encoded data, wherein the target encoded data comprise a plurality of sub-encoded data which correspond to the multi-frame images one by one, and the sub-encoded data are provided with target identifiers;
and the sending unit is used for sending the target coded data to a receiving end server, wherein the target identifier is used for enabling the receiving end server to split the corresponding sub-coded data from the target coded data, the sub-coded data are sent to the corresponding receiving end to be decoded, and the image obtained by decoding is displayed by a display device connected with the receiving end.
11. A computer-readable storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform the method of any one of claims 1 to 8.
12. A processor, characterized in that the processor is configured to run a program, wherein the program when running performs the method of any of claims 1 to 8.
CN202010904660.8A 2020-09-01 2020-09-01 Image processing method, system, device, storage medium and processor Pending CN112019853A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010904660.8A CN112019853A (en) 2020-09-01 2020-09-01 Image processing method, system, device, storage medium and processor

Publications (1)

Publication Number Publication Date
CN112019853A true CN112019853A (en) 2020-12-01

Family

ID=73516646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010904660.8A Pending CN112019853A (en) 2020-09-01 2020-09-01 Image processing method, system, device, storage medium and processor

Country Status (1)

Country Link
CN (1) CN112019853A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1949166A (en) * 2006-11-09 2007-04-18 上海大学 Free multi visul point polyprojecting 3D displaying system and method
CN107197120A (en) * 2017-05-27 2017-09-22 电子科技大学 Image source compatibility testing method and system
CN108471513A (en) * 2018-03-28 2018-08-31 国网辽宁省电力有限公司信息通信分公司 Video fusion method, apparatus and server
CN111447339A (en) * 2020-03-26 2020-07-24 西安万像电子科技有限公司 Image transmission method and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114630007A (en) * 2020-12-11 2022-06-14 华为技术有限公司 Display synchronization method, electronic device and readable storage medium
CN114630007B (en) * 2020-12-11 2024-04-26 华为技术有限公司 Display synchronization method, electronic device and readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination