CN113163214A - Video processing method and device - Google Patents

Video processing method and device

Info

Publication number: CN113163214A
Authority: CN (China)
Prior art keywords: video, video stream, displayed, processed, streams
Legal status: Pending
Application number: CN202010076016.6A
Other languages: Chinese (zh)
Inventors: 郑洛, 王志兵
Current Assignee: Huawei Technologies Co Ltd
Original Assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Priority to CN202010076016.6A
Priority to PCT/CN2021/071220 (published as WO2021147702A1)

Classifications

    • H04N21/218 — Source of audio or video content, e.g. local disk arrays
    • H04N21/21805 — Source of audio or video content enabling multiple viewpoints, e.g. using a plurality of cameras
    • H04N21/234 — Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23424 — Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N21/2343 — Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234363 — Reformatting operations by altering the spatial resolution, e.g. for clients with a lower screen resolution
    • H04N21/44 — Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016 — Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/4402 — Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440263 — Reformatting operations by altering the spatial resolution, e.g. for displaying on a connected PDA
    • H04N7/18 — Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

Embodiments of this application disclose a video processing method and apparatus. The method is applied to a first video processing device and includes the following steps: acquiring video layout parameters of a terminal, where the video layout parameters indicate identification information of at least two to-be-displayed video streams required by the terminal and the resolution of each to-be-displayed video stream; acquiring the at least two to-be-displayed video streams according to the video layout parameters; and synthesizing the at least two to-be-displayed video streams into one target video stream and displaying the target video stream on the terminal. By implementing the embodiments of this application, a target video stream synthesized from at least two to-be-displayed video streams can be displayed on the terminal.

Description

Video processing method and device
Technical Field
The present application relates to the field of multimedia technologies, and in particular, to a video processing method and apparatus.
Background
With the development of multimedia technology, cameras placed at different positions or angles can be used to shoot the same scene, that is, multi-camera shooting. Multi-camera shooting makes it possible to understand the situation at the scene more comprehensively and clearly.
Currently, a director selects one picture from the pictures shot at the different camera positions and pushes it to a terminal for display, so the terminal cannot display multiple pictures at the same time.
Disclosure of Invention
Embodiments of this application provide a video processing method and a video processing apparatus, which make it possible to display, on a terminal, a target video stream synthesized from at least two to-be-displayed video streams, where the picture presented on the terminal when the target video stream is displayed is formed by splicing at least two sub-pictures.
In a first aspect, an embodiment of this application provides a video processing method. The method is applied to a first video processing apparatus and includes: acquiring video layout parameters of a terminal, where the video layout parameters indicate identification information of at least two to-be-displayed video streams required by the terminal and the resolution of each to-be-displayed video stream; acquiring the at least two to-be-displayed video streams according to the video layout parameters; and synthesizing the at least two to-be-displayed video streams into one target video stream and displaying the target video stream on the terminal.
In this technical solution, the terminal can display a target video stream synthesized from at least two to-be-displayed video streams, and the picture presented on the terminal when the target video stream is displayed is formed by splicing at least two sub-pictures. Moreover, because the target video stream is a single video stream, the terminal needs to perform the decapsulation operation only once and needs only one video player to display multiple sub-pictures.
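As a concrete illustration, the following is a minimal sketch of this first-aspect flow. The patent defines no concrete data structures or APIs, so every name below (StreamLayout, fetch_stream, compose, and so on) is an assumption introduced only for illustration.

```python
# Illustrative sketch only: all names here are assumptions, not the patent's API.
from dataclasses import dataclass
from typing import List

@dataclass
class StreamLayout:
    stream_id: str   # identification information of one to-be-displayed stream
    width: int       # required resolution of that stream
    height: int

def fetch_stream(layout: StreamLayout) -> dict:
    # Stand-in for requesting one to-be-displayed stream from a service device.
    return {"id": layout.stream_id, "resolution": (layout.width, layout.height)}

def compose(streams: List[dict]) -> dict:
    # Stand-in for splicing the sub-pictures into one target video stream.
    return {"type": "target_stream", "sub_pictures": streams}

def handle_composition_request(layouts: List[StreamLayout]) -> dict:
    """First-aspect flow: obtain the layout parameters, fetch each stream,
    and synthesize one target stream for the terminal."""
    assert len(layouts) >= 2, "the method requires at least two streams"
    return compose([fetch_stream(l) for l in layouts])

print(handle_composition_request(
    [StreamLayout("cam-1", 1280, 720), StreamLayout("cam-2", 640, 360)]))
```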
In one implementation, a specific way of obtaining the video layout parameters of the terminal may be: receiving a video stream composition request sent by the terminal, where the video stream composition request includes the video layout parameters of the terminal.
In this technical solution, because the terminal sends the video layout parameters to the first video processing device, the identification information or the resolution of the video streams that the terminal needs to display can be changed as required.
In one implementation, a specific way of obtaining the at least two to-be-displayed video streams according to the video layout parameters may be: sending a video stream acquisition request to a first service device, where the video stream acquisition request includes the identification information of the at least two to-be-displayed video streams and the resolution of each to-be-displayed video stream; and receiving the at least two to-be-displayed video streams returned by the first service device.
In one implementation, there may be multiple first service devices, and different to-be-displayed video streams may come from different first service devices.
In this technical solution, when there are multiple first service devices, the different to-be-displayed video streams acquired by the first video processing device may come from different first service devices. In this way, different to-be-displayed video streams can be acquired from different first service devices in parallel, which improves the efficiency of acquiring the at least two to-be-displayed video streams.
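A sketch of this parallel acquisition is shown below, assuming simple thread-based concurrency; the device names, stream identifiers, and fetch function are illustrative placeholders, not part of the patent.

```python
# Hypothetical endpoints and identifiers; only the concurrency pattern is the
# point: each to-be-displayed stream is fetched from its own first service
# device at the same time.
from concurrent.futures import ThreadPoolExecutor

def fetch_from_device(device: str, stream_id: str, resolution: str) -> dict:
    # Placeholder for a network request to one first service device.
    return {"device": device, "stream": stream_id, "resolution": resolution}

requests = [
    ("first-service-device-a", "cam-1", "1280x720"),
    ("first-service-device-b", "cam-2", "640x360"),
]
with ThreadPoolExecutor() as pool:
    streams = list(pool.map(lambda r: fetch_from_device(*r), requests))
print(streams)  # both streams are acquired in parallel
```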
In one implementation, a specific way of obtaining the at least two to-be-displayed video streams according to the video layout parameters may be: for the identification information of each of the at least two to-be-displayed video streams, acquiring multiple processed video streams corresponding to the identification information, where the resolutions of the multiple processed video streams differ from one another and each of the multiple processed video streams has the same image content as the to-be-displayed video stream; and using, as the to-be-displayed video stream, the processed video stream whose resolution is the same as that of the to-be-displayed video stream.
In one implementation, the multiple processed video streams may be stored in a local database.
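For example, the rendition selection described above might look like the following sketch, where the in-memory dictionary standing in for the local database, and the stream identifiers and URIs, are assumptions for illustration.

```python
# The local "database" below is a mocked assumption, not the patent's storage.
local_db = {
    "cam-1": [
        {"resolution": (1920, 1080), "uri": "cam-1/1080p"},
        {"resolution": (1280, 720),  "uri": "cam-1/720p"},
        {"resolution": (640, 360),   "uri": "cam-1/360p"},
    ],
}

def select_rendition(stream_id: str, resolution: tuple) -> dict:
    # Pick the processed video stream whose resolution matches the requested one.
    for rendition in local_db[stream_id]:
        if rendition["resolution"] == resolution:
            return rendition
    raise LookupError(f"no processed stream of {stream_id} at {resolution}")

print(select_rendition("cam-1", (1280, 720)))  # {'resolution': (1280, 720), ...}
```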
In one implementation, a specific way of obtaining the at least two to-be-displayed video streams according to the video layout parameters may be: for the identification information of each of the at least two to-be-displayed video streams, sending a sub-video stream acquisition request to a second service device, where the sub-video stream acquisition request includes the identification information and the resolution of the to-be-displayed video stream; receiving the multiple sub-video streams that correspond to the identification information and the resolution and are returned by the second service device; and synthesizing the multiple sub-video streams into the to-be-displayed video stream.
In this technical solution, when a to-be-displayed video stream that the terminal needs to display consists of multiple sub-video streams, the multiple sub-video streams are synthesized into one complete to-be-displayed video stream. The synthesized to-be-displayed video streams are then combined into the target video stream that the user expects to display on the terminal, and the picture presented on the terminal when the target video stream is displayed is formed by splicing at least two sub-pictures. In this way, the user can view at least two sub-pictures on the terminal at the same time.
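If the sub-video streams are spatial tiles of each frame, the reassembly step might look like this sketch; the 2x2 grid is an assumption, since the patent leaves the division scheme to the video division information.

```python
# Reassembling one to-be-displayed frame from spatial tiles returned as
# sub-video streams; the 2x2 grid is an assumed division scheme.
import numpy as np

def stitch_tiles(tiles, grid=(2, 2)):
    rows, cols = grid
    # Join each row of tiles horizontally, then stack the rows vertically.
    row_imgs = [np.hstack(tiles[r * cols:(r + 1) * cols]) for r in range(rows)]
    return np.vstack(row_imgs)

tiles = [np.full((180, 320, 3), i, dtype=np.uint8) for i in range(4)]
frame = stitch_tiles(tiles)
print(frame.shape)  # (360, 640, 3): one complete to-be-displayed frame
```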
In one implementation, a specific way of obtaining the at least two to-be-displayed video streams according to the video layout parameters may be: for the identification information of each of the at least two to-be-displayed video streams, sending an index acquisition request carrying the identification information to a third service device, and receiving the indexes of the multiple processed video streams corresponding to the identification information and the resolution of each processed video stream returned by the third service device; determining a target index from the indexes of the multiple processed video streams, where the resolution of the processed video stream corresponding to the target index is the same as the resolution of the to-be-displayed video stream; and sending a stream acquisition request carrying the target index to the third service device, receiving the processed video stream corresponding to the target index returned by the third service device, and using that processed video stream as the to-be-displayed video stream.
In this technical solution, compared with acquiring all the processed video streams corresponding to the identification information and then determining the to-be-displayed video stream, determining the to-be-displayed video stream from the indexes of the multiple processed video streams and the resolution corresponding to each index reduces the amount of data transmitted between the first video processing device and the third service device.
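The two-step exchange reads like the sketch below; both server responses and all identifiers are mocked assumptions. The point is that the index list is small, so only the one matching processed stream crosses the network.

```python
# Mocked responses; the index format "cam-1#360p" is purely illustrative.
def get_indexes(stream_id: str) -> list:
    # Mocked response to the "index acquisition request".
    return [{"index": f"{stream_id}#{h}p", "resolution": (w, h)}
            for (w, h) in [(1920, 1080), (1280, 720), (640, 360)]]

def get_stream(index: str) -> dict:
    # Mocked response to the "stream acquisition request".
    return {"index": index, "payload": b"..."}

def fetch_by_index(stream_id: str, wanted: tuple) -> dict:
    target = next(e for e in get_indexes(stream_id) if e["resolution"] == wanted)
    return get_stream(target["index"])

print(fetch_by_index("cam-1", (640, 360))["index"])  # cam-1#360p
```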
In one implementation, each to-be-displayed video stream includes multiple frames of images, and each frame of image carries a playing time. In that case, synthesizing the at least two to-be-displayed video streams into one target video stream may be implemented as follows: synthesizing the images that have the same playing time in the at least two to-be-displayed video streams into one frame of target image, where all the target images form one target video stream.
In this technical solution, images with the same playing time are images captured at the same moment. This ensures that the frame images constituting each target image were captured at the same moment; in other words, when the target video stream is displayed on the terminal, the sub-pictures displayed simultaneously on the terminal are pictures from the same moment.
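A minimal sketch of this same-playing-time composition, assuming a simple side-by-side splice (the patent does not fix the layout):

```python
# Side-by-side splicing is an assumed layout.
import numpy as np

def compose_by_play_time(streams):
    """streams: list of dicts mapping play_time -> frame (H x W x 3 array).
    Returns play_time -> target frame, splicing same-play-time frames."""
    common_times = sorted(set.intersection(*(set(s) for s in streams)))
    return {t: np.hstack([s[t] for s in streams]) for t in common_times}

a = {0: np.zeros((360, 640, 3), np.uint8), 40: np.zeros((360, 640, 3), np.uint8)}
b = {0: np.ones((360, 640, 3), np.uint8), 40: np.ones((360, 640, 3), np.uint8)}
target = compose_by_play_time([a, b])
print(sorted(target))    # [0, 40]: one target frame per shared playing time
print(target[0].shape)   # (360, 1280, 3): two sub-pictures spliced side by side
```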
In one implementation, the at least two to-be-displayed video streams may include a first to-be-displayed video stream and a second to-be-displayed video stream. If the resolution of the first to-be-displayed video stream is higher than that of the second, the display area occupied by the first to-be-displayed video stream on the terminal may be larger than the display area occupied by the second.
In this technical solution, the video stream that occupies the larger display area on the terminal can have the higher resolution; that is, the video stream occupying the larger display area is clearer.
In a second aspect, an embodiment of this application provides another video processing method, applied to a second video processing apparatus. The method includes: determining at least two resolutions and acquiring a to-be-processed video stream; and adjusting the resolution of the to-be-processed video stream to obtain at least two processed video streams, where the resolution of each of the at least two processed video streams is the same as one of the at least two resolutions, and each of the at least two processed video streams has the same image content as the to-be-processed video stream.
In this technical solution, adjusting the resolution of the to-be-processed video stream yields at least two processed video streams with different resolutions and the same image content, which better satisfies the terminal's resolution requirements for displayed video streams.
In one implementation, after the at least two processed video streams are obtained, the method may further include: sending the at least two processed video streams to one or more first service devices, where at least one of the processed video streams exists on each first service device.
In one implementation, the aforementioned at least two resolutions are preset.
In one implementation, the specific implementation of determining at least two resolutions may be: and receiving a first instruction sent by the service equipment, wherein the first instruction is used for indicating the at least two resolutions.
In one implementation, there are at least two to-be-processed video streams, each to-be-processed video stream includes multiple frames of images, and each frame of image carries a capture time. The method may further include: synchronously processing, in the at least two to-be-processed video streams, the at least two frames of images whose capture times fall within the same synchronization window, so that after the synchronous processing those images all carry the same playing time.
In this technical solution, the at least two frames of images whose capture times fall within the same synchronization window are, in effect, images captured at the same moment; synchronizing them so that they carry the same playing time helps the terminal display, together, the images that were captured at the same moment.
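One plausible reading of this windowing step is sketched below, with an assumed 40 ms window (one frame interval at 25 fps); the window length, the window-assignment rule, and the device names are not from the patent.

```python
# The 40 ms window and the floor-division assignment are assumptions.
WINDOW_MS = 40

def synchronize(frames):
    """frames: list of (stream_id, capture_time_ms).
    Returns (stream_id, capture_time_ms, play_time_ms) triples, where frames
    whose capture times fall in the same window share one playing time."""
    out = []
    for stream_id, capture_ms in frames:
        window_start = (capture_ms // WINDOW_MS) * WINDOW_MS
        out.append((stream_id, capture_ms, window_start))
    return out

# Frames captured a few ms apart land in the same window -> same playing time.
print(synchronize([("cam-1", 1001), ("cam-2", 1004), ("cam-3", 1039)]))
```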
In one implementation, the method may further include: acquiring video division information corresponding to each of the at least two resolutions; and, for each of the at least two processed video streams, dividing the processed video stream into multiple sub-video streams according to the video division information corresponding to the resolution of that processed video stream.
In this technical solution, dividing a processed video stream into multiple sub-video streams means that one complete processed video stream consists of multiple sub-video streams, and the different sub-video streams making up the same processed video stream can be sent to multiple second service devices. In this way, when the terminal needs to display the processed video stream, the different sub-video streams constituting it can be acquired from different second service devices in parallel, which improves the efficiency of acquiring the processed video stream. A sketch of the division and distribution follows the next paragraph.
In one implementation, after the processed video stream is divided into multiple sub-video streams, the method may further include: sending the multiple sub-video streams to one or more second service devices, where at least one sub-video stream exists on each second service device.
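As referenced above, here is a minimal sketch of dividing each frame into tiles (one tile per sub-video stream) and assigning the sub-streams to second service devices; the 2x2 grid, the round-robin assignment, and the device names are all assumptions.

```python
# Assumed division scheme and assignment policy; not prescribed by the patent.
import numpy as np

def split_frame(frame, grid=(2, 2)):
    # Divide one frame into tiles, one tile per sub-video stream.
    rows, cols = grid
    th, tw = frame.shape[0] // rows, frame.shape[1] // cols
    return [frame[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            for r in range(rows) for c in range(cols)]

def distribute(tiles, devices):
    # Round-robin so every second service device holds at least one sub-stream.
    assignment = {d: [] for d in devices}
    for i, tile in enumerate(tiles):
        assignment[devices[i % len(devices)]].append(tile.shape)
    return assignment

frame = np.zeros((360, 640, 3), dtype=np.uint8)
print(distribute(split_frame(frame), ["second-service-device-1",
                                      "second-service-device-2"]))
```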
In a third aspect, an embodiment of the present application provides a first video processing apparatus, which is a first video processing device or an apparatus (e.g., a chip) having a function of the first video processing device. The apparatus has a function of implementing the video processing method provided by the first aspect, and the function is implemented by hardware or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In a fourth aspect, an embodiment of the present application provides a second video processing apparatus, which is a second video processing device or an apparatus (e.g., a chip) having a function of the second video processing device. The apparatus has a function of implementing the video processing method provided by the second aspect, and the function is implemented by hardware or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In a fifth aspect, the present application provides another first video processing apparatus, which is a first video processing device or an apparatus (e.g., a chip) having the function of the first video processing device. The apparatus includes a processor and a storage medium, where instructions are stored in the storage medium, and when the instructions are executed by the processor, the apparatus is enabled to implement the video processing method provided by the first aspect.
In a sixth aspect, the present application provides another second video processing apparatus, which is a second video processing device or an apparatus (e.g. a chip) having the function of a second video processing device, and includes a processor and a storage medium, where instructions are stored in the storage medium, and when executed by the processor, the instructions cause the apparatus to implement the video processing method provided in the second aspect.
In a seventh aspect, an embodiment of the present application provides a video processing system, where the video processing system includes the first video processing apparatus described in the third aspect and the second video processing apparatus described in the fourth aspect, or the video processing system includes the first video processing apparatus described in the fifth aspect and the second video processing apparatus described in the sixth aspect.
In an eighth aspect, an embodiment of the present application provides a computer-readable storage medium for storing computer program instructions used by the first video processing apparatus described in the third aspect, which includes a program for executing the method of the first aspect.
In a ninth aspect, an embodiment of the present application provides a computer-readable storage medium for storing computer program instructions for use by the second video processing apparatus described in the fourth aspect, which includes a program for performing the method of the second aspect.
In a tenth aspect, an embodiment of the present application provides a computer program product, where the program product includes a program, and when the program is executed by a first video processing apparatus, the apparatus is caused to implement the method described in the first aspect.
In an eleventh aspect, the present application provides a computer program product, which includes a program that, when executed by a second video processing apparatus, causes the apparatus to implement the method described in the second aspect.
Drawings
fig. 1 is a schematic diagram of an architecture of a video processing system according to an embodiment of the present application;
fig. 2a is a schematic flow chart of a video processing method according to an embodiment of the present application;
fig. 2b is a schematic view of a scene of performing resolution adjustment on a to-be-processed image in a to-be-processed video stream according to an embodiment of the present application;
fig. 3a is a schematic flow chart of another video processing method according to an embodiment of the present application;
fig. 3b is a schematic view of a scene of performing synchronous processing on image 1, image 2 and image 3 according to an embodiment of the present application;
fig. 4a is a schematic flow chart of another video processing method according to an embodiment of the present application;
fig. 4b is a schematic diagram of a scene of dividing a processed video stream according to an embodiment of the present application;
fig. 5a is a schematic flow chart of another video processing method according to an embodiment of the present application;
fig. 5b is a schematic view of a scene of combining to-be-displayed video stream 1, to-be-displayed video stream 2, and to-be-displayed video stream 3 into one target video stream according to an embodiment of the present application;
fig. 6 is a schematic flow chart of another video processing method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a first video processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of another first video processing apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a second video processing apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of another second video processing apparatus according to an embodiment of the present application.
Detailed Description
For ease of understanding, terms referred to in the present application will be first introduced.
Resolution: resolution can be subdivided into display resolution, image resolution, print resolution, and scan resolution, among others.
Display resolution (also called screen resolution) refers to how many pixels a display can show. For a given display resolution, the smaller the display screen, the clearer the image; conversely, for a fixed screen size, the higher the display resolution, the clearer the image. Image resolution may refer to the number of pixels contained per inch. The resolution mentioned in the embodiments of this application may refer to image resolution.
Resolution may be expressed as the number of pixels in each direction. For example, a resolution of 640x480 for image 1 means that image 1 has 640 pixels in the width direction and 480 pixels in the height direction. Alternatively, resolution may be expressed as pixels per inch (ppi) together with the width and height of the image. For example, a resolution of 72 ppi at 8x6 inches for image 2 means that image 2 is 8 inches wide, 6 inches high, and contains 72 pixels per inch. It should be noted that the form in which resolution is expressed is not limited in the embodiments of this application.
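As a quick worked example of converting the second notation into the first (a small illustration, not from the patent):

```python
# The 72 ppi, 8x6 inch form of image 2 implies these pixel dimensions.
ppi = 72
width_in, height_in = 8, 6
print(width_in * ppi, "x", height_in * ppi)  # 576 x 432 pixels
```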
In order to better understand a video processing method disclosed in the embodiments of the present application, a video processing system to which the embodiments of the present application are applicable is first described below.
Referring to fig. 1, fig. 1 is a schematic diagram of an architecture of a video processing system according to an embodiment of the present disclosure. As shown in fig. 1, the video processing system includes: a plurality of video capture devices 101, a second video processing device 102, a service device 103, a first video processing device 104, and a terminal device 105.
Each video capture device 101 may be configured to capture a video stream to be processed, and send the captured video stream to be processed to the second video processing device 102. It should be noted that the to-be-processed video streams collected by different video capture devices 101 are different, and as shown in fig. 1, a to-be-processed video stream 1 collected by one video capture device 101 is different from a to-be-processed video stream 2 collected by another video capture device 101. The difference in the video streams to be processed may refer to the difference in image content included in the video streams to be processed. It is understood that the video stream to be processed received by the second video processing device 102 may be a video stream suitable for network transmission after being encoded by the video capture device 101.
The second video processing device 102 may be configured to obtain at least two resolutions, and perform resolution adjustment on each (decoded) video stream to be processed according to the at least two resolutions. After the resolution of each path of video stream to be processed is adjusted, at least two paths of processed video streams with different resolutions can be obtained, and each path of processed video stream and the video stream to be processed have the same image content.
The number of the processed video streams obtained after the resolution adjustment is performed on each path of the video stream to be processed may be the same as the number of the types of the at least two resolutions, and the resolution of each path of the processed video stream in the obtained processed video streams may be the same as one of the at least two resolutions.
After obtaining the at least two processed video streams corresponding to each to-be-processed video stream, the second video processing device 102 may send the at least two processed video streams corresponding to each to-be-processed video stream to the service device 103.
It should be noted that the video processing method disclosed in the embodiment of the present application may be applied to a live scene or a non-live scene, and the service device 103 in fig. 1 may be a storage device or a distribution device. When applied to a live scene, the service device in fig. 1 may be a distribution device, and the distribution device may be configured to receive at least two processed video streams corresponding to each to-be-processed video stream. When applied to a non-live scene, the service device 103 in fig. 1 may be a storage device, and the storage device may be configured to store the identification information of each to-be-processed video stream in association with at least two processed video streams corresponding to the to-be-processed video stream.
In the embodiment of the present application, the terminal device 105 can simultaneously display multiple video streams in its display device. When the user wishes to view a plurality of sub-screens at the same time, the terminal device 105 may be triggered to generate a video stream composition request by a user operation. The video stream composition request may include video layout parameters of the terminal device 105, and the video layout parameters may be used to indicate identification information of at least two video streams to be displayed that the terminal device 105 needs to display and a resolution of each video stream to be displayed.
After the terminal apparatus 105 generates the video stream composition request, the video stream composition request may be transmitted to the first video processing apparatus 104. After receiving the video stream composition request, the first video processing device 104 may send a video stream acquisition request to the service device 103 to request to acquire at least two video streams to be displayed, which are required to be displayed by the terminal device 105.
When the service device 103 is a distribution device, the distribution device may include a central distribution device and a plurality of edge distribution devices, and the central distribution device may be configured to receive at least two processed video streams corresponding to each to-be-processed video stream sent by the second video processing device 102, and send the at least two processed video streams corresponding to each to-be-processed video stream to each edge distribution device. The edge distribution device may be configured to respond to the video stream acquisition request sent by the first video processing device 104 in close proximity. Specifically, the central distribution device may be an origin server in a Content Delivery Network (CDN), and the edge distribution device may be a cache server in the CDN.
After receiving the at least two video streams to be displayed returned by the service device 103, the first video processing device 104 may combine the at least two video streams to be displayed into one target video stream, and send the target video stream to the terminal device 105, so as to display the target video stream on the terminal device 105. It is understood that the target video stream displayed on the terminal device 105 is composed of at least two video streams to be displayed, and the picture presented by the terminal device 105 when displaying the target video stream is formed by splicing at least two sub-pictures. Therefore, the user can view a plurality of sub-screens in the terminal device 105 at the same time.
The video capture device 101 may be an entity with a video capture function, such as a camera, a video camera, a scanner, or other devices (mobile phones, tablet computers, etc.) with a video capture function. The display device may be a display screen having an image output function. It should be noted that, in this embodiment of the application, when the terminal device displays a target video stream synthesized from at least two to-be-displayed video streams, it may also output an audio corresponding to each to-be-displayed video stream. In this case, the video processing system shown in fig. 1 may further include sound collection devices corresponding to the respective video collection devices. The second video processing device 102 and the first video processing device 104 may each be comprised of a processor, memory, and a network interface. In particular, the second video processing device 102 and the first video processing device 104 may both be servers.
The terminal device 105 may be an entity on the user side, such as a mobile phone, for receiving or transmitting signals. A terminal device may also be referred to as a terminal (terminal), a User Equipment (UE), a Mobile Station (MS), a Mobile Terminal (MT), etc. The terminal device may be a mobile phone (mobile phone), a smart tv, a wearable device, a tablet computer (Pad), a computer with a wireless transceiving function, a Virtual Reality (VR) terminal device, an Augmented Reality (AR) terminal device, a wireless terminal in industrial control (industrial control), a wireless terminal in self-driving (self-driving), a wireless terminal in remote surgery (remote medical supply), a wireless terminal in smart grid (smart grid), a wireless terminal in transportation safety (transportation safety), a wireless terminal in smart city (smart city), a wireless terminal in smart home (smart home), and so on. The embodiment of the present application does not limit the specific technology and the specific device form adopted by the terminal device.
It should be noted that, in fig. 1, the second video processing device 102 and the first video processing device 104 are both taken as independent devices for example only, and do not constitute a limitation to the embodiments of the present application. In one implementation, the second video processing device 102 may be integrated in the video capture device 101 or in the service device 103. The first video processing device 104 may be integrated in the terminal device 105 or in the service device 103. In other words, the steps performed by the second video processing device 102 may be performed instead by the video capture device 101 or the service device 103, and the steps performed by the first video processing device 104 may be performed instead by the terminal device 105 or the service device 103.
It should be further noted that the video processing system shown in fig. 1 includes 2 video capture devices 101 for example only, and does not constitute a limitation on the embodiment of the present application. In other possible implementations, the video processing system may include more than 2 video capture devices.
It should be understood that the communication system described in the embodiment of the present application is for more clearly illustrating the technical solution of the embodiment of the present application, and does not constitute a limitation to the technical solution provided in the embodiment of the present application, and as a person skilled in the art knows that along with the evolution of the system architecture and the appearance of a new service scenario, the technical solution provided in the embodiment of the present application is also applicable to similar technical problems.
The video processing method and the video processing apparatus provided in the present application are described in detail below with reference to the accompanying drawings.
Referring to fig. 2a, fig. 2a is a schematic flowchart of a video processing method according to an embodiment of the present disclosure. The method describes in detail how to adjust the resolution of the video stream to be processed to obtain at least two processed video streams with different resolutions and the same image content. The execution subject of steps S201 to S203 is the second video processing device or a chip in the second video processing device, and the second video processing device is taken as the execution subject of the video processing method as an example to be described below. As shown in fig. 2a, the method may include, but is not limited to, the following steps:
step S201: the second video processing device determines at least two resolutions.
The at least two resolutions determined by the second video processing device may be resolutions supported by the terminal when displaying the video stream, or may be resolutions desired by the user when displaying the video stream in the terminal. The at least two resolutions determined by the second video processing device may be different from each other.
In one implementation, the aforementioned at least two resolutions may be preset. Specifically, the second video processing apparatus may set the aforementioned at least two resolutions in advance according to a user operation.
In one implementation, the second video processing device may receive a first instruction sent by the service device, where the first instruction may indicate the at least two resolutions. In this embodiment of the application, the first video processing device sends a video stream acquisition request to the service device to request the at least two to-be-displayed video streams that the terminal device needs to display, where the video stream acquisition request may include the resolution of each of those to-be-displayed video streams. After receiving the video stream acquisition request from the first video processing device, the service device may send the first instruction to the second video processing device if it determines that the resolutions in this video stream acquisition request differ from the resolutions in the previous video stream acquisition request received from the first video processing device.
In one implementation, the service device may receive video stream acquisition requests sent by a plurality of first video processing devices, and the service device may send the first instruction to the second video processing device when the resolution in most of all the received video stream acquisition requests changes.
In this embodiment of the application, the first video processing device sends the video stream acquisition request to the service device after receiving the video stream composition request from the terminal device. In one implementation, both the video stream composition request and the video stream acquisition request may include an identifier of the terminal device, and the service device may send the first instruction to the second video processing device only if that identifier is a preset device identifier, that is, the identifier of a terminal device preconfigured with the permission to adjust resolutions. Sending the first instruction only in this case prevents the first instruction from being sent to the second video processing device too frequently, which reduces the probability that the second video processing device receives multiple first instructions within a short time and avoids having the second video processing device repeatedly re-determine the resolutions.
In an embodiment of the present application, the second video processing device determines at least two resolutions for resolution adjustment of the video stream to be processed.
Step S202: and the second video processing equipment acquires the video stream to be processed.
The number of the video streams to be processed may be one or multiple. Each of the plurality of to-be-processed video streams may have different image contents from each other. For example, the multiple to-be-processed video streams may be different video streams acquired at different perspectives of the same scene, or the multiple to-be-processed video streams may be different video streams acquired at different scenes at the same time.
In one implementation, the multiple to-be-processed video streams may be sent to a second video processing device by the same device, and each to-be-processed video stream in the multiple to-be-processed video streams may be acquired by a different video acquisition device connected to the device. Different video capture devices connected to the device may be used to capture video streams from different perspectives of the same site, or different video capture devices connected to the device may be used to capture video streams from different sites at the same time. The device can be connected with the video acquisition device in a physical connection mode or a logic connection mode.
In one implementation, the multiple to-be-processed video streams may come from at least two devices. For example, when there are 3 to-be-processed video streams, two of them may come from the same device and the third from another device.
In one implementation, the second video processing device may obtain multiple paths of to-be-processed video streams from a local database, and the to-be-processed video streams stored in the local database may be acquired by a video acquisition device connected to the second video processing device.
Step S203: the second video processing equipment adjusts the resolution of the video stream to be processed to obtain at least two paths of processed video streams; the resolution of each of the at least two processed video streams is the same as one of the at least two resolutions, and each of the at least two processed video streams has the same image content as the to-be-processed video stream.
Specifically, after the second video processing device obtains the video stream to be processed, resolution adjustment may be performed on the video stream to be processed according to at least two resolutions, so as to obtain at least two processed video streams with different resolutions. In this embodiment of the present application, the number of processed video streams obtained after performing resolution adjustment on the video stream to be processed may be the same as the number of types of resolutions determined by the second video processing device. And the resolution of each processed video stream may be the same as one of the aforementioned at least two resolutions. For example, when the two resolutions determined by the second video processing device are 500x500 and 1000x1000, two paths of processed video streams may be obtained after performing resolution adjustment on the video stream to be processed, where the resolution of one path of processed video stream may be 500x500, and the resolution of the other path of processed video stream may be 1000x 1000. It should be noted that the video stream to be processed includes multiple frames of images, the resolution of each frame of image in the same video stream to be processed is the same, and the resolution of each frame of image is the resolution of the video stream to be processed. The meaning of performing resolution adjustment on the video stream to be processed may be: and adjusting the resolution of each frame of image in the video stream to be processed.
In this embodiment of the present application, each of the at least two processed video streams obtained after performing resolution adjustment on the to-be-processed video stream may have the same image content as the to-be-processed video stream. For example, when the video stream to be processed includes 3 frames of images, the processed video stream 1 and the processed video stream 2 are obtained after resolution adjustment is performed on the video stream to be processed, and both the processed video stream 1 and the processed video stream 2 include 3 frames of images, a first frame of image in both the processed video stream 1 and the processed video stream 2 may be the same as an image content of a first frame of image in the video stream to be processed, and similarly, a second frame of image in both the processed video stream 1 and the processed video stream 2 may be the same as an image content of a second frame of image in the video stream to be processed, and a third frame of image in both the processed video stream 1 and the processed video stream 2 may be the same as an image content of a third frame of image in the video stream to be processed.
Here, the image content of the first frame image in processed video stream 1 being the same as that of the first frame image in the to-be-processed video stream means that the picture presented on a display device when displaying the first frame image of processed video stream 1 is the same as the picture presented when displaying the first frame image of the to-be-processed video stream. For example, if the two resolutions determined by the second video processing device are 500x500 and 1000x1000, a scene diagram of adjusting the resolution of a to-be-processed image in the to-be-processed video stream may be as shown in fig. 2b. As can be seen from fig. 2b, by adjusting the resolution of the to-be-processed image, 2 processed images with different resolutions and the same image content (processed image 1 and processed image 2) are obtained, where the resolution of processed image 1 is 1000x1000 and the resolution of processed image 2 is 500x500. The image content of the to-be-processed image, processed image 1, and processed image 2 is the same cat image.
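A minimal sketch of this per-frame adjustment, using Pillow's resize as one possible tool (the patent prescribes no particular scaling method, and the stand-in frame and target resolutions are illustrative):

```python
# Pillow is our choice of tool here; the patent does not prescribe one.
from PIL import Image

TARGET_RESOLUTIONS = [(1000, 1000), (500, 500)]  # the two determined resolutions

def adjust_frame(frame: Image.Image) -> dict:
    # One processed frame per determined resolution, same image content.
    return {res: frame.resize(res) for res in TARGET_RESOLUTIONS}

frame = Image.new("RGB", (2000, 2000))   # stand-in for one to-be-processed frame
processed = adjust_frame(frame)
print({res: img.size for res, img in processed.items()})
# {(1000, 1000): (1000, 1000), (500, 500): (500, 500)}
```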
In practice, different terminals may require different resolutions when displaying processed video streams with the same image content. For example, one terminal may wish to display the processed video stream containing processed image 1 in fig. 2b while another wishes to display the one containing processed image 2; that is, the resolution desired by the first terminal is 1000x1000 and that desired by the other is 500x500. Likewise, in different scenarios, the same terminal may require different resolutions for processed video streams with the same image content. Therefore, adjusting the resolution of the to-be-processed video stream to obtain at least two processed video streams with different resolutions and the same image content helps better satisfy the terminals' resolution requirements for displayed video streams.
In an implementation manner, after obtaining at least two processed video streams corresponding to each to-be-processed video stream, the second video processing device may send the at least two processed video streams corresponding to the to-be-processed video stream to one or more first service devices, where at least one processed video stream corresponding to the to-be-processed video stream exists in each first service device, in other words, all (or part) processed video streams corresponding to the to-be-processed video stream exist in each first service device. The first service device may be a storage device or a distribution device. Specifically, at least two processed video streams corresponding to each to-be-processed video stream may be sent to an origin server in a Content Delivery Network (CDN), and then the origin server distributes the at least two processed video streams corresponding to the to-be-processed video streams to a plurality of cache servers, that is, each cache server may store the at least two processed video streams corresponding to the to-be-processed video streams. In this way, when the user wishes to display at least two video streams simultaneously in the terminal, the user request can be responded by the near cache server, i.e. the required video stream is obtained from the near cache server. In an implementation manner, different video streams may also be obtained from a plurality of cache servers that are close to each other to form at least two video streams to be played.
In one implementation, the second video processing device may perform encapsulation processing on each processed video stream, and then send the encapsulated processed video stream to the first service device. The first service device may decapsulate the received encapsulated processed video stream, or may not decapsulate the received encapsulated processed video stream. In other words, the processed video stream existing in the first service device may be a video stream after decapsulation, or may be an encapsulated video stream.
In the embodiment of the application, resolution adjustment is performed on the video stream to be processed to obtain at least two processed video streams with different resolutions and the same image content, which is beneficial to better adapt to the resolution requirement of the terminal on the displayed video stream.
Referring to fig. 3a, fig. 3a is a schematic flowchart of another video processing method according to an embodiment of the present disclosure. The method describes in detail how at least two frames of images whose acquisition times fall within the same synchronization window are synchronized across the at least two to-be-processed video streams, so that after synchronization they all carry the same playing time. Steps S301 to S304 are executed by the second video processing device or a chip in the second video processing device; the description below takes the second video processing device as the execution subject. As shown in fig. 3a, the method may include, but is not limited to, the following steps:
step S301: the second video processing device determines at least two resolutions.
Step S302: the second video processing device obtains at least two paths of video streams to be processed, each path of video stream to be processed comprises a plurality of frames of images, and each frame of image carries acquisition time.
In this embodiment of the present application, each to-be-processed video stream acquired by the second video processing device may include multiple frames of images, each carrying its own acquisition time. The acquisition time may represent the system time of the video capture device at the moment the image was captured. In practice, the system time of a video capture device may deviate from the actual time, so the acquisition time carried by each frame captured by that device may not be the time at which the image was actually captured. In this case, the second video processing device may determine the deviation time of the video capture device corresponding to each acquired to-be-processed video stream, and adjust the acquisition time of each frame in that stream according to the deviation time, so that the adjusted acquisition time of each frame is the time at which the image was actually captured. Specifically, for a given image, the adjusted acquisition time may be obtained by superimposing the deviation time of the capture device on the acquisition time before adjustment.
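A minimal sketch of this adjustment, assuming frames are represented as dicts with a hypothetical 'capture_ms' field:

```python
def adjust_capture_times(stream, deviation_ms):
    """Superimpose the capture device's deviation time on the acquisition
    time carried by each frame, so the adjusted time is the time the
    image was actually captured."""
    for frame in stream:
        frame['capture_ms'] += deviation_ms
    return stream

# a device whose clock runs 120 ms behind the actual time
stream = [{'capture_ms': 10}, {'capture_ms': 52}]
adjust_capture_times(stream, 120)  # capture times become 130 and 172
```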
In one implementation, the system time of the video capture device may instead be calibrated against the actual time so that the two coincide. Calibration ensures that the acquisition time of each frame in a to-be-processed video stream captured by a calibrated device is the time the image was actually captured, so no adjustment of acquisition times is needed. In addition, in practice the system times of different video capture devices may differ at the same moment, so images captured by different devices at the same moment may carry different acquisition times. In this case, the system time of each video capture device may be calibrated against the actual time, ensuring that the system times of all devices are consistent with the actual time.
It should be noted that the remaining execution processes of step S301 to step S302 can be referred to the specific descriptions of step S201 to step S202 in fig. 2a, and are not described herein again.
Step S303: the second video processing device synchronizes at least two frames of images in the at least two to-be-processed video streams whose acquisition times fall within the same synchronization window; after synchronization, these images all carry the same playing time.
In this embodiment of the application, after obtaining the at least two to-be-processed video streams, the second video processing device may determine, from the acquisition times carried by the images in each stream, whether images in the streams were captured at the same moment. If the acquisition time of a frame in to-be-processed video stream 1 is the same as that of a frame in to-be-processed video stream 2, the two frames were captured at the same moment. However, in practice, while the video capture device transmits a captured stream to the second video processing device over a network or by other means, the acquisition time carried by an image may change. As a result, images carrying the same acquisition time may not actually have been captured at the same moment, while images carrying different acquisition times may actually have been captured at the same moment.
In this case, the second video processing device may determine that at least two frames of images in the acquired streams whose acquisition times fall within the same synchronization window were captured at the same moment. The duration of the synchronization window may be less than the duration of the image acquisition interval, i.e., the interval between two adjacent frames captured by the video capture device, which is the reciprocal of the device's frame rate. For example, at a frame rate of 24 frames/second the image acquisition interval is about 0.0417 seconds, i.e., one frame is captured in each acquisition period of 0.0417 seconds. Because acquisition times do not change greatly during transmission, at least two frames in the at least two to-be-processed video streams whose acquisition times fall within the same synchronization window can be taken as frames that were actually captured at the same moment. Further, the second video processing device may synchronize those frames; after synchronization each frame carries a playing time, and frames whose acquisition times fall within the same synchronization window all carry the same playing time.
In the embodiment of the present application, at least two frames displayed simultaneously on the terminal share the same playing time. Synchronizing the frames whose acquisition times fall within the same synchronization window, that is, the frames captured at the same moment, therefore gives them the same playing time, which facilitates displaying frames captured at the same moment simultaneously on the terminal. On the other hand, making the synchronization window shorter than the image acquisition interval prevents two consecutively captured frames from being synchronized together. In one implementation, the playing time may be a Presentation Time Stamp (PTS) as used in the H.264 video compression format.
For example, when the duration of the synchronization window is 30 milliseconds (ms) and the acquisition times carried by image 1 in to-be-processed video stream 1, image 2 in to-be-processed video stream 2, and image 3 in to-be-processed video stream 3 acquired by the second video processing device are 00:10 (seconds:milliseconds), 00:20, and 00:30 respectively, a scene diagram of synchronizing images 1, 2, and 3 may be as shown in fig. 3b. In fig. 3b, the gray filled polygons represent images in the to-be-processed video streams, and the time axis represents the acquisition times carried by the images as received by the second video processing device (i.e., possibly changed by transmission). The synchronization window is a 30 ms period centered on the acquisition time carried by image 2. As can be seen from fig. 3b, the acquisition times carried by images 1, 2, and 3 all fall within the same synchronization window, so the second video processing device may use the acquisition time carried by image 2 as the playing time of images 1, 2, and 3 (not shown in fig. 3b).
In one implementation, the second video processing device may use the center time of the synchronization window as the playing time of the at least two frames whose acquisition times fall within that window. It should be noted that centering the synchronization window in fig. 3b on the acquisition time carried by image 2, with a duration of 30 ms, is only an example and does not limit the embodiments of the present application. In addition, the to-be-processed video streams in fig. 3b (e.g., to-be-processed video streams 1, 2, and 3) may include other images; taking to-be-processed video stream 1 as an example, images other than image 1 are synchronized in the same way as image 1, only with different synchronization windows. In an implementation manner, the second video processing device may determine the time period occupied by the current synchronization window from the time period occupied by the previous one, and then synchronize the frames whose acquisition times fall within the current window according to its time period and center time. Each synchronization window may have the same duration, and the end time of the previous window may be the start time of the current one.
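The windowing just described might be sketched as follows, assuming adjacent fixed-length windows whose center time serves as the common playing time; the frame representation and the 'capture_ms'/'play_ms' field names are hypothetical:

```python
FRAME_RATE = 24                          # frames/second, from the example
CAPTURE_INTERVAL_MS = 1000 / FRAME_RATE  # ~41.7 ms between adjacent frames
WINDOW_MS = 30                           # kept below the capture interval

def synchronize(frames, start_ms):
    """Give frames captured in the same synchronization window the same
    playing time.

    `frames` holds the frames of all to-be-processed video streams, each
    a dict with a 'capture_ms' field. Windows are adjacent: each window
    starts where the previous one ends.
    """
    # a window shorter than the capture interval avoids merging two
    # consecutively captured frames of the same stream
    assert WINDOW_MS < CAPTURE_INTERVAL_MS
    for f in frames:
        k = (f['capture_ms'] - start_ms) // WINDOW_MS  # window index
        f['play_ms'] = start_ms + k * WINDOW_MS + WINDOW_MS / 2
    return frames
```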
Step S304: for each of the at least two to-be-processed video streams, the second video processing device adjusts the resolution of the to-be-processed video stream to obtain at least two processed video streams; the resolution of each processed video stream is the same as one of the at least two resolutions, and each processed video stream has the same image content as the to-be-processed video stream.
It should be noted that, the execution process of step S304 can refer to the specific description of step S203 in fig. 2a, and is not described herein again.
In the embodiment of the present application, at least two frames of images, whose acquisition times are in the same synchronization window, in the at least two paths of video streams to be processed are processed synchronously, that is, at least two frames of images acquired at the same time are processed synchronously, so that the at least two frames of images in the same synchronization window all carry the same playing time, thereby facilitating the simultaneous display of the at least two frames of images acquired at the same time on the terminal.
Referring to fig. 4a, fig. 4a is a schematic flowchart of another video processing method according to an embodiment of the present disclosure. The method details how a processed video stream is divided into multiple sub-video streams. Steps S401 to S405 are executed by the second video processing device or a chip in the second video processing device; the description below takes the second video processing device as the execution subject. As shown in fig. 4a, the method may include, but is not limited to, the following steps:
step S401: the second video processing device determines at least two resolutions.
Step S402: the second video processing device acquires the video stream to be processed.
Step S403: the second video processing equipment adjusts the resolution of the video stream to be processed to obtain at least two paths of processed video streams; the resolution of each of the at least two processed video streams is the same as one of the at least two resolutions, and each of the at least two processed video streams has the same image content as the to-be-processed video stream.
It should be noted that, the execution processes of step S401 to step S403 can be respectively referred to the specific descriptions of step S201 to step S203 in fig. 2a, and are not described herein again.
Step S404: the second video processing apparatus acquires video division information corresponding to each of the aforementioned at least two resolutions.
In the embodiment of the present application, while acquiring the at least two resolutions, the second video processing device may also acquire the video division information corresponding to each resolution. Specifically, for each of the aforementioned at least two resolutions, the video division information corresponding to that resolution may come from the same source as the resolution; in other words, the second video processing device may obtain a resolution and its corresponding video division information from the same device. The video division information corresponding to a resolution may indicate how many sub-video streams the processed video stream with that resolution is divided into. Further, the second video processing device may send the multiple sub-video streams corresponding to a processed video stream to one or more second service devices, each of which may hold at least one sub-video stream. In this way, when the terminal needs to display the processed video stream, the first video processing device can acquire, in parallel from different second service devices, the different sub-video streams composing it, which helps improve the efficiency of acquiring the processed video stream.
In one implementation, for each of the aforementioned at least two resolutions, the video division information corresponding to that resolution may indicate both how many sub-video streams the processed video stream with that resolution is divided into and at which positions in that processed video stream the division is performed. In one implementation, the video division information corresponding to each of the at least two resolutions may be preset according to a user operation, or the second video processing device may receive a first instruction sent by the service device, the first instruction indicating the at least two resolutions and the video division information corresponding to each of them.
Step S405: for each of the at least two processed video streams, the second video processing device divides the processed video stream into multiple sub-video streams according to the video division information corresponding to the resolution of that processed video stream.
Specifically, for each of the at least two processed video streams, if the video division information corresponding to its resolution indicates that the processed video stream with that resolution is divided into n sub-video streams, the second video processing device may divide the processed video stream evenly into n sub-video streams, or alternatively divide it randomly into n sub-video streams, where n may be greater than 1. It should be noted that a processed video stream includes multiple frames of images, and dividing the processed video stream means dividing each frame of image in it; the division positions are the same for every frame of the same processed video stream.
In one implementation, if the video division information corresponding to the resolution further indicates the positions at which the processed video stream with that resolution is divided, the second video processing device may divide that processed video stream according to the division positions indicated by the video division information.
For example, when one of the resolutions determined by the second video processing device is 1000x1000 and the video division information corresponding to the resolution 1000x1000 indicates division at 1/3 of the height of the processed video stream with that resolution, a scene diagram of dividing the processed video stream may be as shown in fig. 4b: the processed video stream is divided along the dotted line into 2 sub-video streams (sub-video stream 1 and sub-video stream 2). It should be noted that division along the height direction is only an example; in other possible implementations, the video division information may also indicate division along the width direction, or along both the width and height directions.
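A sketch of the division in fig. 4b, assuming frames are numpy arrays; the position labels are illustrative:

```python
import numpy as np

def divide_frame(frame):
    """Divide one frame at 1/3 of its height, as in fig. 4b; the same cut
    is applied to every frame of the processed video stream, and each
    piece records its position in the full frame."""
    cut = frame.shape[0] // 3
    return [
        {'pos': 'upper', 'pixels': frame[:cut]},   # sub-video stream 1
        {'pos': 'lower', 'pixels': frame[cut:]},   # sub-video stream 2
    ]

frame = np.zeros((1000, 1000, 3), dtype=np.uint8)  # one 1000x1000 frame
sub1, sub2 = divide_frame(frame)                   # heights 333 and 667
```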
In an implementation manner, for each of the at least two processed video streams, each of the sub-video streams obtained by dividing it may carry position information describing where that sub-video stream lies in the processed video stream, so that the original processed video stream can be reassembled by splicing according to the position information carried by each sub-video stream. If the division is along the height direction, the position information may indicate that the sub-video stream is located at the upper side (middle or lower side) of the processed video stream; if along the width direction, at the left side (middle or right side); if along both the height and width directions, the position information may indicate the coordinates of the sub-video stream in the coordinate system corresponding to the processed video stream.
In the embodiment of the application, dividing a processed video stream into multiple sub-video streams means that one complete processed video stream is composed of multiple sub-video streams, and different sub-video streams composing the same processed video stream can be sent to multiple second service devices. In this way, when the terminal needs to display the processed video stream, the different sub-video streams composing it can be acquired in parallel from different second service devices, which helps improve the efficiency of acquiring the processed video stream.
Referring to fig. 5a, fig. 5a is a schematic flowchart of another video processing method according to an embodiment of the present disclosure. The method describes in detail how at least two to-be-displayed video streams required by a terminal are composed into one target video stream. Steps S501 to S503 are executed by the first video processing device or a chip in the first video processing device; the description below takes the first video processing device as the execution subject. The method may include, but is not limited to, the following steps:
step S501: the first video processing device obtains video layout parameters of a terminal, wherein the video layout parameters are used for indicating identification information of at least two paths of video streams to be displayed required by the terminal and the resolution of each path of video stream to be displayed.
In this embodiment, when the terminal needs to display multiple video streams, it may send a video stream composition request to the first video processing device, and the request may include the terminal's video layout parameters. Accordingly, the first video processing device receives the video stream composition request sent by the terminal. Having the terminal send the video layout parameters allows the identification information or resolution of the to-be-displayed video streams to change; the first video processing device obtains the streams the terminal currently needs to display according to the parameters sent by the terminal, which better meets the needs of terminal users.
The same identification information may correspond to one or more video streams, but the resolutions of the video streams corresponding to the same identification information may differ from one another. Therefore, the at least two to-be-displayed video streams required by the terminal can be determined from the identification information of the streams together with the resolution of each stream.
In one implementation, the video stream composition request may include a Uniform Resource Locator (URL) carrying the video layout parameters of the terminal. For example, the URL is http://myexample.com/mystream?main=1&v1=2&v2=3&v3=4, where http is the transport protocol, myexample.com is the domain name of the device holding the to-be-displayed video streams required by the user, and /mystream is the path in that device where the streams are stored. The 1, 2, 3, and 4 in main=1&v1=2&v2=3&v3=4 may be identification information of to-be-displayed video streams; main may indicate a higher resolution (e.g., 1000x1000) and v a lower resolution (e.g., 500x500). main=1 may indicate that a to-be-displayed video stream with the resolution indicated by main needs to be acquired from the video stream indicated by identification information 1, and v1=2 that a to-be-displayed video stream with the resolution indicated by v1 needs to be acquired from the video stream indicated by identification information 2; v2=3 and v3=4 are understood similarly and are not repeated here. In one implementation, the resolutions indicated by v1, v2, and v3 may be the same or different, which is not limited by the embodiments of the present application.
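Assuming the layout parameters are carried in the URL's query string as in the example above, they might be parsed as follows (the resolution table reflects the example's assumption about what main and v denote):

```python
from urllib.parse import urlparse, parse_qs

RESOLUTIONS = {'main': (1000, 1000), 'v': (500, 500)}  # from the example

def parse_layout(url):
    """Return (identification information, resolution) for each requested
    to-be-displayed video stream."""
    query = parse_qs(urlparse(url).query)
    layout = []
    for key, values in query.items():
        kind = key.rstrip('0123456789')  # 'v1', 'v2', 'v3' -> 'v'
        layout.append((values[0], RESOLUTIONS[kind]))
    return layout

url = "http://myexample.com/mystream?main=1&v1=2&v2=3&v3=4"
# [('1', (1000, 1000)), ('2', (500, 500)), ('3', (500, 500)), ('4', (500, 500))]
print(parse_layout(url))
```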
In one implementation manner, the at least two to-be-displayed video streams may be displayed in different display areas of the terminal's display device, with one display area used to display one to-be-displayed video stream. The video layout parameters may indicate the identification information and resolution of the to-be-displayed video stream for each display area of the terminal. In one implementation, if across different scenes the identification information of the stream to be displayed in a given display area of the same terminal changes but its resolution does not, the video stream composition request sent by the terminal to the first video processing device may include only the identification information of the streams the user wishes to display in each display area of the terminal's display device. After receiving the request, the first video processing device may obtain the resolution corresponding to each display area of the terminal from a local database, and thereby determine the identification information and resolution of the to-be-displayed video stream for each display area.
For example, suppose the display area of the terminal comprises a left area and a right area, and the user wishes to display a 1000x1000 to-be-displayed video stream in the left area and a 500x500 stream in the right area. The first video processing device may acquire from the terminal and store in advance the resolutions corresponding to the terminal's left and right areas. When the terminal needs to display at least two video streams, it may send identification information 1 and identification information 2 to the first video processing device, where the stream corresponding to identification information 1 is to be displayed in the left area and the stream corresponding to identification information 2 in the right area. After receiving the two pieces of identification information, the first video processing device combines them with the pre-stored resolutions for the left and right areas and determines that the stream indicated by identification information 1 is to be displayed at 1000x1000 and the stream indicated by identification information 2 at 500x500. In this way, the amount of data the terminal transmits to the first video processing device can be reduced.
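A sketch of this reduced request, with the per-area resolutions assumed to be held in a local table:

```python
# resolutions stored in advance for each display area of the terminal
AREA_RESOLUTION = {'left': (1000, 1000), 'right': (500, 500)}

def resolve_request(area_to_id):
    """The terminal sends only identification information per display
    area; the resolution is looked up locally rather than transmitted."""
    return [(stream_id, AREA_RESOLUTION[area])
            for area, stream_id in area_to_id.items()]

# [('1', (1000, 1000)), ('2', (500, 500))]
print(resolve_request({'left': '1', 'right': '2'}))
```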
In an implementation manner, in different scenes the resolutions of the streams to be displayed in the same display area of the same terminal may differ; in that case, the video stream composition request sent by the terminal to the first video processing device may include both the identification information and the resolution of the to-be-displayed video stream for each display area of the terminal.
Step S502: the first video processing device acquires the at least two to-be-displayed video streams according to the video layout parameters.
In this embodiment of the present application, the first video processing device may send a video stream acquisition request to the first service device, the request including the identification information of the at least two to-be-displayed video streams and the resolution of each, and then receive the at least two to-be-displayed video streams returned by the first service device. There may be one or more first service devices; when there are several, the different to-be-displayed video streams acquired by the first video processing device may come from different first service devices. In this way, different to-be-displayed video streams can be acquired in parallel from different first service devices, which helps improve the efficiency of acquiring the at least two streams that need to be displayed. Each first service device may hold at least one processed video stream; after receiving the video stream acquisition request, it takes the processed video stream whose identification information and resolution match those in the request as a to-be-displayed video stream and sends it to the first video processing device. That is, the to-be-displayed video stream acquired by the first video processing device may be the processed video stream of the embodiments shown in fig. 2a to 4a.
In an implementation manner, the first video processing device may obtain the at least two to-be-displayed video streams according to the video layout parameters as follows: for the identification information of each to-be-displayed video stream, acquire the multiple processed video streams corresponding to that identification information, whose resolutions differ from one another and each of which has the same image content as the to-be-displayed video stream; then take the processed video stream whose resolution equals that of the to-be-displayed video stream as the to-be-displayed video stream. In this embodiment, the same identification information may correspond to one or more processed video streams; in other words, multiple processed video streams may share the same identification information. Specifically, different processed video streams with the same image content may have the same identification information. A processed video stream includes multiple frames of images, and different processed video streams having the same image content means that corresponding images in each processed video stream have the same content. For example, in fig. 2b the to-be-processed image, processed image 1, and processed image 2 have the same image content, so the to-be-processed video stream to which the to-be-processed image belongs, processed video stream 1 to which processed image 1 belongs, and processed video stream 2 to which processed image 2 belongs may share the same identification information. It should be noted that multiple processed video streams with the same identification information may be obtained by the second video processing device adjusting the resolution of the same to-be-processed video stream (see the detailed description of step S203 in fig. 2a).
For each of the at least two to-be-displayed video streams, after the first video processing device obtains the multiple processed video streams corresponding to its identification information, since the resolutions of those streams differ from one another, the first video processing device may take the one whose resolution equals that of the to-be-displayed video stream as the to-be-displayed video stream. The multiple processed video streams corresponding to the identification information may be stored in the first video processing device's local database, in which case they are obtained from that database. Alternatively, the first video processing device may send a processed video stream acquisition request to the service device, the request including the identification information of the at least two to-be-displayed video streams, and receive the multiple processed video streams corresponding to the identification information of each to-be-displayed video stream returned by the service device. There may be one or more service devices.
In an implementation manner, the first video processing device may also obtain the at least two to-be-displayed video streams according to the video layout parameters as follows: for the identification information of each to-be-displayed video stream, send an index acquisition request carrying the identification information to the third service device, and receive the indexes of the multiple processed video streams corresponding to that identification information together with the resolution of each processed video stream; determine a target index from those indexes such that the resolution of the processed video stream corresponding to the target index equals that of the to-be-displayed video stream; then send a stream acquisition request carrying the target index to the third service device, receive the processed video stream corresponding to the target index, and take it as the to-be-displayed video stream. Compared with acquiring all the processed video streams corresponding to the identification information and then selecting among them, determining the to-be-displayed video stream from the indexes and the resolution corresponding to each index reduces the amount of data transmitted between the first video processing device and the third service device. The first service device, the second service device, and the third service device may all be the service device 103 in fig. 1.
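The index-based exchange might look like the following sketch, where `service` stands in for the third service device and both of its methods (`get_indexes`, `get_stream`) are hypothetical stand-ins for the two requests described above:

```python
def fetch_by_index(service, ident, wanted_resolution):
    """Fetch one to-be-displayed video stream via its index rather than
    transferring every processed video stream."""
    # index acquisition request: the indexes of the processed video
    # streams for this identification information, with resolutions
    index_list = service.get_indexes(ident)  # [(index, (w, h)), ...]
    # the target index: its processed video stream has the resolution
    # of the to-be-displayed video stream
    target = next(i for i, res in index_list if res == wanted_resolution)
    # stream acquisition request carrying only the target index
    return service.get_stream(target)
```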
Step S503: the first video processing device synthesizes the at least two to-be-displayed video streams into a target video stream, and displays the target video stream on the terminal.
Specifically, after obtaining the at least two to-be-displayed video streams, the first video processing device may combine them into one target video stream and display the target video stream on the terminal. Because the target video stream is synthesized from the at least two to-be-displayed video streams, the picture presented on the terminal when it is displayed is a splice of at least two sub-pictures, so the user can watch several sub-pictures on the terminal at the same time. It should be noted that the first video processing device and the terminal may be integrated in the same physical entity or in separate physical entities. When they are separate, the first video processing device sends the target video stream to the terminal for display after synthesizing it.
In this embodiment of the application, the at least two to-be-displayed video streams that the terminal needs to display may change; that is, the identification information indicated by the video layout parameters may change, for example because the number of streams to display increases or decreases, or because the identification information of some or all of the streams changes. The first video processing device may then obtain the streams indicated by the changed (or newly added) identification information and synthesize them together with the streams indicated by the unchanged identification information into a target video stream. In this process, the first video processing device need not re-acquire the streams indicated by the unchanged (or not newly added) identification information, which helps improve the utilization of the to-be-displayed video streams. For example, when the two streams the terminal needs change from to-be-displayed video streams 1 and 2 to to-be-displayed video streams 1 and 3, stream 1 was already acquired before streams 1 and 2 were synthesized, so the first video processing device only needs to acquire stream 3 in order to synthesize streams 1 and 3. To-be-displayed video stream 1 is reused when streams 1 and 3 are synthesized into the target video stream, which improves its utilization.
In one implementation, after obtaining the target video stream, the first video processing device may encapsulate it and send the encapsulated target video stream to the terminal. Because the target video stream is a single video stream, the terminal performs one decapsulation operation after receiving it and needs only one video player to display multiple sub-pictures.
In one implementation, each to-be-displayed video stream may include multiple frames of images, each carrying a playing time (see the description of step S303 in fig. 3a). The first video processing device may then synthesize the at least two to-be-displayed video streams into one target video stream by composing the images with the same playing time into one frame of target image; all the target images form the target video stream. Since images with the same playing time are images captured at the same moment, this ensures that the frames composing a target image were captured together, i.e., that the sub-pictures displayed simultaneously on the terminal show the same moment.
In one implementation, the video layout parameters may further indicate the display position of each of the at least two to-be-displayed video streams when displayed on the terminal. The display position may be the position of the to-be-displayed video stream in the terminal's display device, for example the coordinate area it occupies. The first video processing device may then synthesize the at least two to-be-displayed video streams into one target video stream according to each stream's display position. For example, when the video layout parameters indicate that the terminal needs to display to-be-displayed video streams 1, 2, and 3, with display positions at the left side, upper right corner, and lower right corner respectively, a scene diagram of synthesizing the three streams into one target video stream may be as shown in fig. 5b.
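A sketch of the composition in fig. 5b, assuming the frames with the same playing time have already been matched and that the layout parameters map each stream to a coordinate area; the canvas size and coordinates below are illustrative (stream 1 at 1000x1000, streams 2 and 3 at 500x500):

```python
import numpy as np

CANVAS_H, CANVAS_W = 1000, 1500
# (top, left) of each to-be-displayed video stream: left side,
# upper right corner, lower right corner, as in the fig. 5b example
POSITION = {'1': (0, 0), '2': (0, 1000), '3': (500, 1000)}

def compose_target_frame(frames_by_id):
    """`frames_by_id` maps identification information to one frame
    (numpy array); all frames carry the same playing time. Each frame
    is pasted at its display position to form one target image."""
    canvas = np.zeros((CANVAS_H, CANVAS_W, 3), dtype=np.uint8)
    for ident, frame in frames_by_id.items():
        top, left = POSITION[ident]
        h, w = frame.shape[:2]
        canvas[top:top + h, left:left + w] = frame
    return canvas
```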
In one implementation manner, the at least two to-be-displayed video streams may include a first to-be-displayed video stream and a second to-be-displayed video stream; if the resolution of the first is higher than that of the second, the display area the first occupies on the terminal may be larger than that occupied by the second. It is understood that users pay more attention to the video stream occupying the larger display area than to the one occupying the smaller area. In this way, the stream occupying the larger display area has the higher resolution, i.e., is clearer, which helps improve user experience.
By implementing this embodiment of the application, a target video stream synthesized from at least two to-be-displayed video streams can be displayed on the terminal, and the picture presented when it is displayed is a splice of at least two sub-pictures. Moreover, because the target video stream is a single video stream, the terminal performs one decapsulation operation after receiving the encapsulated target video stream and needs only one video player, which achieves the purpose of displaying multiple sub-pictures.
Referring to fig. 6, fig. 6 is a schematic flowchart illustrating another video processing method according to an embodiment of the present disclosure. The method describes in detail how the multiple sub-video streams corresponding to the identification information and resolution of a to-be-displayed video stream are acquired, and how those sub-video streams are synthesized into the to-be-displayed video stream. Steps S601 to S605 are executed by the first video processing device or a chip in the first video processing device; the description below takes the first video processing device as the execution subject. The method may include, but is not limited to, the following steps:
step S601: the first video processing device obtains video layout parameters of a terminal, wherein the video layout parameters are used for indicating identification information of at least two paths of video streams to be displayed required by the terminal and the resolution of each path of video stream to be displayed.
It should be noted that, the execution process of step S601 may refer to the specific description of step S501 in fig. 5a, and is not described herein again.
Step S602: for the identification information of each of the at least two to-be-displayed video streams, the first video processing device sends a sub-video stream acquisition request to the second service device, the request including the identification information and the resolution of the to-be-displayed video stream.
In this embodiment of the application, the first video processing device may obtain, according to the video layout parameters, the at least two to-be-displayed video streams the terminal needs to display, each of which may be composed of multiple sub-video streams. The sub-video stream acquisition request sent by the first video processing device to the second service device may be used to request the sub-video streams of those to-be-displayed video streams. There may be one or more second service devices; in other words, different sub-video streams composing the same to-be-displayed video stream may come from the same or different second service devices, and each second service device may store some or all of the sub-video streams composing the stream. In this way, the first video processing device can acquire different sub-video streams in parallel from different second service devices to form a complete to-be-displayed video stream, which helps improve the efficiency of acquiring it.
It should be noted that the resolution in the sub-video stream acquisition request refers to the resolution of the video stream to be displayed, which is composed of multiple sub-video streams that need to be acquired by the first video processing apparatus. For example, when the identification information 1 corresponds to the video stream to be displayed 1 and the video stream to be displayed 2, the resolutions of the video stream to be displayed 1 and the video stream to be displayed 2 are 1000x1000 and 500x500, respectively, the video stream to be displayed 1 is composed of the sub-video stream 1, the sub-video stream 2 and the sub-video stream 3, and the video stream to be displayed 2 is composed of the sub-video stream 4 and the sub-video stream 5, if the sub-video stream acquisition request sent by the first video processing apparatus includes the identification information 1 and the resolution 1000x1000, the first video processing apparatus may receive the sub-video stream 1, the sub-video stream 2 and the sub-video stream 3.
Step S603: the first video processing device receives the multiple sub-video streams, corresponding to the identification information and resolution of the to-be-displayed video stream, returned by the second service device.
Specifically, the first video processing device may receive multiple sub-video streams corresponding to the identification information and the resolution of the video stream to be displayed, which are returned by one or more second service devices.
Step S604: the first video processing device synthesizes the multiple sub-video streams into the to-be-displayed video stream.
Specifically, after receiving the multiple sub-video streams corresponding to the identification information and resolution of the to-be-displayed video stream, the first video processing device may synthesize them into the to-be-displayed video stream. It should be noted that each sub-video stream composing the same to-be-displayed video stream may include multiple frames of images, and every such sub-video stream includes the same number of frames. The synthesis may proceed as follows: according to the frame order within each sub-video stream, the images occupying the same frame position in each sub-video stream are spliced into one to-be-displayed image, and all the to-be-displayed images form the to-be-displayed video stream.
In one implementation, each frame of image in each sub-video stream may carry a playing time (see the description of step S303 in fig. 3a for a description of the playing time). The specific implementation manner of the first video processing device synthesizing the multiple paths of sub-video streams into the video stream to be displayed may be: and synthesizing the images with the same playing time in the multi-path sub-video stream into a frame of image to be displayed, wherein all the images to be displayed form the video stream to be displayed.
The multiple sub-video streams corresponding to a to-be-displayed video stream may be obtained by the second video processing device dividing the processed video stream that serves as that to-be-displayed video stream. In an implementation manner, each sub-video stream may carry position information describing where it lies in the corresponding processed video stream, and the first video processing device may synthesize the acquired sub-video streams into the to-be-displayed video stream according to that position information, which makes the synthesis accurate and fast. The position information may indicate that the sub-video stream is located at the upper side (middle, lower side, left side, or right side) of the processed video stream, or may indicate the coordinates of the sub-video stream in the coordinate system corresponding to the processed video stream.
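A sketch of this position-based splicing, matching the height-direction division sketched earlier (the 'pos' and 'pixels' field names are the same assumptions):

```python
import numpy as np

ORDER = {'upper': 0, 'middle': 1, 'lower': 2}

def stitch(subframes):
    """Rebuild one frame of the to-be-displayed video stream from the
    sub-video streams' frames using their carried position information."""
    ordered = sorted(subframes, key=lambda s: ORDER[s['pos']])
    return np.vstack([s['pixels'] for s in ordered])
```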
Step S605: the first video processing device synthesizes the at least two to-be-displayed video streams into a target video stream, and displays the target video stream on the terminal.
It should be noted that, the execution process of step S605 may refer to the specific description of step S503 in fig. 5a, and is not described herein again.
In the embodiment of the application, when a to-be-displayed video stream that the terminal needs to display is composed of multiple sub-video streams, synthesizing those sub-video streams yields the complete to-be-displayed video stream. The synthesized at least two to-be-displayed video streams are then combined into the target video stream the user wishes to display on the terminal; when it is displayed, the picture presented on the terminal is a splice of at least two sub-pictures, so the user can watch at least two sub-pictures on the terminal simultaneously.
The method disclosed in the embodiments of the present application is explained in detail above, and the apparatus of the embodiments of the present application will be provided below.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a first video processing apparatus according to an embodiment of the present application, where the apparatus may be a first video processing device or an apparatus (e.g., a chip) having a function of the first video processing device, and the first video processing apparatus 70 is configured to perform the steps performed by the first video processing device in the method embodiments corresponding to fig. 5a to fig. 6, where the first video processing apparatus 70 includes:
an obtaining module 701, configured to obtain a video layout parameter of a terminal, where the video layout parameter is used to indicate identification information of at least two to-be-displayed video streams that the terminal needs to display and a resolution of each to-be-displayed video stream;
the obtaining module 701 is further configured to obtain the at least two video streams to be displayed according to the video layout parameter;
a processing module 702, configured to combine the at least two video streams to be displayed into a target video stream, and display the target video stream on the terminal.
In an implementation manner, the obtaining module 701 is configured to, when obtaining the video layout parameters of the terminal, specifically, receive a video stream composition request sent by the terminal, where the video stream composition request includes the video layout parameters of the terminal.
In an implementation manner, the obtaining module 701 is configured to, when obtaining the at least two to-be-displayed video streams according to the video layout parameter, specifically, send a video stream obtaining request to the first service device, where the video stream obtaining request includes identification information of the at least two to-be-displayed video streams and a resolution of each to-be-displayed video stream; and receiving the at least two video streams to be displayed returned by the first service equipment.
In an implementation manner, the obtaining module 701 is configured to, when obtaining the at least two paths of video streams to be displayed according to the video layout parameter, specifically, obtain, for identification information of each path of video stream to be displayed in the at least two paths of video streams to be displayed, a plurality of paths of processed video streams corresponding to the identification information of the video stream to be displayed, where resolutions of the plurality of paths of processed video streams are different from each other, and each path of processed video stream in the plurality of paths of processed video streams and the video stream to be displayed have the same image content; and taking the processed video stream with the resolution same as that of the video stream to be displayed in the multi-path processed video stream as the video stream to be displayed.
In an implementation manner, the obtaining module 701 is configured to, when obtaining the at least two paths of video streams to be displayed according to the video layout parameter, specifically, send a sub-video stream obtaining request to the second service device for identification information of each path of video stream to be displayed in the at least two paths of video streams to be displayed, where the sub-video stream obtaining request includes the identification information and the resolution of the video stream to be displayed; receiving a plurality of paths of sub-video streams which are returned by the second service equipment and correspond to the identification information and the resolution of the video stream to be displayed; and synthesizing the plurality of paths of sub-video streams into the video stream to be displayed.
In one implementation, each path of video stream to be displayed includes multiple frames of images, and each frame of image carries a playing time; the processing module 702 is configured to, when the at least two to-be-displayed video streams are combined into one target video stream, specifically, combine images with the same playing time in the at least two to-be-displayed video streams into one frame of target image, where all the target images form one target video stream.
It should be noted that details that are not mentioned in the embodiment corresponding to fig. 7 and specific implementation manners of the steps executed by each module may refer to the embodiments shown in fig. 5a to fig. 6 and the foregoing details, and are not described again here.
In one implementation, the relevant functions implemented by the various modules in FIG. 7 may be implemented in connection with a processor and a communications interface. Referring to fig. 8, fig. 8 is a schematic structural diagram of another first video processing apparatus provided in this embodiment of the present application, where the apparatus may be a first video processing device or an apparatus (e.g., a chip) having functions of the first video processing device, the first video processing apparatus 80 may include a communication interface 801, a processor 802, and a memory 803, and the communication interface 801, the processor 802, and the memory 803 may be connected to each other through one or more communication buses, or may be connected in other manners. The related functions implemented by the obtaining module 701 and the processing module 702 shown in fig. 7 may be implemented by the same processor 802, or may be implemented by a plurality of different processors 802.
Communication interface 801 may be used to transmit data and/or signaling and receive data and/or signaling. In this embodiment, the communication interface 801 may be used to receive a video stream composition request sent by a terminal. The communication interface 801 may be a transceiver.
The processor 802 is configured to perform the respective functions of the first video processing device in the methods described in fig. 5a to fig. 6. The processor 802 may include one or more processors, for example, the processor 802 may be one or more Central Processing Units (CPUs), Network Processors (NPs), hardware chips, or any combination thereof. In the case where the processor 802 is a CPU, the CPU may be a single-core CPU or a multi-core CPU.
The memory 803 is used to store program code and the like. The memory 803 may include a volatile memory (volatile memory), such as a Random Access Memory (RAM); the memory 803 may also include a non-volatile memory (non-volatile memory), such as a read-only memory (ROM), a flash memory (flash memory), a Hard Disk Drive (HDD), or a solid-state drive (SSD); the memory 803 may also include a combination of the foregoing types of memory. It should be noted that the memory 803 included in the first video processing apparatus 80 is merely an example and does not constitute a limitation on this embodiment of the present application; in an implementation manner, the memory 803 may be replaced with another storage medium having a storage function.
The processor 802 may call the program code stored in the memory 803 to cause the first video processing apparatus 80 to perform the following operations (an illustrative sketch of the video layout parameters follows this list):
acquiring video layout parameters of a terminal, wherein the video layout parameters are used for indicating identification information of at least two paths of video streams to be displayed required by the terminal and the resolution of each path of video stream to be displayed;
acquiring the at least two paths of video streams to be displayed according to the video layout parameters;
and synthesizing the at least two video streams to be displayed into a target video stream, and displaying the target video stream on the terminal.
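Purely as an illustrative data shape (the field names are assumptions), the video layout parameters indicated above could be represented as:

    from dataclasses import dataclass, field
    from typing import List

    # Sketch: one entry per video stream to be displayed -- its identification
    # information plus the resolution at which it should be displayed.
    @dataclass
    class StreamLayoutEntry:
        stream_id: str  # identification information of the video stream to be displayed
        width: int      # requested resolution, in pixels
        height: int

    @dataclass
    class VideoLayoutParameters:
        entries: List[StreamLayoutEntry] = field(default_factory=list)  # at least two entries

Because the resolution is carried per stream, the first video processing device can obtain each source at exactly the size the layout requires before synthesizing the single target video stream that the terminal displays.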
In one implementation, when the first video processing apparatus 80 acquires the video layout parameters of the terminal, the processor 802 calls the program code stored in the memory 803 to cause the first video processing apparatus 80 to perform the following operation: receiving a video stream composition request sent by the terminal, where the video stream composition request includes the video layout parameters of the terminal.
In one implementation, when the processor 802 calls the program code stored in the memory 803 to enable the first video processing apparatus 80 to obtain the at least two video streams to be displayed according to the video layout parameters, the first video processing apparatus 80 may specifically be enabled to perform the following operations: sending a video stream acquisition request to a first service device, wherein the video stream acquisition request comprises identification information of the at least two paths of video streams to be displayed and the resolution of each path of video stream to be displayed; and receiving the at least two video streams to be displayed returned by the first service equipment.
In one implementation, when the processor 802 calls the program code stored in the memory 803 to enable the first video processing apparatus 80 to obtain the at least two video streams to be displayed according to the video layout parameters, the first video processing apparatus 80 may specifically be enabled to perform the following operations: aiming at the identification information of each path of video stream to be displayed in the at least two paths of video streams to be displayed, acquiring a plurality of paths of processed video streams corresponding to the identification information of the video stream to be displayed, wherein the resolutions of the plurality of paths of processed video streams are different from each other, and each path of processed video stream in the plurality of paths of processed video streams has the same image content as the video stream to be displayed; and taking the processed video stream with the resolution same as that of the video stream to be displayed in the multi-path processed video stream as the video stream to be displayed.
In one implementation, when the processor 802 calls the program code stored in the memory 803 to enable the first video processing apparatus 80 to obtain the at least two video streams to be displayed according to the video layout parameters, the first video processing apparatus 80 may specifically be enabled to perform the following operations: sending a sub-video stream acquisition request to second service equipment aiming at the identification information of each path of video stream to be displayed in the at least two paths of video streams to be displayed, wherein the sub-video stream acquisition request comprises the identification information and the resolution of the video stream to be displayed; receiving a plurality of paths of sub-video streams which are returned by the second service equipment and correspond to the identification information and the resolution of the video stream to be displayed; and synthesizing the plurality of paths of sub-video streams into the video stream to be displayed.
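A sketch of the final reassembly step of this procedure (the tile positioning convention is an assumption made for illustration):

    import numpy as np

    # Sketch: stitch the sub-video streams returned by the second service device
    # back into one full frame. Each tile is (pixels, (top, left)), where pixels
    # is an H x W x 3 array -- a hypothetical convention, not a prescribed format.
    def synthesize_frame_from_tiles(tiles, full_height, full_width):
        frame = np.zeros((full_height, full_width, 3), dtype=np.uint8)
        for pixels, (top, left) in tiles:
            h, w = pixels.shape[:2]
            frame[top:top + h, left:left + w] = pixels
        return frame  # applied per playing time, this yields the video stream to be displayed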
In one implementation, each path of video stream to be displayed includes multiple frames of images, and each frame of image carries a playing time; when the first video processing apparatus 80 synthesizes the at least two paths of video streams to be displayed into one target video stream, the processor 802 calls the program code stored in the memory 803 to cause the first video processing apparatus 80 to perform the following operation: synthesizing the images with the same playing time in the at least two paths of video streams to be displayed into one frame of target image, where all the target images form one path of target video stream.
Further, the processor 802 may also call the program code stored in the memory 803 to cause the first video processing apparatus 80 to perform the operations performed by the first video processing device in the embodiments shown in fig. 5a to fig. 6; for details, refer to the description in the method embodiments, which is not repeated here.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a second video processing apparatus according to an embodiment of the present application, where the apparatus may be a second video processing device or an apparatus (e.g., a chip) having a function of the second video processing device, and the second video processing apparatus 90 is configured to perform the steps performed by the second video processing device in the method embodiments corresponding to fig. 2a to fig. 4a, and the second video processing apparatus 90 may include:
a determining module 901, configured to determine at least two resolutions;
an obtaining module 902, configured to obtain a video stream to be processed;
a resolution adjusting module 903, configured to perform resolution adjustment on the to-be-processed video stream to obtain at least two processed video streams; the resolution of each of the at least two processed video streams is the same as one of the at least two resolutions, and each of the at least two processed video streams has the same image content as the to-be-processed video stream.
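For illustration, the resolution adjustment can be sketched as follows; OpenCV's resize is used here only as one possible scaler, and the frame representation is an assumption:

    import cv2  # OpenCV; any scaler with equivalent behavior would do

    # Sketch: scale every frame of the video stream to be processed to each of
    # the determined resolutions, yielding one processed video stream per
    # resolution with unchanged image content.
    def adjust_resolution(frames, resolutions):
        processed = {resolution: [] for resolution in resolutions}
        for frame in frames:  # frame: H x W x 3 array
            for width, height in resolutions:
                processed[(width, height)].append(cv2.resize(frame, (width, height)))
        return processed  # at least two processed video streams, keyed by resolution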
In one implementation, the aforementioned at least two resolutions are preset.
In an implementation manner, the determining module 901 is configured to, when determining the at least two resolutions, specifically receive a first instruction sent by a service device, where the first instruction is used to indicate the at least two resolutions.
In one implementation, the number of the video streams to be processed is at least two, each video stream to be processed includes multiple frames of images, and each frame of image carries an acquisition time; the second video processing apparatus 90 may further include a processing module 904, configured to perform synchronization processing on at least two frames of images, in the at least two paths of video streams to be processed, whose acquisition times are in the same synchronization window, where after the synchronization processing, the at least two frames of images whose acquisition times are in the same synchronization window all carry the same playing time.
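A minimal sketch of this synchronization (fixed-length windows and stamping frames with the window start time are assumptions; this application fixes neither choice):

    # Sketch: frames from different video streams to be processed whose
    # acquisition times fall into the same synchronization window are stamped
    # with one common playing time, here the start of that window.
    def synchronize_streams(streams, window_ms):
        for stream in streams:  # streams: list of lists of frame dicts
            for frame in stream:
                window_index = frame["acquisition_ms"] // window_ms
                frame["playing_ms"] = window_index * window_ms  # same window, same playing time
        return streams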
In one implementation, the second video processing apparatus 90 may further include a dividing module 905; the obtaining module 902 may be further configured to obtain video division information corresponding to each of the at least two resolutions; and the dividing module 905 may be configured to, for each path of processed video stream in the at least two paths of processed video streams, divide the processed video stream into multiple paths of sub-video streams according to the video division information corresponding to the resolution of the processed video stream.
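As an illustration of the division step, a uniform rows x cols grid stands in below for the video division information; this application does not restrict the division scheme:

    # Sketch: divide one frame of a processed video stream into a grid of
    # sub-frames; applied to every frame, each grid position yields one
    # path of sub-video stream.
    def divide_frame(frame, rows, cols):
        height, width = frame.shape[:2]  # frame: H x W x 3 array
        tile_h, tile_w = height // rows, width // cols
        return {
            (r, c): frame[r * tile_h:(r + 1) * tile_h, c * tile_w:(c + 1) * tile_w]
            for r in range(rows)
            for c in range(cols)
        }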
It should be noted that, for details not mentioned in the embodiment corresponding to fig. 9 and for the specific implementations of the steps performed by each module, reference may be made to the embodiments shown in fig. 2a to fig. 4a and the foregoing description; details are not repeated here.
In one implementation, the related functions implemented by the modules in fig. 9 may be implemented by a processor together with a communication interface. Referring to fig. 10, fig. 10 is a schematic structural diagram of another second video processing apparatus provided in an embodiment of this application. The apparatus may be a second video processing device or an apparatus (e.g., a chip) having the functions of the second video processing device. The second video processing apparatus 100 may include a communication interface 1001, a processor 1002, and a memory 1003, which may be connected to each other through one or more communication buses, or in other manners. The related functions implemented by the determining module 901, the obtaining module 902, the resolution adjusting module 903, the processing module 904, and the dividing module 905 shown in fig. 9 may be implemented by one processor 1002 or by a plurality of different processors 1002.
Communication interface 1001 may be used to transmit data and/or signaling and to receive data and/or signaling. In this embodiment, the communication interface 1001 may be configured to receive a first instruction sent by a service device. Communication interface 1001 may be a transceiver.
The processor 1002 is configured to perform the respective functions of the second video processing device in the methods described in fig. 2a to fig. 4a. The processor 1002 may include one or more processors, for example, the processor 1002 may be one or more Central Processing Units (CPUs), Network Processors (NPs), hardware chips, or any combination thereof. In the case where the processor 1002 is a CPU, the CPU may be a single-core CPU or a multi-core CPU.
The memory 1003 is used to store program code and the like. The memory 1003 may include a volatile memory (volatile memory), such as a Random Access Memory (RAM); the memory 1003 may also include a non-volatile memory (non-volatile memory), such as a read-only memory (ROM), a flash memory (flash memory), a Hard Disk Drive (HDD), or a solid-state drive (SSD); the memory 1003 may also include a combination of the foregoing types of memory. It should be noted that the memory 1003 included in the second video processing apparatus 100 is merely an example and does not constitute a limitation on this embodiment of the present application; in an implementation manner, the memory 1003 may be replaced with another storage medium having a storage function.
The processor 1002 may call the program code stored in the memory 1003 to cause the second video processing apparatus 100 to perform the following operations:
determining at least two resolutions;
acquiring a video stream to be processed;
carrying out resolution adjustment on the video stream to be processed to obtain at least two paths of processed video streams; the resolution of each of the at least two processed video streams is the same as one of the at least two resolutions, and each of the at least two processed video streams has the same image content as the to-be-processed video stream.
In one implementation, the aforementioned at least two resolutions are preset.
In one implementation, when the processor 1002 calls the program code stored in the memory 1003 to cause the second video processing apparatus 100 to execute determining at least two resolutions, the second video processing apparatus 100 may specifically be caused to execute the following operations: and receiving a first instruction sent by the service equipment, wherein the first instruction is used for indicating the at least two resolutions.
In one implementation, the number of the video streams to be processed is at least two, each video stream to be processed includes multiple frames of images, and each frame of image carries an acquisition time; the processor 1002 may also call the program code stored in the memory 1003 to cause the second video processing apparatus 100 to perform the following operation: performing synchronization processing on at least two frames of images, in the at least two paths of video streams to be processed, whose acquisition times are in the same synchronization window, where after the synchronization processing, the at least two frames of images whose acquisition times are in the same synchronization window all carry the same playing time.
In one implementation, the processor 1002 may also call the program code stored in the memory 1003 to cause the second video processing apparatus 100 to: acquire video division information corresponding to each of the at least two resolutions; and for each path of processed video stream in the at least two paths of processed video streams, divide the processed video stream into multiple paths of sub-video streams according to the video division information corresponding to the resolution of the processed video stream.
Further, the processor 1002 may also call the program code stored in the memory 1003 to cause the second video processing apparatus 100 to perform the operations performed by the second video processing device in the embodiments shown in fig. 2a to fig. 4a; for details, refer to the description in the method embodiments, which is not repeated here.
The embodiment of the present application further provides a video processing system, where the video processing system includes the aforementioned first video processing apparatus shown in fig. 7 and the aforementioned second video processing apparatus shown in fig. 9, or the video processing system includes the aforementioned first video processing apparatus shown in fig. 8 and the aforementioned second video processing apparatus shown in fig. 10.
An embodiment of the present application further provides a computer-readable storage medium, which can be used to store computer software instructions used by the first video processing apparatus in the embodiment shown in fig. 7 and which contains a program designed to perform the method of the first video processing device in the foregoing embodiments.
An embodiment of the present application further provides a computer-readable storage medium, which can be used to store computer software instructions used by the second video processing apparatus in the embodiment shown in fig. 9 and which contains a program designed to perform the method of the second video processing device in the foregoing embodiments.
The computer readable storage medium includes, but is not limited to, flash memory, hard disk, solid state disk.
An embodiment of the present application further provides a computer program product which, when executed by a computing device, can perform the method designed for the first video processing device in the foregoing embodiments of fig. 5a to fig. 6.
An embodiment of the present application further provides a computer program product which, when executed by a computing device, can perform the method designed for the second video processing device in the embodiments of fig. 2a to fig. 4a.
An embodiment of the present application further provides a chip, including a processor and a memory, where the memory is used to store a computer program, and the processor is used to call the computer program from the memory and run it; the computer program is used to implement the methods in the foregoing method embodiments.
In the foregoing embodiments, the implementation may be wholly or partly realized by software, hardware, firmware, or any combination thereof. When software is used for implementation, the embodiments may be implemented wholly or partly in the form of a computer program product. The computer program product includes one or more computer programs. When the computer program is loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are wholly or partly generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer program may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer program may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, or Digital Subscriber Line (DSL)) or a wireless manner (e.g., infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a server or a data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a Digital Video Disk (DVD)), a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
Those of ordinary skill in the art will understand that the ordinal terms "first", "second", and the like mentioned in this application are used only for ease of description and are not intended to limit the scope of the embodiments of this application or to indicate an order of precedence.
In this application, "at least one" may also be described as one or more, and "at least two" may also be described as two or more. "A plurality of" may be two, three, four, or more, and this application imposes no limitation thereon. In the embodiments of this application, for a technical feature, the technical features within it are distinguished by "first", "second", "third", "A", "B", "C", "D", and the like; the technical features described with "first", "second", "third", "A", "B", "C", and "D" are in no order of precedence or size.
The correspondences shown in the tables in this application may be configured or predefined. The values of the information in the tables are merely examples and may be configured to other values, which is not limited in this application. When the correspondence between the information and the parameters is configured, it is not necessarily required that all the correspondences shown in the tables be configured. For example, the correspondences shown in some rows of the tables in this application may alternatively not be configured. For another example, appropriate modifications and adjustments, such as splitting and merging, may be made based on the foregoing tables. The names of the parameters in the tables may be other names understandable by a communication device, and the values or expressions of the parameters may be other values or expressions understandable by a communication device. When the foregoing tables are implemented, other data structures may also be used, for example, arrays, queues, containers, stacks, linear tables, pointers, linked lists, trees, graphs, structures, classes, heaps, or hash tables.
Predefinition in this application may be understood as defining, predefining, storing, pre-negotiating, pre-configuring, curing, or pre-firing.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (26)

1. A video processing method applied to a first video processing apparatus, the method comprising:
acquiring video layout parameters of a terminal, wherein the video layout parameters are used for indicating identification information of at least two paths of video streams to be displayed required by the terminal and the resolution of each path of video stream to be displayed;
acquiring the at least two paths of video streams to be displayed according to the video layout parameters;
and synthesizing the at least two video streams to be displayed into a target video stream, and displaying the target video stream on the terminal.
2. The method of claim 1, wherein the obtaining the video layout parameters of the terminal comprises:
and receiving a video stream synthesis request sent by the terminal, wherein the video stream synthesis request comprises video layout parameters of the terminal.
3. The method according to claim 1 or 2, wherein said obtaining the at least two video streams to be displayed according to the video layout parameters comprises:
sending a video stream acquisition request to a first service device, wherein the video stream acquisition request comprises identification information of the at least two paths of video streams to be displayed and the resolution of each path of video stream to be displayed;
and receiving the at least two video streams to be displayed returned by the first service equipment.
4. The method according to claim 1 or 2, wherein said obtaining the at least two video streams to be displayed according to the video layout parameters comprises:
aiming at identification information of each path of video stream to be displayed in the at least two paths of video streams to be displayed, acquiring a plurality of paths of processed video streams corresponding to the identification information of the video stream to be displayed, wherein the resolutions of the plurality of paths of processed video streams are different from each other, and each path of processed video stream in the plurality of paths of processed video streams has the same image content as the video stream to be displayed;
and taking the processed video stream with the resolution which is the same as that of the video stream to be displayed in the multi-path processed video stream as the video stream to be displayed.
5. The method according to claim 1 or 2, wherein said obtaining the at least two video streams to be displayed according to the video layout parameters comprises:
sending a sub-video stream acquisition request to second service equipment aiming at the identification information of each path of video stream to be displayed in the at least two paths of video streams to be displayed, wherein the sub-video stream acquisition request comprises the identification information and the resolution of the video stream to be displayed;
receiving a plurality of paths of sub-video streams which are returned by the second service equipment and correspond to the identification information and the resolution of the video stream to be displayed;
and synthesizing the multi-path sub-video streams into the video stream to be displayed.
6. The method according to any one of claims 1 to 5, wherein each video stream to be displayed comprises a plurality of frames of images, each frame of image carrying a playing time; the synthesizing the at least two video streams to be displayed into one target video stream includes:
and synthesizing the images with the same playing time in the at least two video streams to be displayed into a frame of target image, wherein all the target images form a path of target video stream.
7. A video processing method applied to a second video processing apparatus, the method comprising:
determining at least two resolutions;
acquiring a video stream to be processed;
performing resolution adjustment on the video stream to be processed to obtain at least two paths of processed video streams; the resolutions of the at least two processed video streams are different from each other, the resolution of each processed video stream in the at least two processed video streams is the same as one of the at least two resolutions, and each processed video stream in the at least two processed video streams has the same image content as the to-be-processed video stream.
8. The method of claim 7, wherein the at least two resolutions are preset.
9. The method of claim 7, wherein the determining at least two resolutions comprises:
and receiving a first instruction sent by a service device, wherein the first instruction is used for indicating the at least two resolutions.
10. The method according to any one of claims 7 to 9, wherein the number of the video streams to be processed is at least two, each video stream to be processed comprises a plurality of frames of images, and each frame of image carries an acquisition time;
before the performing resolution adjustment on the video stream to be processed, the method further includes:
and synchronously processing at least two frames of images, in the at least two paths of video streams to be processed, whose acquisition times are in the same synchronization window, wherein after the synchronous processing, the at least two frames of images whose acquisition times are in the same synchronization window carry the same playing time.
11. The method of any one of claims 7 to 10, further comprising:
acquiring video division information corresponding to each of the at least two resolutions;
and aiming at each path of processing video stream in the at least two paths of processing video streams, dividing the processing video stream into a plurality of paths of sub-video streams according to video dividing information corresponding to the resolution of the processing video stream.
12. A first video processing apparatus, comprising:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring video layout parameters of a terminal, and the video layout parameters are used for indicating identification information of at least two paths of video streams to be displayed required to be displayed by the terminal and the resolution of each path of video stream to be displayed;
the acquisition module is further configured to acquire the at least two video streams to be displayed according to the video layout parameters;
and the processing module is used for synthesizing the at least two video streams to be displayed into a target video stream and displaying the target video stream on the terminal.
13. The apparatus of claim 12,
the acquiring module is configured to, when acquiring the video layout parameters of the terminal, specifically receive a video stream composition request sent by the terminal, wherein the video stream composition request includes the video layout parameters of the terminal.
14. The apparatus of claim 12 or 13,
the acquiring module is configured to, when acquiring the at least two to-be-displayed video streams according to the video layout parameter, specifically send a video stream acquiring request to a first service device, where the video stream acquiring request includes identification information of the at least two to-be-displayed video streams and a resolution of each to-be-displayed video stream; and receiving the at least two video streams to be displayed returned by the first service equipment.
15. The apparatus of claim 12 or 13,
the acquiring module is configured to, when acquiring the at least two to-be-displayed video streams according to the video layout parameter, specifically, acquire, for identification information of each to-be-displayed video stream in the at least two to-be-displayed video streams, a plurality of processed video streams corresponding to the identification information of the to-be-displayed video streams, where resolutions of the plurality of processed video streams are different from each other, and each processed video stream in the plurality of processed video streams and the to-be-displayed video stream have the same image content; and taking the processed video stream with the resolution which is the same as that of the video stream to be displayed in the multi-path processed video stream as the video stream to be displayed.
16. The apparatus of claim 12 or 13,
the acquiring module is configured to, when acquiring the at least two paths of video streams to be displayed according to the video layout parameter, specifically, send a sub-video stream acquiring request to a second service device for identification information of each path of video stream to be displayed in the at least two paths of video streams to be displayed, where the sub-video stream acquiring request includes the identification information and resolution of the video stream to be displayed; receiving a plurality of paths of sub-video streams which are returned by the second service equipment and correspond to the identification information and the resolution of the video stream to be displayed; and synthesizing the multi-path sub-video streams into the video stream to be displayed.
17. The apparatus according to any one of claims 12 to 16, wherein each video stream to be displayed comprises a plurality of frames of images, each frame of image carrying a playing time;
and the processing module is used for synthesizing the at least two to-be-displayed video streams into one target video stream, and is specifically used for synthesizing images with the same playing time in the at least two to-be-displayed video streams into one frame of target image, wherein all the target images form one target video stream.
18. A second video processing apparatus, comprising:
a determining module for determining at least two resolutions;
the acquisition module is used for acquiring a video stream to be processed;
the resolution adjustment module is used for adjusting the resolution of the video stream to be processed to obtain at least two paths of processed video streams; the resolutions of the at least two processed video streams are different from each other, the resolution of each processed video stream in the at least two processed video streams is the same as one of the at least two resolutions, and each processed video stream in the at least two processed video streams has the same image content as the to-be-processed video stream.
19. The apparatus of claim 18, wherein the at least two resolutions are preset.
20. The apparatus of claim 18,
the determining module is configured to, when determining at least two resolutions, specifically receive a first instruction sent by a service device, where the first instruction is used to indicate the at least two resolutions.
21. The apparatus according to any one of claims 18 to 20, wherein the number of the video streams to be processed is at least two, each video stream to be processed comprises a plurality of frames of images, and each frame of image carries an acquisition time;
the apparatus further comprises a processing module, configured to perform synchronization processing on at least two frames of images, in the at least two paths of video streams to be processed, whose acquisition times are in the same synchronization window, wherein after the synchronization processing, the at least two frames of images whose acquisition times are in the same synchronization window all carry the same playing time.
22. The apparatus of any of claims 18 to 21, wherein the second video processing apparatus further comprises a dividing module;
the acquisition module is further configured to acquire video partition information corresponding to each of the at least two resolutions;
the dividing module is configured to, for each of the at least two processed video streams, divide the processed video stream into multiple sub-video streams according to video dividing information corresponding to a resolution of the processed video stream.
23. A first video processing apparatus, comprising a processor and a storage medium storing instructions that, when executed by the processor, cause the apparatus to perform the method of any of claims 1 to 6.
24. A second video processing apparatus, comprising a processor and a storage medium storing instructions that, when executed by the processor, cause the apparatus to perform the method of any of claims 7 to 11.
25. A video processing system comprising a first video processing apparatus according to any of claims 12 to 17 and a second video processing apparatus according to any of claims 18 to 22, or comprising a first video processing apparatus according to claim 23 and a second video processing apparatus according to claim 24.
26. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to carry out the method according to any one of claims 1 to 11.
CN202010076016.6A 2020-01-22 2020-01-22 Video processing method and device Pending CN113163214A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010076016.6A CN113163214A (en) 2020-01-22 2020-01-22 Video processing method and device
PCT/CN2021/071220 WO2021147702A1 (en) 2020-01-22 2021-01-12 Video processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010076016.6A CN113163214A (en) 2020-01-22 2020-01-22 Video processing method and device

Publications (1)

Publication Number Publication Date
CN113163214A true CN113163214A (en) 2021-07-23

Family

ID=76882048

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010076016.6A Pending CN113163214A (en) 2020-01-22 2020-01-22 Video processing method and device

Country Status (2)

Country Link
CN (1) CN113163214A (en)
WO (1) WO2021147702A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7197070B1 (en) * 2001-06-04 2007-03-27 Cisco Technology, Inc. Efficient systems and methods for transmitting compressed video data having different resolutions
CN101159866A (en) * 2007-06-28 2008-04-09 武汉恒亿电子科技发展有限公司 Multiple speed transmission digital video data method
CN101257607B (en) * 2008-03-12 2010-06-09 中兴通讯股份有限公司 Multiple-picture processing system and method for video conference
US20110292161A1 (en) * 2010-05-25 2011-12-01 Vidyo, Inc. Systems And Methods For Scalable Video Communication Using Multiple Cameras And Multiple Monitors
CN101977305A (en) * 2010-10-27 2011-02-16 北京中星微电子有限公司 Video processing method, device and system
US10810701B2 (en) * 2016-02-09 2020-10-20 Sony Interactive Entertainment Inc. Video display system
US10482574B2 (en) * 2016-07-06 2019-11-19 Gopro, Inc. Systems and methods for multi-resolution image stitching

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101753978A (en) * 2009-12-31 2010-06-23 中兴通讯股份有限公司 Method for realizing multi-screen business fusion and system thereof
CN202799004U (en) * 2012-06-04 2013-03-13 深圳市景阳科技股份有限公司 Video playback terminal and video playback system
CN103780920A (en) * 2012-10-17 2014-05-07 华为技术有限公司 Method and device for processing video bit-streams
CN105792021A (en) * 2014-12-26 2016-07-20 乐视网信息技术(北京)股份有限公司 Method and device for transmitting video stream
CN105338424A (en) * 2015-10-29 2016-02-17 努比亚技术有限公司 Video processing method and system
CN105872569A (en) * 2015-11-27 2016-08-17 乐视云计算有限公司 Video playing method and system, and devices
CN109429037A (en) * 2017-09-01 2019-03-05 杭州海康威视数字技术股份有限公司 A kind of image processing method, device, equipment and system
CN108134918A (en) * 2018-01-30 2018-06-08 苏州科达科技股份有限公司 Method for processing video frequency, device and multipoint video processing unit, conference facility
CN109688483A (en) * 2018-12-17 2019-04-26 北京爱奇艺科技有限公司 A kind of method, apparatus and electronic equipment obtaining video
CN110401820A (en) * 2019-08-15 2019-11-01 北京迈格威科技有限公司 Multipath video processing method, device, medium and electronic equipment

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113518260A (en) * 2021-09-14 2021-10-19 腾讯科技(深圳)有限公司 Video playing method and device, electronic equipment and computer readable storage medium
CN113824920A (en) * 2021-09-30 2021-12-21 联想(北京)有限公司 Processing method and device
WO2023070362A1 (en) * 2021-10-27 2023-05-04 京东方科技集团股份有限公司 Display control method and apparatus, and display device and computer-readable medium
CN114222162A (en) * 2021-12-07 2022-03-22 浙江大华技术股份有限公司 Video processing method, video processing device, computer equipment and storage medium
CN114222162B (en) * 2021-12-07 2024-04-12 浙江大华技术股份有限公司 Video processing method, device, computer equipment and storage medium
CN114172873A (en) * 2021-12-13 2022-03-11 中国平安财产保险股份有限公司 Resolution adjustment method, resolution adjustment device, server and computer-readable storage medium
CN114172873B (en) * 2021-12-13 2023-05-30 中国平安财产保险股份有限公司 Resolution adjustment method, resolution adjustment device, server and computer readable storage medium
CN115484494A (en) * 2022-09-15 2022-12-16 云控智行科技有限公司 Method, device and equipment for processing digital twin video stream
CN115484494B (en) * 2022-09-15 2024-04-02 云控智行科技有限公司 Digital twin video stream processing method, device and equipment

Also Published As

Publication number Publication date
WO2021147702A1 (en) 2021-07-29

Similar Documents

Publication Publication Date Title
CN113163214A (en) Video processing method and device
US11632571B2 (en) Media data processing method and apparatus
US20150208103A1 (en) System and Method for Enabling User Control of Live Video Stream(s)
CN112073648B (en) Video multi-picture synthesis method and device, computer equipment and storage medium
KR20160079357A (en) Method for sending video in region of interest from panoramic-video, server and device
US20180249047A1 (en) Compensation for delay in ptz camera system
US20190238933A1 (en) Video stream transmission method and related device and system
US11539983B2 (en) Virtual reality video transmission method, client device and server
CN110035316B (en) Method and apparatus for processing media data
US20200145736A1 (en) Media data processing method and apparatus
US11290752B2 (en) Method and apparatus for providing free viewpoint video
US10728583B2 (en) Multimedia information playing method and system, standardized server and live broadcast terminal
US20220182687A1 (en) Method for providing and method for acquiring immersive media, apparatus, device, and storage medium
CN111343415A (en) Data transmission method and device
KR20190038134A (en) Live Streaming Service Method and Server Apparatus for 360 Degree Video
KR20180038256A (en) Method, and system for compensating delay of virtural reality stream
CN108810567B (en) Audio and video visual angle matching method, client and server
CN110741648A (en) Transmission system for multi-channel portrait and control method thereof, multi-channel portrait playing method and device thereof
WO2023029252A1 (en) Multi-viewpoint video data processing method, device, and storage medium
CN108574881B (en) Projection type recommendation method, server and client
CN113905186B (en) Free viewpoint video picture splicing method, terminal and readable storage medium
CN114866829A (en) Synchronous playing control method and device
US20240107110A1 (en) Changing video tracks in immersive videos
CN117014723A (en) Video data transmission method, terminal, network device, system and electronic device
CN115086696A (en) Video playing control method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210723