WO2021147702A1 - A video processing method and device - Google Patents

A video processing method and device

Info

Publication number
WO2021147702A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
video stream
displayed
processed
streams
Prior art date
Application number
PCT/CN2021/071220
Other languages
English (en)
French (fr)
Inventor
郑洛
王志兵
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2021147702A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21: Server components or server architectures
    • H04N21/218: Source of audio or video content, e.g. local disk arrays
    • H04N21/21805: Source of audio or video content enabling multiple viewpoints, e.g. using a plurality of cameras
    • H04N21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23424: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N21/2343: Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234363: Reformatting by altering the spatial resolution, e.g. for clients with a lower screen resolution
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440263: Reformatting by altering the spatial resolution, e.g. for displaying on a connected PDA
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • This application relates to the field of multimedia technology, and in particular to a video processing method and device.
  • the scene can be photographed by cameras placed at different positions or angles, that is, multi-camera photography.
  • with multi-camera shooting, a more comprehensive and clearer view of the scene can be obtained.
  • in the existing solution, the director selects one picture from the pictures taken at different camera positions and pushes it to the terminal for display, so the terminal cannot display multiple pictures at the same time.
  • the embodiments of the present application provide a video processing method and device, which are conducive to displaying a target video stream synthesized from at least two video streams to be displayed in a terminal.
  • the picture presented in the terminal is formed by splicing together at least two sub-pictures.
  • in a first aspect, an embodiment of the present application provides a video processing method applied to a first video processing device. The method includes: obtaining video layout parameters of a terminal, where the video layout parameters indicate the identification information of the at least two to-be-displayed video streams that the terminal needs to display and the resolution of each to-be-displayed video stream; obtaining the at least two to-be-displayed video streams according to the video layout parameters; combining the at least two to-be-displayed video streams into one target video stream; and displaying the target video stream on the terminal.
  • since the target video stream is a single video stream, the terminal only needs to perform the decapsulation operation once and needs only one video player to display multiple sub-pictures.
  • a specific implementation manner for obtaining the video layout parameters of the terminal may be: receiving a video stream composition request sent by the terminal, where the video stream composition request includes the video layout parameters of the terminal.
  • the identification information or the resolution of the video streams that the terminal needs to display can change, and the first video processing device can obtain, according to the video layout parameters sent by the terminal, the to-be-displayed video streams that the terminal currently needs to display, thereby better meeting the needs of the terminal user.
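The video stream composition request described above can be sketched as a small data structure. All field and class names below (`stream_id`, `width`, `height`, `CompositionRequest`) are illustrative assumptions, not terms taken from the application:

```python
from dataclasses import dataclass, field

@dataclass
class StreamLayout:
    """One to-be-displayed video stream entry (field names are illustrative)."""
    stream_id: str   # identification information of the video stream
    width: int       # requested resolution in pixels
    height: int

@dataclass
class CompositionRequest:
    """Video stream composition request carrying the terminal's layout parameters."""
    layouts: list = field(default_factory=list)

    def add(self, stream_id: str, width: int, height: int) -> None:
        self.layouts.append(StreamLayout(stream_id, width, height))

# The terminal asks for two sub-pictures: a large main view and a smaller one.
req = CompositionRequest()
req.add("cam-1", 1280, 720)
req.add("cam-2", 640, 360)
assert len(req.layouts) == 2
```

When the user changes the layout, the terminal would simply send a new request of this shape, and the first video processing device re-fetches accordingly.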
  • a specific implementation manner of obtaining the aforementioned at least two video streams to be displayed may be: sending a video stream acquisition request to the first service device, where the request identifies the aforementioned at least two to-be-displayed video streams.
  • the number of first service devices may be multiple, and the different to-be-displayed video streams obtained by the first video processing device may come from different first service devices. In this way, different to-be-displayed video streams can be obtained in parallel from different first service devices, which helps improve the efficiency of obtaining the at least two to-be-displayed video streams.
  • a specific implementation manner of obtaining the aforementioned at least two to-be-displayed video streams may be: for the identification information of each of the at least two to-be-displayed video streams, obtaining the multiple processed video streams corresponding to that identification information, where the resolutions of the processed video streams differ from each other and each processed video stream has the same image content as the video stream to be displayed; the processed video stream whose resolution is the same as that of the to-be-displayed video stream is used as the to-be-displayed video stream.
  • the multiple processed video streams can be stored in a local database.
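Selecting the to-be-displayed video stream from the stored processed streams then reduces to a lookup by resolution. A minimal sketch, assuming the processed streams are keyed by a (width, height) tuple (the structure and names are illustrative):

```python
def select_processed_stream(processed_streams, wanted_resolution):
    """Return the processed stream whose resolution matches the request.

    `processed_streams` maps (width, height) -> an opaque stream handle;
    this mapping structure is an illustrative assumption.
    """
    try:
        return processed_streams[wanted_resolution]
    except KeyError:
        raise LookupError(f"no processed stream at {wanted_resolution}") from None

# Renditions of one source stream, e.g. as stored in a local database.
renditions = {
    (1920, 1080): "cam1-1080p",
    (1280, 720): "cam1-720p",
    (640, 360): "cam1-360p",
}
assert select_processed_stream(renditions, (1280, 720)) == "cam1-720p"
```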
  • a specific implementation manner for obtaining the aforementioned at least two video streams to be displayed may be: for the identification information of each video stream to be displayed, sending a sub-video stream acquisition request to the second service device, where the request includes the identification information and resolution of the to-be-displayed video stream; receiving the multiple sub-video streams corresponding to that identification information and resolution returned by the second service device; and synthesizing the multiple sub-video streams into the to-be-displayed video stream.
  • the multiple sub-video streams can be synthesized to obtain a complete to-be-displayed video stream. Then the synthesized at least two to-be-displayed video streams can be synthesized into a target video stream that the user wants to display on the terminal, and the picture presented in the terminal when the target video stream is displayed is formed by splicing at least two sub-pictures. In this way, the user can watch at least two sub-screens in the terminal at the same time.
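The splicing of sub-video streams back into a complete picture can be illustrated on raw pixel grids. This sketch assumes a regular grid of equally sized tiles, which is only one possible division scheme, not the one mandated by the application:

```python
def stitch_tiles(tiles, tiles_per_row):
    """Splice per-tile frames (2-D pixel lists) back into one full frame.

    Assumes every tile has identical dimensions and the frame was divided
    into a regular row-major grid (an illustrative assumption).
    """
    rows = []
    for start in range(0, len(tiles), tiles_per_row):
        band = tiles[start:start + tiles_per_row]
        for y in range(len(band[0])):  # walk pixel rows of this band of tiles
            rows.append([px for tile in band for px in tile[y]])
    return rows

# Four 2x2 tiles labelled by tile index -> one 4x4 frame.
tiles = [[[i, i], [i, i]] for i in range(4)]
frame = stitch_tiles(tiles, tiles_per_row=2)
assert frame == [[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]
```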
  • a specific implementation manner for obtaining the aforementioned at least two video streams to be displayed may be: for the identification information of each video stream to be displayed, sending an index acquisition request carrying the identification information to the third service device, and receiving the indexes of the processed video streams corresponding to the identification information, together with the resolution of each processed video stream, returned by the third service device; determining a target index among the indexes of the processed video streams, where the resolution of the processed video stream corresponding to the target index is the same as the resolution of the video stream to be displayed; sending a stream acquisition request carrying the target index to the third service device, receiving the processed video stream corresponding to the target index returned by the third service device, and using that processed video stream as the to-be-displayed video stream.
  • first obtaining the index of each processed video stream and the resolution of the processed video stream corresponding to each index, and only then fetching the video stream to be displayed, can reduce the amount of data transmitted between the first video processing device and the third service device.
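The two-step exchange (fetch the small index listing first, then fetch only the one matching stream) can be sketched as follows; the listing format of (index, resolution) pairs is an illustrative assumption:

```python
def pick_target_index(index_listing, wanted_resolution):
    """Choose the target index from the listing returned by the third
    service device: the index whose processed stream has the wanted
    resolution. Returns None when no resolution matches.
    """
    for index, resolution in index_listing:
        if resolution == wanted_resolution:
            return index
    return None

# Index listing for one stream identifier (illustrative values).
listing = [("idx-0", (1920, 1080)), ("idx-1", (1280, 720)), ("idx-2", (640, 360))]
assert pick_target_index(listing, (640, 360)) == "idx-2"
```

Only the compact listing travels in the first round trip; the full processed video stream is requested once, by its target index.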
  • each video stream to be displayed includes multiple frames of images, and each frame of image carries a playback time; the specific implementation of synthesizing the aforementioned at least two video streams to be displayed into one target video stream may be: synthesizing the images with the same playback time in the at least two video streams to be displayed into one target image, with all target images forming the target video stream.
  • the images with the same playback time are the images collected at the same moment. In this way, it can be ensured that the frames of images that make up each target image were collected at the same time, that is, when the target video stream is displayed in the terminal, the multiple sub-pictures displayed at the same time in the terminal show the same moment.
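Grouping frames by playback time before compositing can be sketched as below, where each stream is a list of (playback_time, image) pairs; this data layout is an assumption for illustration:

```python
def build_target_frames(streams):
    """Group frames that share a playback time across streams.

    `streams` is a list of frame lists, each frame a (playback_time, image)
    pair; only times present in every stream yield a target frame.
    """
    by_time = [dict(stream) for stream in streams]
    common = sorted(set.intersection(*(set(d) for d in by_time)))
    # Each target frame splices the per-stream images for one playback time.
    return [(t, [d[t] for d in by_time]) for t in common]

s1 = [(0, "a0"), (40, "a1"), (80, "a2")]
s2 = [(0, "b0"), (40, "b1")]
targets = build_target_frames([s1, s2])
assert targets == [(0, ["a0", "b0"]), (40, ["a1", "b1"])]
```

The list of images per target frame stands in for the actual spatial splicing into one target image.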
  • the aforementioned at least two to-be-displayed video streams may include a first to-be-displayed video stream and a second to-be-displayed video stream; if the resolution of the first to-be-displayed video stream is higher than that of the second, the display area occupied by the first to-be-displayed video stream in the terminal may be larger than the display area occupied by the second.
  • in this way, the video stream that occupies a larger display area in the terminal can have a higher resolution, that is, it appears clearer.
  • in a second aspect, the embodiments of the present application provide another video processing method, applied to a second video processing device. The method includes: determining at least two resolutions; obtaining a video stream to be processed; and adjusting the resolution of the to-be-processed video stream to obtain at least two processed video streams, where the resolutions of the at least two processed video streams differ from each other, the resolution of each processed video stream is the same as one of the at least two resolutions, and each processed video stream has the same image content as the to-be-processed video stream.
  • the resolution of the video stream to be processed is adjusted to obtain at least two processed video streams with different resolutions but the same image content, which helps better adapt to the terminal's resolution requirements for displaying the video stream.
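Producing several processed streams with different resolutions but the same image content amounts to rescaling each frame. A nearest-neighbour sketch on plain pixel lists (a real system would use a codec-aware scaler; this only illustrates the idea):

```python
def scale_frame(frame, out_w, out_h):
    """Nearest-neighbour rescale of one frame, given as a 2-D pixel list."""
    in_h, in_w = len(frame), len(frame[0])
    return [[frame[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
            for y in range(out_h)]

def make_renditions(frame, resolutions):
    """One processed frame per requested (width, height) resolution."""
    return {(w, h): scale_frame(frame, w, h) for (w, h) in resolutions}

src = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
out = make_renditions(src, [(4, 4), (2, 2)])
assert out[(4, 4)] == src                 # full resolution: unchanged content
assert out[(2, 2)] == [[1, 3], [9, 11]]   # half resolution: same image, fewer pixels
```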
  • the method may further include: sending the foregoing at least two processed video streams to one or more first service devices, where at least one processed video stream exists on each first service device.
  • the aforementioned at least two resolutions are preset.
  • a specific implementation manner for determining the at least two resolutions may be: receiving a first instruction sent by the service device, where the first instruction is used to indicate the aforementioned at least two resolutions.
  • the number of the to-be-processed video streams is at least two, each to-be-processed video stream includes multiple frames of images, and each frame of image carries an acquisition time; the method may further include: performing synchronization processing on at least two frames of images whose acquisition times fall within the same synchronization window in the to-be-processed video streams, so that after synchronization processing those frames carry the same playback time.
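One simple synchronization-window policy is to snap every acquisition time to the start of its window, so that frames captured within the same window share one playback time. Both the snap-to-window-start policy and the millisecond units are illustrative assumptions:

```python
def synchronize(acquisition_times, window_ms):
    """Map each acquisition time (ms) to a shared playback time: the start
    of the synchronization window the acquisition time falls into.
    Returns (acquisition_time, playback_time) pairs.
    """
    return [(t, (t // window_ms) * window_ms) for t in acquisition_times]

# Acquisition times (ms) of frames from several cameras, 40 ms window.
acq = [0, 13, 38, 41, 55]
sync = synchronize(acq, window_ms=40)
# Frames at 0, 13 and 38 ms land in the same window and play together.
assert [p for _, p in sync] == [0, 0, 0, 40, 40]
```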
  • the method may further include: acquiring video division information corresponding to each of the aforementioned at least two resolutions; and, for each of the at least two processed video streams, dividing the processed video stream into multiple sub-video streams according to the video division information corresponding to its resolution.
  • in this way, a complete processed video stream can be made up of multiple sub-video streams, and the different sub-video streams that make up the same processed video stream can be sent to multiple second service devices. When the terminal needs to display the processed video stream, the different sub-video streams composing it can be obtained in parallel from different second service devices, thereby helping to improve the efficiency of obtaining the processed video stream.
  • the method may further include: sending the multiple sub-video streams to one or more second service devices, where at least one sub-video stream exists on each second service device.
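Dividing a processed frame into sub-video-stream tiles is the inverse of the splicing step. This sketch stands in for the application's video division information with a simple regular grid, which is only one of many possible division schemes:

```python
def divide_frame(frame, tiles_x, tiles_y):
    """Divide one processed frame (2-D pixel list) into tiles_x * tiles_y
    sub-frames, returned in row-major order. The regular grid is an
    illustrative stand-in for the video division information.
    """
    h, w = len(frame), len(frame[0])
    th, tw = h // tiles_y, w // tiles_x
    tiles = []
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            tiles.append([row[tx * tw:(tx + 1) * tw]
                          for row in frame[ty * th:(ty + 1) * th]])
    return tiles

frame = [[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]
tiles = divide_frame(frame, tiles_x=2, tiles_y=2)
assert tiles[0] == [[0, 0], [0, 0]] and tiles[3] == [[3, 3], [3, 3]]
```

Each tile would then be encoded as its own sub-video stream and distributed across the second service devices.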
  • an embodiment of the present application provides a first video processing device, which is a first video processing device or a device (such as a chip) having the function of the first video processing device.
  • the device has the function of realizing the video processing method provided in the first aspect, and the function is realized by hardware or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • an embodiment of the present application provides a second video processing device, which is a second video processing device or a device (such as a chip) having the function of the second video processing device.
  • the device has the function of realizing the video processing method provided by the second aspect, and the function is realized by hardware or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • an embodiment of the present application provides another first video processing device, which is a first video processing device or a device (such as a chip) having the function of the first video processing device.
  • the device includes a processor and a storage medium.
  • the storage medium stores instructions. When the instructions are executed by the processor, the device realizes the video processing method provided in the first aspect.
  • an embodiment of the present application provides another second video processing device.
  • the device is a second video processing device or a device (such as a chip) with the function of a second video processing device.
  • the device includes a processor and a storage medium. An instruction is stored in the storage medium, and when the instruction is executed by the processor, the device realizes the video processing method provided in the second aspect.
  • an embodiment of the present application provides a video processing system.
  • the video processing system includes the first video processing device described in the third aspect and the second video processing device described in the fourth aspect, or it includes the first video processing device described in the fifth aspect and the second video processing device described in the sixth aspect.
  • an embodiment of the present application provides a computer-readable storage medium for storing the computer program instructions used by the first video processing device described in the third aspect, which include the programs involved in executing the method of the first aspect.
  • an embodiment of the present application provides a computer-readable storage medium for storing the computer program instructions used by the second video processing device described in the fourth aspect, which include the programs involved in executing the method of the second aspect.
  • an embodiment of the present application provides a computer program product.
  • the program product includes a program.
  • when the program is executed by a first video processing device, the device implements the method described in the first aspect.
  • an embodiment of the present application provides a computer program product.
  • the program product includes a program.
  • when the program is executed by a second video processing device, the device implements the method described in the second aspect.
  • FIG. 1 is a schematic diagram of the architecture of a video processing system disclosed in an embodiment of the present application
  • FIG. 2a is a schematic flowchart of a video processing method disclosed in an embodiment of the present application
  • FIG. 2b is a schematic diagram of a scene where the resolution of an image to be processed in a video stream to be processed is adjusted according to an embodiment of the present application;
  • FIG. 3a is a schematic flowchart of another video processing method disclosed in an embodiment of the present application.
  • FIG. 3b is a schematic diagram of a scene for performing synchronization processing on image 1, image 2 and image 3 disclosed in an embodiment of the present application;
  • FIG. 4a is a schematic flowchart of another video processing method disclosed in an embodiment of the present application.
  • FIG. 4b is a schematic diagram of a scene for dividing processed video streams disclosed in an embodiment of the present application.
  • FIG. 5a is a schematic flowchart of another video processing method disclosed in an embodiment of the present application.
  • FIG. 5b is a schematic diagram of a scene in which a video stream to be displayed 1, a video stream to be displayed 2, and a video stream to be displayed 3 are combined into a target video stream according to an embodiment of the present application;
  • FIG. 6 is a schematic flowchart of another video processing method disclosed in an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a first video processing device disclosed in an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of another first video processing device disclosed in an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a second video processing device disclosed in an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of another second video processing device disclosed in an embodiment of the present application.
  • Resolution can be subdivided into display resolution, image resolution, printing resolution, scanning resolution, and so on.
  • display resolution is also called screen resolution.
  • display resolution refers to how many pixels the display can display.
  • when the display resolution is fixed, the smaller the display screen, the clearer the image.
  • when the display size is fixed, the higher the display resolution, the clearer the image.
  • Image resolution can refer to the number of pixels contained in a unit inch. The resolution mentioned in the embodiments of this application may refer to the image resolution.
  • the resolution can be expressed by the number of pixels in each direction.
  • the resolution of image 1 is 640x480, which means: image 1 has 640 pixels in the width direction, and image 1 has 480 pixels in the height direction.
  • the resolution can also be expressed in pixels per inch (ppi) and the width and height of the image.
  • the resolution of image 2 is 72ppi and 8x6 inches means: the width of image 2 is 8 inches, the height is 6 inches, and each inch includes 72 pixels. It should be noted that the embodiments of the present application do not limit the format adopted for the resolution.
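For the ppi form, the pixel dimensions follow directly from the physical size, which makes the two notations interchangeable:

```python
def pixel_dimensions(ppi, width_in, height_in):
    """Convert a ppi-plus-physical-size resolution into pixel counts."""
    return ppi * width_in, ppi * height_in

# Image 2 from the example above: 72 ppi at 8 x 6 inches.
assert pixel_dimensions(72, 8, 6) == (576, 432)
```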
  • FIG. 1 is a schematic structural diagram of a video processing system disclosed in an embodiment of the present application.
  • the video processing system includes: multiple video capture devices 101, a second video processing device 102, a service device 103, a first video processing device 104, and a terminal device 105.
  • each video capture device 101 may be used to capture a video stream to be processed, and send the captured video stream to be processed to the second video processing device 102.
  • the to-be-processed video streams captured by different video capture devices 101 differ. As shown in FIG. 1, the to-be-processed video stream 1 captured by one video capture device 101 differs from the to-be-processed video stream 2 captured by the other video capture device 101. Different to-be-processed video streams may refer to video streams containing different image content. It is understandable that the to-be-processed video stream received by the second video processing device 102 may be a video stream suitable for network transmission obtained after encoding by the video capture device 101.
  • the second video processing device 102 may be used to obtain at least two resolutions, and adjust the resolution of each (decoded) video stream to be processed according to the at least two resolutions. After adjusting the resolution of each video stream to be processed, at least two processed video streams with different resolutions can be obtained, and each processed video stream has the same image content as the video stream to be processed.
  • the number of processed video streams obtained after adjusting the resolution of each video stream to be processed may be the same as the number of the aforementioned at least two resolutions, and the resolution of each obtained processed video stream may be the same as one of the aforementioned at least two resolutions.
  • the second video processing device 102 may send at least two processed video streams corresponding to each to-be-processed video stream to the service device 103.
  • the video processing method disclosed in the embodiment of the present application may be applied to a live broadcast scene or a non-live broadcast scene.
  • the service device 103 in FIG. 1 may be a storage device or a distribution device.
  • the service device in FIG. 1 may be a distribution device, and the distribution device may be used to receive at least two processed video streams corresponding to each video stream to be processed.
  • the service device 103 in FIG. 1 may be a storage device, and the storage device may be used to store the identification information of each to-be-processed video stream in association with the at least two processed video streams corresponding to that to-be-processed video stream.
  • the terminal device 105 can simultaneously display multiple video streams in its display device.
  • the terminal device 105 can be triggered to generate a video stream synthesis request through user operation.
  • the video stream composition request may include the video layout parameters of the terminal device 105, and the video layout parameters may be used to indicate the identification information of the at least two to-be-displayed video streams that the terminal device 105 needs to display and the resolution of each to-be-displayed video stream.
  • the terminal device 105 may send the video stream synthesis request to the first video processing device 104.
  • the first video processing device 104 may send a video stream acquisition request to the service device 103 to request to acquire at least two to-be-displayed video streams that the terminal device 105 needs to display.
• the distribution device may include a central distribution device and multiple edge distribution devices. The central distribution device may be used to receive the at least two processed video streams corresponding to each to-be-processed video stream sent by the second video processing device 102, and to send the at least two processed video streams corresponding to each to-be-processed video stream to each edge distribution device.
  • the edge distribution device may be used to respond to the video stream acquisition request sent by the first video processing device 104 nearby.
  • the central distribution device may be an origin server in a content delivery network (CDN), and the edge distribution device may be a cache server in a CDN.
  • CDN content delivery network
  • the first video processing device 104 may synthesize the at least two to-be-displayed video streams into one target video stream, and send the target video stream to the terminal device 105, To display the target video stream on the terminal device 105.
• the target video stream displayed on the terminal device 105 is composed of at least two to-be-displayed video streams, and the picture presented by the terminal device 105 when the target video stream is displayed is spliced from at least two sub-pictures. Therefore, the user can watch multiple sub-pictures on the terminal device 105 at the same time.
• the video capture device 101 may be an entity with a video capture function, for example, a camera, a video camera, a scanner, or another device with a video capture function (a mobile phone, a tablet computer, etc.).
  • the display device may be a display screen with an image output function.
  • the video processing system shown in FIG. 1 may also include a sound collection device corresponding to each video collection device.
  • Both the second video processing device 102 and the first video processing device 104 may be composed of a processor, a memory, and a network interface. Specifically, both the second video processing device 102 and the first video processing device 104 may be servers.
  • the terminal device 105 may be an entity on the user side for receiving or transmitting signals, such as a mobile phone.
• the terminal device may also be called a terminal, user equipment (UE), a mobile station (MS), a mobile terminal (MT), and so on.
• the terminal device can be a mobile phone, a smart TV, a wearable device, a tablet computer (Pad), a computer with a wireless transceiver function, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, and so on.
  • the embodiments of the present application do not limit the specific technology and specific device form adopted by the terminal device.
  • both the second video processing device 102 and the first video processing device 104 in FIG. 1 are used as independent devices for example only, and do not constitute a limitation to the embodiment of the present application.
  • the second video processing device 102 may be integrated in the video capture device 101 or integrated in the service device 103.
  • the first video processing device 104 may be integrated in the terminal device 105 or integrated in the service device 103.
  • the steps performed by the second video processing device 102 may be performed by the video capture device 101 or the service device 103 instead, and the steps performed by the first video processing device 104 may be performed by the terminal device 105 or the service device 103 instead.
  • the video processing system shown in FIG. 1 includes two video capture devices 101 only for example, and does not constitute a limitation to the embodiment of the present application. In other feasible implementation manners, the video processing system may include more than two video capture devices.
  • FIG. 2a is a schematic flowchart of a video processing method provided by an embodiment of the present application.
• This method describes in detail how to adjust the resolution of the video stream to be processed to obtain at least two processed video streams with different resolutions but the same image content.
  • the execution subject of steps S201 to S203 is the second video processing device, or a chip in the second video processing device.
  • the following takes the second video processing device as the execution subject of the video processing method as an example for description.
  • the method may include but is not limited to the following steps:
  • Step S201 The second video processing device determines at least two resolutions.
  • the at least two resolutions determined by the second video processing device may be the resolution supported by the terminal when the video stream is displayed, or may be the resolution desired by the user when the video stream is displayed in the terminal.
  • the at least two resolutions determined by the second video processing device may be different from each other.
  • the aforementioned at least two resolutions may be preset.
  • the second video processing device may preset the aforementioned at least two resolutions according to user operations.
  • the second video processing device may receive a first instruction sent by the service device, and the first instruction may be used to indicate the aforementioned at least two resolutions.
  • the first video processing device sends a video stream acquisition request to the service device to request to acquire at least two to-be-displayed video streams that the terminal device needs to display.
• the video stream acquisition request may include the resolution of each of the at least two to-be-displayed video streams that the terminal device needs to display.
• After the service device receives the video stream acquisition request from the first video processing device, if it determines that the resolutions in this video stream acquisition request differ from the resolutions in the video stream acquisition request (from the first video processing device) received last time, the service device may send the aforementioned first instruction to the second video processing device.
• the service device can receive video stream acquisition requests sent by multiple first video processing devices, and may send the aforementioned first instruction to the second video processing device only when the resolutions have changed in most of the received video stream acquisition requests.
  • the first video processing device sends the video stream acquisition request to the service device after receiving the video stream synthesis request from the terminal device.
• both the video stream synthesis request and the video stream acquisition request may include the identifier of the terminal device. If the identifier of the terminal device is a preset device identifier, the service device may send the aforementioned first instruction to the second video processing device.
• the preset device identifier may be the identifier of a terminal device that has the authority to adjust the resolution.
• the service device sends the aforementioned first instruction to the second video processing device only when it determines that the terminal device's identifier is the preset device identifier, which avoids frequently sending the first instruction to the second video processing device. Accordingly, this reduces the probability that the second video processing device receives multiple first instructions within a short time, and thus helps avoid the second video processing device frequently re-determining the resolutions.
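The gating described above can be sketched as follows. This sketch combines two of the alternative conditions mentioned (the requested resolutions changed since the last request, and the terminal's identifier is a preset one); all names are illustrative assumptions:

```python
def should_send_first_instruction(request, last_resolutions, allowed_ids):
    """Forward the first instruction to the second video processing
    device only if the requesting terminal holds a preset identifier
    AND the requested resolutions differ from the previous request's."""
    if request["terminal_id"] not in allowed_ids:
        return False
    return set(request["resolutions"]) != set(last_resolutions)

# Resolutions changed and the terminal holds the preset identifier:
decision = should_send_first_instruction(
    {"terminal_id": "t1", "resolutions": [(500, 500)]},
    last_resolutions=[(1000, 1000)],
    allowed_ids={"t1"},
)
```

An unauthorised terminal, or a request repeating the previous resolutions, would yield `False` and no first instruction would be sent.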
  • the second video processing device determines at least two resolutions for adjusting the resolution of the video stream to be processed.
  • Step S202 The second video processing device obtains the to-be-processed video stream.
  • the number of video streams to be processed can be one or multiple.
  • Each of the multiple to-be-processed video streams may have different image content.
  • the multiple to-be-processed video streams may be different video streams collected from different perspectives in the same scene, or the multiple to-be-processed video streams may be different video streams collected in different scenes at the same time.
• the multiple to-be-processed video streams may be sent to the second video processing device by the same device, and each of the multiple to-be-processed video streams may be collected by a different video capture device connected to that device. The different video capture devices connected to the device can be used to capture video streams from different perspectives of the same site, or to capture video streams of different sites at the same time. The device can be connected to the video capture devices through a physical connection or a logical connection.
  • the multiple to-be-processed video streams may be composed of to-be-processed video streams sent by at least two devices to the second video processing device.
• For example, two of the to-be-processed video streams may come from the same device, and another to-be-processed video stream may come from a different device.
• the second video processing device can obtain multiple to-be-processed video streams from a local database, and the to-be-processed video streams stored in the local database can be collected by a video capture device connected to the second video processing device.
• Step S203 The second video processing device adjusts the resolution of the to-be-processed video stream to obtain at least two processed video streams; wherein the resolutions of the at least two processed video streams are different from each other, the resolution of each of the at least two processed video streams is the same as one of the aforementioned at least two resolutions, and each of the at least two processed video streams has the same image content as the to-be-processed video stream.
  • the second video processing device may adjust the resolution of the to-be-processed video stream according to at least two resolutions to obtain at least two processed video streams with different resolutions.
  • the number of processed video streams obtained after adjusting the resolution of the to-be-processed video stream may be the same as the number of types of resolutions determined by the second video processing device.
  • the resolution of each processed video stream can be the same as one of the aforementioned at least two resolutions.
• For example, if the two resolutions determined by the second video processing device are 500x500 and 1000x1000, two processed video streams can be obtained, of which the resolution of one processed video stream can be 500x500 and the resolution of the other processed video stream can be 1000x1000.
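A toy sketch of producing several renditions with different resolutions but identical image content from one stream of frames. Pure-Python nearest-neighbour scaling of tiny nested-list "frames" stands in for real video scaling; sizes 2x2 and 8x8 stand in for 500x500 and 1000x1000:

```python
def resize_frame(frame, out_w, out_h):
    """Nearest-neighbour resize of one frame (a list of rows of pixels)."""
    in_h, in_w = len(frame), len(frame[0])
    return [
        [frame[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

def make_renditions(frames, resolutions):
    """One processed stream per target (width, height); same content,
    different resolution, as in step S203."""
    return {
        (w, h): [resize_frame(f, w, h) for f in frames]
        for (w, h) in resolutions
    }

# Toy 4x4 frame whose pixels record their own coordinates.
frame = [[(y, x) for x in range(4)] for y in range(4)]
streams = make_renditions([frame], [(2, 2), (8, 8)])
```

The number of renditions equals the number of target resolutions, and every rendition depicts the same picture at a different size.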
  • the to-be-processed video stream includes multiple frames of images, and the resolution of each frame of the image in the same to-be-processed video stream is the same, and the resolution of each frame of image is the resolution of the to-be-processed video stream.
  • the meaning of adjusting the resolution of the to-be-processed video stream may be: adjusting the resolution of each frame of the image in the to-be-processed video stream.
  • each of the at least two processed video streams obtained after resolution adjustment of the to-be-processed video stream may have the same image content as the to-be-processed video stream.
• For example, suppose that processed video stream 1 and processed video stream 2 are obtained after the resolution of the to-be-processed video stream is adjusted, and that processed video stream 1 and processed video stream 2 both include 3 frames of images. Then the first frame image in processed video stream 1 and in processed video stream 2 can have the same image content as the first frame image in the to-be-processed video stream, the second frame image in processed video stream 1 and in processed video stream 2 can have the same image content as the second frame image in the to-be-processed video stream, and the third frame image in processed video stream 1 and in processed video stream 2 can have the same image content as the third frame image in the to-be-processed video stream.
• That the first frame image in processed video stream 1 has the same image content as the first frame image in the to-be-processed video stream means: the picture presented on the display device when the first frame image in processed video stream 1 is displayed is the same as the picture presented on the display device when the first frame image in the to-be-processed video stream is displayed.
• For example, if the two resolutions determined by the second video processing device are 500x500 and 1000x1000, a schematic diagram of a scene of adjusting the resolution of the to-be-processed image in the to-be-processed video stream may be as shown in FIG. 2b.
• different terminals may have different resolution requirements when displaying processed video streams with the same image content. For example, one terminal may want to display the processed video stream that includes processed image 1 in FIG. 2b, and the other terminal may want to display the processed video stream that includes processed image 2 in FIG. 2b; that is, the resolution of the processed video stream that one terminal wants to display is 1000x1000, and the resolution of the processed video stream that the other terminal wants to display is 500x500.
• a single terminal may also have different resolution requirements. Therefore, adjusting the resolution of the to-be-processed video stream to obtain at least two processed video streams with different resolutions but the same image content helps better adapt to terminals' requirements for the resolution of the displayed video stream.
• the second video processing device may send the at least two processed video streams corresponding to the to-be-processed video stream to one or more first service devices; each first service device then has at least one processed video stream corresponding to the to-be-processed video stream. In other words, each first service device may have all (or part) of the processed video streams corresponding to the to-be-processed video stream.
  • the first service device may be a storage device or a distribution device.
• For example, the at least two processed video streams corresponding to each to-be-processed video stream can be sent to the origin server in a content delivery network (CDN), and the origin server then distributes the at least two processed video streams corresponding to the to-be-processed video stream to multiple cache servers; that is, the at least two processed video streams corresponding to the to-be-processed video stream can be stored in each cache server.
• In this way, a nearby cache server can respond to the user's request; that is, the required video stream is obtained from the nearby cache server.
  • different video streams can also be obtained from multiple cache servers in close proximity to form at least two video streams to be played.
  • the second video processing device may encapsulate each processed video stream, and then send the encapsulated processed video stream to the first service device.
  • the first service device may decapsulate the received encapsulated processed video stream, or may not decapsulate it.
• Correspondingly, the processed video stream stored in the first service device may be a decapsulated video stream or an encapsulated video stream.
• In the embodiment of the present application, the resolution of the to-be-processed video stream is adjusted to obtain at least two processed video streams with different resolutions but the same image content, which helps better adapt to the terminal's requirements for the resolution of the displayed video stream.
  • FIG. 3a is a schematic flowchart of another video processing method provided by an embodiment of the present application.
• This method describes in detail how to perform synchronization processing on at least two frames of images, in the aforementioned at least two to-be-processed video streams, whose acquisition times are within the same synchronization window, so that after synchronization processing those at least two frames of images all carry the same playing time.
  • the execution subject of steps S301 to S304 is the second video processing device, or the chip in the second video processing device.
  • the following takes the second video processing device as the execution subject of the video processing method as an example for description.
  • the method may include but is not limited to the following steps:
  • Step S301 The second video processing device determines at least two resolutions.
  • Step S302 The second video processing device acquires at least two video streams to be processed, each video stream to be processed includes multiple frames of images, and each frame of image carries a collection time.
  • each of the to-be-processed video streams acquired by the second video processing device may include multiple frames of images, and each frame of image may carry its own collection time.
  • the collection time may represent the system time of the video collection device when the image is collected by the video collection device. In actual situations, the system time of the video capture device may deviate from the actual time, which may result in the capture time carried by each frame of the image captured by the video capture device may not be the time when the image is actually captured.
• In a feasible implementation, the second video processing device can determine the deviation time of the video capture device corresponding to each acquired to-be-processed video stream, and adjust the acquisition time of each frame image in the to-be-processed video stream according to the deviation time, so that the acquisition time carried by each frame image after adjustment is the time when the image was actually acquired.
  • the adjusted acquisition time of the image may be obtained by superimposing the deviation time of the video acquisition device that collected the image on the acquisition time before adjustment.
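A minimal sketch of the adjustment just described: the device's deviation time is superimposed on the capture time carried by each frame. Units and values are illustrative:

```python
def adjust_capture_times(capture_times_ms, deviation_ms):
    # Superimpose the capture device's clock deviation on the capture
    # time carried by each frame, yielding the actual acquisition time.
    return [t + deviation_ms for t in capture_times_ms]

# A device whose clock runs 15 ms behind real time (deviation +15 ms):
actual_times = adjust_capture_times([10, 52, 94], 15)
```

Alternatively, as the next point notes, calibrating the device clock beforehand makes this per-frame adjustment unnecessary.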
  • the system time of the video capture device can also be calibrated through the actual time, so that the system time of the video capture device is consistent with the actual time.
• By calibrating the system time of the video capture device, it can be ensured that the capture time of each frame image in the to-be-processed video stream captured by the calibrated video capture device is the actual capture time of the image, thereby avoiding having to adjust the acquisition time of each frame image in the to-be-processed video stream.
  • the system time of different video capture devices may be different at the same time. Therefore, the collection time carried by the images collected by different video collection devices at the same time can be different.
  • the system time of each video capture device can be calibrated separately through the actual time to ensure that the system time of each video capture device is consistent with the actual time.
  • step S301 to step S302 please refer to the specific description of step S201 to step S202 in FIG. 2a respectively, which will not be repeated here.
• Step S303 The second video processing device performs synchronization processing on at least two frames of images, in the aforementioned at least two to-be-processed video streams, whose acquisition times are within the same synchronization window; after synchronization processing, those at least two frames of images all carry the same playing time.
• After the second video processing device acquires at least two to-be-processed video streams, it can determine, according to the collection time carried by the images in each to-be-processed video stream, whether the images in the to-be-processed video streams were collected at the same time. For example, if the acquisition time of each frame image in to-be-processed video stream 1 is the same as the acquisition time of the corresponding frame image in to-be-processed video stream 2, it means that to-be-processed video stream 1 and to-be-processed video stream 2 were acquired at the same time.
• However, when the video capture device transmits the captured to-be-processed video stream to the second video processing device via a network or other means, the capture time carried by the images in the to-be-processed video stream may change.
• As a result, images carrying the same acquisition time may not actually have been acquired at the same time, and images carrying different acquisition times may actually have been acquired at the same time.
  • the second video processing device may determine that at least two frames of images in the acquired at least two to-be-processed video streams whose acquisition time is within the same synchronization window are acquired at the same time.
  • the duration of the synchronization window may be less than the duration of the image acquisition interval
  • the duration of the image acquisition interval may be the duration of the interval between the video acquisition device acquiring two adjacent frames of images, that is, the reciprocal of the frame rate of the video acquisition device.
• For example, at a frame rate of 24 frames per second, the image capture interval is about 0.0417 seconds; that is, one frame of image is captured in each 0.0417-second interval.
• That the acquisition times of at least two frames of images in the aforementioned at least two to-be-processed video streams are within the same synchronization window can indicate that those at least two frames of images were actually acquired at the same time.
  • the second video processing device can perform synchronization processing on at least two frames of images whose acquisition time is within the same synchronization window in the aforementioned at least two to-be-processed video streams. After the synchronization processing, each frame of image can carry playback time, and the aforementioned At least two frames of images whose acquisition time is in the same synchronization window carry the same playback time.
• Since at least two frames of images displayed simultaneously on the terminal have the same playback time, performing synchronization processing on at least two frames of images in the aforementioned at least two to-be-processed video streams whose acquisition times are within the same synchronization window, that is, synchronizing at least two frames of images that were actually collected at the same time so that they carry the same playback time, helps the terminal display, at the same time, images that were actually collected at the same time.
• the duration of the synchronization window is less than the duration of the image collection interval, which can avoid synchronizing two adjacent frames of images collected one after the other.
  • the play time may be a presentation time stamp (PTS) in the digital video compression format H264.
• For example, if the acquisition time carried by image 1 in to-be-processed video stream 1 acquired by the second video processing device is 00:10 (seconds:milliseconds), the acquisition time carried by image 2 in to-be-processed video stream 2 is 00:20, and the acquisition time carried by image 3 in to-be-processed video stream 3 is 00:30, a schematic diagram of the scene of synchronously processing image 1, image 2 and image 3 can be as shown in FIG. 3b.
  • the gray filled polygon represents the image in the to-be-processed video stream
  • the time axis represents the acquisition time carried by the image in the to-be-processed video stream acquired by the second video processing device (that is, the acquisition time changed after transmission) .
  • the synchronization window is a time period of 30ms centered on the acquisition time carried by image 2. It can be seen from Figure 3b that the acquisition time carried by image 1, image 2 and image 3 are all within the same synchronization window.
• The second video processing device can use the acquisition time carried by image 2 as the playing time of image 1, image 2 and image 3 (not shown in FIG. 3b).
  • the second video processing device may use the center time of the synchronization window as the playback time of at least two frames of images whose acquisition time is within the synchronization window.
  • the synchronization window in FIG. 3b is centered on the acquisition time carried by the image 2, and the time period of 30 ms is only for example, and does not constitute a limitation to the embodiment of the present application.
  • the to-be-processed video stream (such as the to-be-processed video stream 1, the to-be-processed video stream 2, and the to-be-processed video stream 3) in FIG. 3b may also include other images.
  • the synchronization processing process of other images in the video stream 1 to be processed except for the image 1 can be the same as the synchronization processing process of the image 1 except for a different synchronization window.
• The second video processing device may determine the time period occupied by the current synchronization window according to the time period occupied by the previous synchronization window, and then, according to the time period occupied by the current synchronization window and its center time, perform synchronization processing on at least two frames of images, in the aforementioned at least two to-be-processed video streams, whose acquisition times are within the current synchronization window.
  • the duration of each synchronization window can be the same, and the end time of the previous synchronization window can be the start time of the current synchronization window.
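Under the assumptions above (equal-length, back-to-back synchronization windows; the window's center time used as the common playback time, e.g. an H.264 PTS), the synchronization step can be sketched as follows; function and field names are illustrative:

```python
def assign_pts(frames, window_start_ms, window_ms):
    """Give frames whose capture times fall in the same synchronization
    window a common playback time: the window's center time.

    frames: list of (stream_id, capture_time_ms). Consecutive windows
    have equal duration; each window starts where the previous ends.
    Returns (stream_id, capture_time_ms, playback_time_ms) triples.
    """
    out = []
    for stream_id, t in frames:
        idx = (t - window_start_ms) // window_ms   # which window t falls in
        centre = window_start_ms + idx * window_ms + window_ms // 2
        out.append((stream_id, t, centre))
    return out

# Images 1-3 (capture times 10, 20, 30 ms) all fall in the 30 ms window
# [5, 35), mirroring the FIG. 3b example, so they share one playback time.
tagged = assign_pts([("s1", 10), ("s2", 20), ("s3", 30)], 5, 30)
```

Because the window is shorter than the image collection interval, two consecutive frames of the same stream can never land in one window.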
• Step S304 For each of the aforementioned at least two to-be-processed video streams, the second video processing device adjusts the resolution of the to-be-processed video stream to obtain at least two processed video streams; wherein the resolutions of the at least two processed video streams are different from each other, the resolution of each of the at least two processed video streams is the same as one of the aforementioned at least two resolutions, and each of the at least two processed video streams has the same image content as the to-be-processed video stream.
• For step S304, refer to the specific description of step S203 in FIG. 2a, which will not be repeated here.
  • FIG. 4a is a schematic flowchart of another video processing method provided by an embodiment of the present application. This method describes in detail how to divide the processed video stream into multiple sub-video streams.
  • the execution subject of steps S401 to S405 is the second video processing device, or is a chip in the second video processing device, and the following takes the second video processing device as the execution subject of the video processing method as an example for description.
  • the method may include but is not limited to the following steps:
  • Step S401 The second video processing device determines at least two resolutions.
  • Step S402 The second video processing device obtains the to-be-processed video stream.
• Step S403 The second video processing device adjusts the resolution of the to-be-processed video stream to obtain at least two processed video streams; wherein the resolutions of the at least two processed video streams are different from each other, the resolution of each of the at least two processed video streams is the same as one of the aforementioned at least two resolutions, and each of the at least two processed video streams has the same image content as the to-be-processed video stream.
  • step S401 to step S403 please refer to the specific description of step S201 to step S203 in FIG. 2a, respectively, which will not be repeated here.
  • Step S404 The second video processing device acquires video division information corresponding to each of the aforementioned at least two resolutions.
  • the second video processing device can also acquire video division information corresponding to each resolution.
• The video division information corresponding to a resolution may come from the same source as the resolution itself. For example, the second video processing device may obtain the resolution and the video division information corresponding to the resolution from the same device.
• the video division information corresponding to the resolution may indicate how many sub-video streams the processed video stream of that resolution is divided into.
  • the second video processing device may send multiple sub-video streams corresponding to the processed video stream to one or more second service devices, and at least one sub-video stream may exist in each second service device. In this way, when the terminal needs to display the processed video stream, the first video processing device can obtain different sub-video streams that compose the processed video stream from different second service devices in parallel, which is beneficial to improve the acquisition of the processed video stream. The efficiency of processing video streams.
• In another feasible implementation, the video division information corresponding to a resolution may indicate: how many sub-video streams the processed video stream of this resolution is divided into, and at which positions the processed video stream of this resolution is divided.
  • the video division information corresponding to each of the aforementioned at least two resolutions may be preset according to user operations, or the second video processing device may receive the first instruction sent by the service device, and The first instruction may be used to indicate the aforementioned at least two resolutions and the video division information corresponding to each of the aforementioned at least two resolutions.
  • Step S405 For each of the aforementioned at least two processed video streams, the second video processing device divides the processed video stream into multiple sub-video streams according to the video division information corresponding to the resolution of the processed video stream .
  • the second video processing device may evenly divide the processed video stream into n sub-video streams.
  • the second video processing device may randomly divide the processed video stream into n sub video streams.
  • n can be greater than 1.
• the processed video stream includes multiple frames of images, and dividing the processed video stream means dividing each frame image in the processed video stream; each frame image in the same processed video stream is divided at the same position.
  • the second video processing device can divide the processed video stream of the corresponding resolution at the division positions indicated by the video division information.
  • a schematic diagram of a scene in which the processed video stream is divided may be as shown in FIG. 4b.
  • the processed video stream can be divided into 2 sub-video streams (sub-video stream 1 and sub-video stream 2) according to the dotted line.
  • the video division information indicating division in the height direction of the processed video stream is only an example. In other feasible implementations, the video division information may also indicate division in the width direction of the processed video stream, or division in both the width direction and the height direction.
  • each of the multiple sub-video streams obtained by dividing the processed video stream may carry the position information of the sub-video stream in the processed video stream, so that the original processed video stream can be conveniently spliced according to the position information carried by each sub-video stream.
  • the position information of the sub video stream in the processed video stream may indicate that the sub video stream is located on the upper side (middle or lower side) of the processed video stream.
  • the position information of the sub video stream in the processed video stream may indicate that the sub video stream is located on the left (middle or right) of the processed video stream.
  • the position information of the sub video stream in the processed video stream may indicate the coordinates of the sub video stream in the coordinate system corresponding to the processed video stream.
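As a concrete sketch of the division step with position information attached, the snippet below divides one frame (modeled as a list of pixel rows) evenly into n horizontal strips and records each strip's top-row offset. The frame representation and field names (`top`, `rows`) are illustrative assumptions, not the patent's format.

```python
def divide_frame(frame, n):
    """Divide one frame (a list of pixel rows) into n horizontal strips,
    attaching each strip's position (its top-row offset) in the frame."""
    height = len(frame)
    base, extra = divmod(height, n)  # spread any remainder over early strips
    strips, top = [], 0
    for i in range(n):
        rows = base + (1 if i < extra else 0)
        strips.append({"top": top, "rows": frame[top:top + rows]})
        top += rows
    return strips

# A 4-row "frame"; dividing every frame of the stream at the same positions
# keeps the resulting sub-video streams aligned across frames.
frame = [[1, 1], [2, 2], [3, 3], [4, 4]]
strips = divide_frame(frame, 2)
```

Applying the same `divide_frame` to every frame of a processed video stream yields the n sub-video streams, each carrying the position information described above.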
  • a complete processed video stream can be made up of multiple sub-video streams, and different sub-video streams that make up the same processed video stream can be sent to multiple second service devices. In this way, when the terminal needs to display the processed video stream, different sub-video streams composing the processed video stream can be obtained from different second service devices in parallel, thereby helping to improve the efficiency of obtaining the processed video stream.
  • FIG. 5a is a schematic flowchart of another video processing method provided by an embodiment of the present application.
  • This method describes in detail how to synthesize at least two video streams to be displayed that the terminal needs to display into one target video stream.
  • the execution subject of step S501 to step S503 is the first video processing device, or a chip in the first video processing device.
  • the following takes the first video processing device as the execution subject of the video processing method as an example for description.
  • the method may include but is not limited to the following steps:
  • Step S501 The first video processing device obtains video layout parameters of the terminal, where the video layout parameters are used to indicate the identification information of at least two to-be-displayed video streams that the terminal needs to display and the resolution of each to-be-displayed video stream.
  • the terminal may send a video stream synthesis request to the first video processing device when it needs to display multiple video streams, and the video stream synthesis request may include the video layout parameters of the terminal.
  • the first video processing device can receive the video stream synthesis request sent by the terminal.
  • the identification information or resolution of the video stream to be displayed by the terminal can change; the first video processing device can obtain, based on the video layout parameters sent by the terminal, the to-be-displayed video streams that the terminal currently needs to display, so as to better meet the needs of terminal users.
  • the same identification information can correspond to one or more video streams, but the resolutions of the video streams corresponding to the same identification information can differ from each other. Therefore, through the identification information of the at least two to-be-displayed video streams that the terminal needs to display and the resolution of each to-be-displayed video stream, the at least two to-be-displayed video streams that the terminal needs to display can be determined.
  • the video stream synthesis request may include a uniform resource locator (uniform resource locator, URL), and the URL carries video layout parameters of the terminal.
  • the resolutions indicated by v1, v2, and v3 may be the same or different, which is not limited in the embodiment of the present application.
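A URL carrying the video layout parameters could be parsed as sketched below. The query-string format (`id`/`res` pairs) is purely an assumption for illustration; the patent does not specify the URL's parameter layout.

```python
from urllib.parse import urlparse, parse_qs

def parse_layout(url):
    """Extract (identification information, resolution) pairs for the
    to-be-displayed video streams from a layout URL (hypothetical format)."""
    query = parse_qs(urlparse(url).query)  # repeated keys become lists
    return list(zip(query["id"], query["res"]))

url = "https://example.com/compose?id=stream1&res=1000x1000&id=stream2&res=500x500"
layout = parse_layout(url)
```

Here the resolutions of different streams may be the same or different, matching the note above about v1, v2, and v3.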
  • the aforementioned at least two video streams to be displayed can be displayed in different display areas in the display device of the terminal, and one display area can be used to display one video stream to be displayed.
  • the video layout parameter may be used to indicate the identification information and resolution of the video stream to be displayed that needs to be displayed in each display area of the terminal.
  • the identification information of the to-be-displayed video stream that needs to be displayed in the same display area of the same terminal changes, but the resolution thereof does not change.
  • the video stream synthesis request sent by the terminal to the first video processing device may only include: the identification information of the video stream to be displayed that the user wants to display in each display area of the display device of the terminal.
  • after receiving the video stream synthesis request sent by the terminal, the first video processing device can obtain the resolution corresponding to each display area of the terminal from the local database, and then determine the identification information and resolution of the to-be-displayed video stream that needs to be displayed in each display area of the terminal.
  • For example, suppose the display area of the terminal includes a left area and a right area, the user wants the resolution of the video stream displayed in the left area to be 1000x1000, and the resolution of the video stream displayed in the right area to be 500x500. The first video processing device may obtain and store the resolutions corresponding to the left area and the right area from the terminal in advance. When the terminal needs to display at least two video streams, it can send identification information 1 and identification information 2 to the first video processing device, where the to-be-displayed video stream corresponding to identification information 1 is to be displayed in the left area of the terminal, and the to-be-displayed video stream corresponding to identification information 2 is to be displayed in the right area. After receiving identification information 1 and identification information 2, the first video processing device, combining the pre-stored resolutions of the to-be-displayed video streams displayed in the left and right areas of the terminal, can determine that the to-be-displayed video stream indicated by identification information 1 is to be displayed in the terminal at a resolution of 1000x1000, and that the to-be-displayed video stream indicated by identification information 2 is to be displayed at a resolution of 500x500. In this way, the amount of data sent by the terminal to the first video processing device can be reduced.
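The lookup that lets the terminal send only identification information can be sketched as below. The area names and the `AREA_RESOLUTION` table follow the left/right example above but are hypothetical; a real first video processing device would read them from its local database.

```python
# Resolutions previously obtained from the terminal, keyed by display area
# (hypothetical names following the left/right example above).
AREA_RESOLUTION = {"left": (1000, 1000), "right": (500, 500)}

def resolve_request(area_to_id):
    """Combine the identification information sent by the terminal with the
    stored per-area resolutions to determine which streams to fetch."""
    return {area: (stream_id, AREA_RESOLUTION[area])
            for area, stream_id in area_to_id.items()}

# The terminal only sends identification information, reducing request size.
request = resolve_request({"left": "id1", "right": "id2"})
```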
  • In another example, the resolution of the video stream to be displayed in the same display area of the same terminal may change. In this case, the video stream synthesis request sent by the terminal to the first video processing device may include the identification information and resolution of the video stream to be displayed that needs to be displayed in each display area of the terminal.
  • Step S502 The first video processing device obtains the aforementioned at least two video streams to be displayed according to the video layout parameters.
  • the first video processing device may send a video stream acquisition request to the first service device, and the video stream acquisition request may include the identification information of the aforementioned at least two video streams to be displayed and the information of each video stream to be displayed. Resolution; and receiving the aforementioned at least two to-be-displayed video streams returned by the first service device.
  • the number of the first service device may be one or more. When the number of first service devices is multiple, the different to-be-displayed video streams obtained by the first video processing device may come from different first service devices.
  • Each first service device may have at least one processed video stream. After receiving the video stream acquisition request, the first service device may use the processed video stream whose identification information and resolution match the video stream acquisition request as the to-be-displayed video stream, and send the to-be-displayed video stream to the first video processing device. That is, the to-be-displayed video stream obtained by the first video processing device may be the processed video stream in the embodiments shown in FIG. 2a to FIG. 4a.
  • In an embodiment, the specific implementation in which the first video processing device obtains the foregoing at least two to-be-displayed video streams according to the video layout parameters may be: for each of the foregoing at least two to-be-displayed video streams, obtaining the multiplexed video streams corresponding to the identification information of the to-be-displayed video stream, where the resolutions of the multiplexed video streams are different from each other, and each processed video stream in the multiplexed video streams has the same image content as the to-be-displayed video stream; and using, among the multiplexed video streams, the processed video stream with the same resolution as the to-be-displayed video stream as the to-be-displayed video stream.
  • the same identification information may correspond to one or multiple processed video streams; that is, the identification information of multiple processed video streams may be the same. Specifically, the identification information of different processed video streams with the same image content may be the same.
  • the processed video stream may include multiple frames of images; different processed video streams having the same image content means that the corresponding images in each processed video stream have the same image content.
  • For example, if the to-be-processed image, processed image 1, and processed image 2 have the same image content, then the identification information of the to-be-processed video stream to which the to-be-processed image belongs, the processed video stream 1 to which processed image 1 belongs, and the processed video stream 2 to which processed image 2 belongs may be the same. It should be noted that multiple processed video streams with the same identification information may be obtained after the second video processing device adjusts the resolution of the same to-be-processed video stream (see the detailed description of step S203 in FIG. 2a).
  • the first video processing device may use a processed video stream with the same resolution as the to-be-displayed video stream in the multiplexed video stream as the to-be-displayed video stream.
  • the multiplexed video streams corresponding to the identification information of the to-be-displayed video stream may be stored in the local database of the first video processing device. In this case, the first video processing device may obtain, from the local database, the multiplexed video streams corresponding to the identification information of the to-be-displayed video stream.
  • the first video processing device may send a processed video stream acquisition request to the service device, where the request may include the identification information of the aforementioned at least two video streams to be displayed, and receive the multiplexed video streams corresponding to the identification information of each of the to-be-displayed video streams.
  • the number of service devices can be one or more.
  • the first video processing device obtains the aforementioned at least two video streams to be displayed according to the video layout parameters, and the specific implementation manner may also be: for the identification information of each of the aforementioned at least two video streams to be displayed, sending an index acquisition request carrying the identification information to the third service device, and receiving the indexes of the multiplexed video streams corresponding to the identification information and the resolution of each processed video stream returned by the third service device; determining the target index from the indexes of the multiplexed video streams, where the resolution of the processed video stream corresponding to the target index is the same as the resolution of the video stream to be displayed; and sending a stream acquisition request carrying the target index to the third service device, receiving the processed video stream corresponding to the target index returned by the third service device, and using the processed video stream corresponding to the target index as the to-be-displayed video stream.
  • In this way, only the index of each processed video stream and the resolution of the processed video stream corresponding to each index are obtained first in order to determine the video stream to be displayed, which can reduce the amount of data transmitted between the first video processing device and the third service device.
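The two-phase, index-based exchange can be sketched as follows. The `CATALOG` structure and the index names are hypothetical stand-ins for the third service device's storage; phase 1 transfers only indexes and resolutions, and only the chosen stream is transferred in phase 2.

```python
# Hypothetical third-service-device catalog:
# identification info -> {index: (resolution, stream data)}.
CATALOG = {
    "id1": {"x1": ((1000, 1000), b"hi-res"), "x2": ((500, 500), b"lo-res")},
}

def get_indexes(stream_id):
    # Phase 1: return only the index and resolution of each processed stream.
    return {idx: res for idx, (res, _) in CATALOG[stream_id].items()}

def get_stream(stream_id, index):
    # Phase 2: return the processed video stream for the chosen index.
    return CATALOG[stream_id][index][1]

def fetch_to_display(stream_id, wanted_res):
    indexes = get_indexes(stream_id)
    target = next(i for i, r in indexes.items() if r == wanted_res)
    return get_stream(stream_id, target)

data = fetch_to_display("id1", (500, 500))
```

Only the small index/resolution table crosses the link before the stream itself, which is the data-saving the paragraph above describes.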
  • the first service device, the second service device, and the third service device may all be the service device 103 in FIG. 1.
  • Step S503 The first video processing device combines the aforementioned at least two video streams to be displayed into one target video stream, and displays the target video stream on the terminal.
  • the first video processing device may synthesize the aforementioned at least two to-be-displayed video streams into a target video stream, and display the target video stream on the terminal. Since the target video stream is composed of the aforementioned at least two to-be-displayed video streams, the picture presented in the terminal when displaying the target video stream is spliced from at least two sub-pictures, so that the user can watch multiple sub-pictures in the terminal at the same time. It should be noted that the first video processing device and the terminal may be integrated into the same physical entity or into different physical entities. When the first video processing device and the terminal are integrated in different physical entities, after the first video processing device synthesizes the target video stream, the target video stream may be sent to the terminal for display.
  • the first video processing device may obtain the to-be-displayed video stream indicated by the changed (or newly-added) identification information, and then combine the newly obtained to-be-displayed video stream and the to-be-displayed video stream indicated by the unchanged identification information into a target video flow.
  • the first video processing device may not need to re-acquire the to-be-displayed video stream indicated by the unchanged (or non-newly added) identification information.
  • For example, suppose to-be-displayed video stream 1 was already obtained when it was previously synthesized with to-be-displayed video stream 2, and the terminal now needs to display to-be-displayed video stream 1 and to-be-displayed video stream 3. The first video processing device only needs to obtain to-be-displayed video stream 3, and can then synthesize to-be-displayed video stream 1 and to-be-displayed video stream 3. In this way, to-be-displayed video stream 1 is multiplexed, which improves the utilization rate of to-be-displayed video stream 1.
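A minimal sketch of this reuse, assuming a simple in-memory cache keyed by identification information (the cache shape and `fetch` callback are illustrative, not the patent's mechanism):

```python
def update_streams(cache, needed_ids, fetch):
    """Fetch only the to-be-displayed streams not already held, multiplexing
    the ones obtained for the previous target video stream."""
    for sid in needed_ids:
        if sid not in cache:
            cache[sid] = fetch(sid)
    # Keep only the streams the terminal currently needs.
    return {sid: cache[sid] for sid in needed_ids}

fetched = []
def fetch(sid):
    fetched.append(sid)          # record what was actually re-acquired
    return f"data-{sid}"

cache = {"v1": "data-v1", "v2": "data-v2"}   # from the previous synthesis
current = update_streams(cache, ["v1", "v3"], fetch)
```

Only stream 3 is newly acquired; stream 1 is served from the earlier synthesis.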
  • the first video processing device may perform encapsulation processing on the target video stream, and send the encapsulated target video stream to the terminal.
  • Since the target video stream is a single video stream, after the terminal receives the encapsulated target video stream, it performs a decapsulation operation, and the terminal only needs one video player to achieve the purpose of displaying multiple sub-pictures.
  • each video stream to be displayed may include multiple frames of images, and each frame of image may carry play time (for a description of play time, refer to the description of step S303 in FIG. 3a).
  • the specific implementation manner of the first video processing device for synthesizing the aforementioned at least two to-be-displayed video streams into one target video stream may be: synthesizing the images with the same playing time in the aforementioned at least two to-be-displayed video streams into one frame of target image, and all targets The images form a target video stream.
  • the images with the same playback time are the images collected at the same moment. In this way, it can be ensured that the frames of images that make up a target image were collected at the same time, that is, when the target video stream is displayed in the terminal, the multiple sub-pictures displayed at the same time are pictures of the same moment.
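The grouping by playback time can be sketched as follows. Frames are modeled as `(play_time, image)` pairs, and the sketch assumes (as the surrounding text does) that every to-be-displayed stream carries the same set of play times in the same order.

```python
def synthesize_by_play_time(streams):
    """Group the frames of several to-be-displayed streams by play time, so
    each target image is composed of frames captured at the same moment."""
    target = []
    for frames in zip(*streams):           # one frame from each stream
        times = {t for t, _ in frames}
        assert len(times) == 1, "frames must share a play time"
        target.append((frames[0][0], [img for _, img in frames]))
    return target

stream_a = [(0, "A0"), (40, "A1")]   # (play time in ms, image)
stream_b = [(0, "B0"), (40, "B1")]
target_stream = synthesize_by_play_time([stream_a, stream_b])
```

Each entry of `target_stream` is one target image built from same-moment frames; all target images together form the target video stream.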
  • the video layout parameter may also indicate the display position of the at least two to-be-displayed video streams that the terminal needs to display when displayed in the terminal.
  • the display position may be the position of the video stream to be displayed in the display device of the terminal, for example, the coordinate area occupied by the video stream to be displayed in the display device.
  • the first video processing device may synthesize at least two to-be-displayed video streams required to be displayed by the terminal into one target video stream according to the display position of each to-be-displayed video stream required to be displayed by the terminal when displayed in the terminal.
  • For example, suppose the video layout parameters indicate that the terminal needs to display to-be-displayed video stream 1, to-be-displayed video stream 2, and to-be-displayed video stream 3, and also indicate that their display positions in the terminal are the left side, the upper right corner, and the lower right corner, respectively. Then to-be-displayed video stream 1, to-be-displayed video stream 2, and to-be-displayed video stream 3 are combined into a target video stream as shown in FIG. 5b.
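The spatial composition at the indicated display positions can be sketched as below. Frames are modeled as nested lists of pixel rows and positions as (top, left) offsets; both representations are illustrative assumptions, not the patent's data format.

```python
def compose_target_image(canvas_h, canvas_w, placements):
    """Paste each to-be-displayed image onto a target canvas at the display
    position given by the video layout parameters (top, left offsets)."""
    canvas = [[0] * canvas_w for _ in range(canvas_h)]
    for top, left, image in placements:
        for r, row in enumerate(image):
            canvas[top + r][left:left + len(row)] = row
    return canvas

# 2x4 canvas: image "1" on the left half, "2" upper right, "3" lower right,
# mirroring the left / upper-right / lower-right layout of FIG. 5b.
img1 = [[1, 1], [1, 1]]
img2 = [[2, 2]]
img3 = [[3, 3]]
target = compose_target_image(2, 4, [(0, 0, img1), (0, 2, img2), (1, 2, img3)])
```

Repeating this per play time produces the target images whose sequence is the target video stream.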
  • the aforementioned at least two video streams to be displayed may include a first video stream to be displayed and a second video stream to be displayed; if the resolution of the first video stream to be displayed is higher than that of the second video stream to be displayed, the display area occupied by the first video stream in the terminal may be larger than that occupied by the second video stream. It can be understood that, compared with a video stream occupying a smaller display area, the user pays more attention to a video stream occupying a larger display area in the terminal. In this way, the resolution of a video stream that occupies a larger display area can be made higher, that is, such a video stream is clearer, which is conducive to improving user experience.
  • By implementing the embodiments of the present application, a target video stream composed of at least two to-be-displayed video streams can be displayed in the terminal, and the picture presented in the terminal when displaying the target video stream is formed by splicing at least two sub-pictures. Since the target video stream is a single video stream, after receiving the encapsulated target video stream, the terminal performs a decapsulation operation, and only one video player is needed to achieve the purpose of displaying multiple sub-pictures.
  • FIG. 6 is a schematic flowchart of another video processing method provided by an embodiment of the present application.
  • This method describes in detail how to obtain multiple sub-video streams corresponding to the identification information and resolution of the video stream to be displayed, and how to synthesize the multiple sub-video streams into the video stream to be displayed.
  • the execution subject of steps S601 to S605 is the first video processing device, or a chip in the first video processing device.
  • the following takes the first video processing device as the execution subject of the video processing method as an example for description.
  • the method may include but is not limited to the following steps:
  • Step S601 The first video processing device obtains the video layout parameters of the terminal, where the video layout parameters are used to indicate the identification information of at least two to-be-displayed video streams that the terminal needs to display and the resolution of each to-be-displayed video stream.
  • For the execution process of step S601, refer to the specific description of step S501 in FIG. 5a, which will not be repeated here.
  • Step S602 For the identification information of each of the aforementioned at least two to-be-displayed video streams, the first video processing device sends a sub-video stream acquisition request to the second service device, where the sub-video stream acquisition request includes the identification information and resolution of the to-be-displayed video stream.
  • the first video processing device may obtain at least two to-be-displayed video streams that the terminal needs to display according to video layout parameters, and each to-be-displayed video stream may be composed of multiple sub-video streams.
  • the sub-video stream acquisition request sent by the first video processing device to the second service device may be used to request to obtain the sub-video streams that compose each video stream to be displayed that the terminal needs to display.
  • the number of the second service device can be one or more.
  • the different sub-video streams that make up the same video stream to be displayed can come from the same or different second service devices, and each second service device can store components Part or all of the sub video streams of the video stream to be displayed.
  • the first video processing device can obtain different sub-video streams from different second service devices in parallel to form a complete video stream to be displayed, thereby helping to improve the efficiency of obtaining the video stream to be displayed.
  • the resolution in the sub-video stream acquisition request refers to the resolution of the to-be-displayed video stream composed of multiple sub-video streams that the first video processing device needs to acquire.
  • For example, suppose identification information 1 corresponds to to-be-displayed video stream 1 and to-be-displayed video stream 2, whose resolutions are 1000x1000 and 500x500 respectively; to-be-displayed video stream 1 is composed of sub-video stream 1, sub-video stream 2, and sub-video stream 3, and to-be-displayed video stream 2 is composed of sub-video stream 4 and sub-video stream 5. If the sub-video stream acquisition request sent by the first video processing device includes identification information 1 and a resolution of 1000x1000, the first video processing device can receive sub-video stream 1, sub-video stream 2, and sub-video stream 3.
  • Step S603 The first video processing device receives multiple sub video streams corresponding to the identification information and resolution of the video stream to be displayed returned by the second service device.
  • the first video processing device may receive multiple sub video streams corresponding to the identification information and resolution of the video stream to be displayed returned by one or more second service devices.
  • Step S604 The first video processing device synthesizes the multiple sub video streams into the to-be-displayed video stream.
  • the first video processing device may synthesize the multiple sub-video streams into the to-be-displayed video stream.
  • each sub-video stream used to compose the same video stream to be displayed may include multiple frames of images, and each sub-video stream includes the same number of images.
  • the specific implementation manner of synthesizing the multiple sub-video streams into the to-be-displayed video stream may be: according to the order of the image frames in each of the multiple sub-video streams, splicing the images with the same frame order in each sub-video stream into one to-be-displayed image, where all the to-be-displayed images form the to-be-displayed video stream.
  • each frame of image in each sub-video stream may carry a playback time (for a description of the playback time, refer to the description of step S303 in FIG. 3a).
  • the specific implementation manner of the first video processing device for synthesizing the multiple sub-video streams into the to-be-displayed video stream may be: synthesizing the images with the same playing time in the multiple sub-video streams into a frame of to-be-displayed images, and all the to-be-displayed images are composed The video stream to be displayed.
  • the multiple sub-video streams corresponding to the video stream to be displayed may be obtained by the second video processing device by dividing the processed video stream, and the processed video stream is the video stream to be displayed.
  • each sub-video stream may carry the position information of the sub-video stream in the corresponding processed video stream, so the first video processing device may obtain the position information of each sub-video stream in the corresponding processed video stream and use the position information to synthesize the obtained multiple sub-video streams into the video stream to be displayed. In this way, the video stream to be displayed can be synthesized accurately and quickly.
  • the position information of the sub video stream in the corresponding processed video stream may indicate that the sub video stream is located on the upper side (middle, lower, left, or right) of the processed video stream, or it may indicate that the sub video stream is located The coordinates in the coordinate system corresponding to the processed video stream.
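The splicing by position information can be sketched as the inverse of the division step: sort the strips of each frame by the carried offset and concatenate their rows. The `top`/`rows` representation is an illustrative assumption.

```python
def splice_sub_frames(strips):
    """Splice the strips of one frame back together using the position
    information (here a "top" row offset) each sub-video stream carries."""
    frame = []
    for strip in sorted(strips, key=lambda s: s["top"]):
        frame.extend(strip["rows"])
    return frame

# Strips may arrive in any order from different second service devices;
# the position information restores the original layout.
strips = [
    {"top": 2, "rows": [[3, 3], [4, 4]]},
    {"top": 0, "rows": [[1, 1], [2, 2]]},
]
frame = splice_sub_frames(strips)
```

Running this per frame (or, equivalently, matching frames by playback time) reassembles the complete to-be-displayed video stream.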
  • Step S605 The first video processing device combines the aforementioned at least two video streams to be displayed into one target video stream, and displays the target video stream on the terminal.
  • For the execution process of step S605, refer to the specific description of step S503 in FIG. 5a, which will not be repeated here.
  • When the to-be-displayed video stream to be displayed by the terminal is composed of multiple sub-video streams, a complete to-be-displayed video stream can be synthesized by performing synthesis processing on the multiple sub-video streams.
  • the synthesized at least two to-be-displayed video streams can then be synthesized into the target video stream that the user wants to display on the terminal, and the picture presented in the terminal when the target video stream is displayed is formed by splicing at least two sub-pictures. In this way, the user can watch at least two sub-pictures in the terminal at the same time.
  • FIG. 7 is a schematic structural diagram of a first video processing device provided by an embodiment of the present application.
  • the device may be a first video processing device or a device (such as a chip) having the function of the first video processing device.
  • a video processing device 70 is configured to execute the steps performed by the first video processing device in the method embodiments corresponding to FIG. 5a to FIG. 6, and the first video processing device 70 includes:
  • the obtaining module 701 is configured to obtain video layout parameters of the terminal, where the video layout parameters are used to indicate the identification information of at least two to-be-displayed video streams that the terminal needs to display and the resolution of each to-be-displayed video stream;
  • the obtaining module 701 is further configured to obtain the aforementioned at least two video streams to be displayed according to the video layout parameters;
  • the processing module 702 is configured to synthesize the at least two video streams to be displayed into one target video stream, and display the target video stream on the terminal.
  • when the obtaining module 701 is used to obtain the video layout parameters of the terminal, it may be specifically configured to receive a video stream synthesis request sent by the terminal, where the video stream synthesis request includes the video layout parameters of the terminal.
  • when the obtaining module 701 is configured to obtain the aforementioned at least two video streams to be displayed according to the video layout parameters, it may be specifically used to send a video stream acquisition request to the first service device, where the video stream acquisition request includes the identification information of the aforementioned at least two to-be-displayed video streams and the resolution of each to-be-displayed video stream, and receive the aforementioned at least two to-be-displayed video streams returned by the first service device.
  • the obtaining module 701, when configured to obtain the aforementioned at least two to-be-displayed video streams according to the video layout parameters, may be specifically configured to: for the identification information of each of the aforementioned at least two to-be-displayed video streams, obtain the multiplexed video streams corresponding to the identification information of the to-be-displayed video stream, where the resolutions of the multiplexed video streams are different from each other and each processed video stream in the multiplexed video streams has the same image content as the to-be-displayed video stream; and use, among the multiplexed video streams, the processed video stream with the same resolution as the to-be-displayed video stream as the to-be-displayed video stream.
  • the obtaining module 701, when configured to obtain the aforementioned at least two to-be-displayed video streams according to the video layout parameters, may also be specifically configured to: for the identification information of each of the aforementioned at least two to-be-displayed video streams, send a sub-video stream acquisition request to the second service device, where the sub-video stream acquisition request includes the identification information and resolution of the to-be-displayed video stream; receive the multiple sub-video streams corresponding to the identification information and resolution of the to-be-displayed video stream returned by the second service device; and combine the multiple sub-video streams into the to-be-displayed video stream.
  • each to-be-displayed video stream includes multiple frames of images, and each frame carries a playback time; when the processing module 702 is configured to synthesize the aforementioned at least two to-be-displayed video streams into one target video stream, it may be specifically configured to synthesize images with the same playback time in the at least two to-be-displayed video streams into one frame of target image, where all target images form one target video stream.
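The synthesis the processing module 702 performs can be pictured as grouping frames by playback time and stitching each group into one target frame. A minimal side-by-side sketch (a real layout would follow the terminal's video layout parameters; the function names and the frames-as-2D-pixel-lists representation are illustrative assumptions, not the patent's implementation):

```python
def synthesize_target_frame(frames):
    """Stitch same-playback-time frames (2D pixel lists of equal height) side by side."""
    height = len(frames[0])
    return [sum((f[row] for f in frames), []) for row in range(height)]

def synthesize_target_stream(streams):
    """Group frames across streams by playback time, one target frame per time."""
    times = sorted(streams[0])          # each stream: {playback_time: frame}
    return {t: synthesize_target_frame([s[t] for s in streams]) for t in times}

# Two one-frame streams sharing playback time 0 become one 2x4 target frame.
a = {0: [[1, 2], [3, 4]]}
b = {0: [[5, 6], [7, 8]]}
target = synthesize_target_stream([a, b])
```

Because the result is a single stream, the terminal decapsulates and plays it once, which is the point made in the first aspect above.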
  • FIG. 8 is a schematic structural diagram of another first video processing device provided by an embodiment of the present application.
  • the device may be a first video processing device or a device (such as a chip) with the function of the first video processing device.
  • the first video processing device 80 may include a communication interface 801, a processor 802, and a memory 803.
  • the communication interface 801, the processor 802, and the memory 803 may be connected to each other through one or more communication buses, or may be connected in other ways.
  • the related functions implemented by the acquisition module 701 and the processing module 702 shown in FIG. 7 may be implemented by the same processor 802, or may be implemented by multiple different processors 802.
  • the communication interface 801 may be used to send and receive data and/or signaling. In this embodiment of the present application, the communication interface 801 may be used to receive a video stream synthesis request sent by a terminal.
  • the communication interface 801 may be a transceiver.
  • the processor 802 is configured to perform corresponding functions of the first video processing device in the methods described in FIGS. 5a-6.
  • the processor 802 may include one or more processors.
  • the processor 802 may be one or more central processing units (CPUs), network processors (NPs), hardware chips, or any combination thereof.
  • when the processor 802 is a CPU, the CPU may be a single-core CPU or a multi-core CPU.
  • the memory 803 is used to store program codes and the like.
  • the memory 803 may include a volatile memory, such as a random access memory (RAM); the memory 803 may also include a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 803 may also include a combination of the foregoing types of memories.
  • the first video processing device 80 including the memory 803 is only an example and does not constitute a limitation on this embodiment of the present application. In an implementation, the memory 803 may be replaced by another storage medium with a storage function.
  • the processor 802 may call the program code stored in the memory 803 to cause the first video processing device 80 to perform the following operations: obtain video layout parameters of the terminal, where the video layout parameters are used to indicate the identification information of at least two to-be-displayed video streams that the terminal needs to display and the resolution of each to-be-displayed video stream; obtain the at least two to-be-displayed video streams according to the video layout parameters; and synthesize the at least two to-be-displayed video streams into one target video stream and display the target video stream on the terminal.
  • when obtaining the video layout parameters of the terminal, the first video processing device 80 may specifically perform the following operation: receive the video stream synthesis request sent by the terminal, where the video stream synthesis request includes the video layout parameters of the terminal.
  • when the processor 802 calls the program code stored in the memory 803 to cause the first video processing device 80 to obtain the aforementioned at least two to-be-displayed video streams according to the video layout parameters, the first video processing device 80 may specifically perform the following operations: send a video stream acquisition request to the first service device, where the video stream acquisition request includes the identification information of the at least two to-be-displayed video streams and the resolution of each to-be-displayed video stream; and receive the at least two to-be-displayed video streams returned by the first service device.
  • when the processor 802 calls the program code stored in the memory 803 to cause the first video processing device 80 to obtain the aforementioned at least two to-be-displayed video streams according to the video layout parameters, the first video processing device 80 may specifically perform the following operations: for the identification information of each of the at least two to-be-displayed video streams, obtain multiple processed video streams corresponding to the identification information of the to-be-displayed video stream, where the resolutions of the multiple processed video streams are different from each other and each processed video stream has the same image content as the to-be-displayed video stream; and use, as the to-be-displayed video stream, the processed video stream whose resolution is the same as that of the to-be-displayed video stream.
  • when the processor 802 calls the program code stored in the memory 803 to cause the first video processing device 80 to obtain the aforementioned at least two to-be-displayed video streams according to the video layout parameters, the first video processing device 80 may specifically perform the following operations: for the identification information of each of the at least two to-be-displayed video streams, send a sub-video-stream acquisition request to the second service device, where the sub-video-stream acquisition request includes the identification information and the resolution of the to-be-displayed video stream; receive the multiple sub-video streams that are returned by the second service device and that correspond to the identification information and the resolution of the to-be-displayed video stream; and synthesize the multiple sub-video streams into the to-be-displayed video stream.
  • each to-be-displayed video stream includes multiple frames of images, and each frame carries a playback time; when the processor 802 calls the program code stored in the memory 803 to cause the first video processing device 80 to synthesize the aforementioned at least two to-be-displayed video streams into one target video stream, the first video processing device 80 may specifically perform the following operations: synthesize images with the same playback time in the at least two to-be-displayed video streams into one frame of target image, where all target images form one target video stream.
  • the processor 802 may also call the program code stored in the memory 803 to cause the first video processing apparatus 80 to perform the operations corresponding to the first video processing device in the embodiments shown in FIG. 5a to FIG. 6. For details, refer to the descriptions in the method embodiments; they are not repeated here.
  • FIG. 9 is a schematic structural diagram of a second video processing device provided by an embodiment of the present application.
  • the device may be a second video processing device or a device (such as a chip) with the function of the second video processing device.
  • the second video processing device 90 is configured to execute the steps performed by the second video processing device in the method embodiments corresponding to FIGS. 2a to 4a, and the second video processing device 90 may include:
  • the determining module 901 is configured to determine at least two resolutions
  • the obtaining module 902 is used to obtain the to-be-processed video stream
  • the resolution adjustment module 903 is configured to adjust the resolution of the to-be-processed video stream to obtain at least two processed video streams, where the resolutions of the at least two processed video streams are different from each other, the resolution of each of the at least two processed video streams is the same as one of the aforementioned at least two resolutions, and each of the at least two processed video streams has the same image content as the to-be-processed video stream.
  • the aforementioned at least two resolutions are preset.
  • when the determining module 901 is configured to determine at least two resolutions, it may be specifically configured to receive a first instruction sent by a service device, where the first instruction is used to indicate the aforementioned at least two resolutions.
  • there are at least two to-be-processed video streams, each to-be-processed video stream includes multiple frames of images, and each frame carries a capture time; the second video processing device 90 may further include a processing module 904, configured to perform synchronization processing on at least two frames of images in the aforementioned at least two to-be-processed video streams whose capture times fall within the same synchronization window, where after the synchronization processing, the at least two frames whose capture times fall within the same synchronization window all carry the same playback time.
  • the second video processing device 90 may further include a division module 905; the acquisition module 902 may also be configured to obtain video division information corresponding to each of the aforementioned at least two resolutions; the division module 905 may be configured to, for each of the aforementioned at least two processed video streams, divide the processed video stream into multiple sub-video streams according to the video division information corresponding to the resolution of the processed video stream.
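One plausible reading of the video division information is a spatial tile grid. A minimal sketch of how the division module 905 might split each frame of a processed video stream into tiles, with tile position (i, j) of every frame forming one sub-video stream (the grid interpretation, the pixel-list frame representation, and the names are assumptions for illustration):

```python
def split_into_sub_streams(stream, tiles_x, tiles_y):
    """Split every frame of a stream into tiles_x * tiles_y tiles; tile (i, j)
    of each frame is appended to sub-stream (i, j)."""
    subs = {}
    for frame in stream:
        h, w = len(frame), len(frame[0])
        th, tw = h // tiles_y, w // tiles_x
        for j in range(tiles_y):
            for i in range(tiles_x):
                tile = [row[i * tw:(i + 1) * tw]
                        for row in frame[j * th:(j + 1) * th]]
                subs.setdefault((i, j), []).append(tile)
    return subs

# One 2x2 frame split into four 1x1 sub-streams.
subs = split_into_sub_streams([[[1, 2], [3, 4]]], tiles_x=2, tiles_y=2)
```

Each sub-stream can then be sent to a different second service device and fetched back in parallel, as described above.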
  • FIG. 10 is a schematic structural diagram of another second video processing device provided by an embodiment of the present application.
  • the device may be a second video processing device or a device (such as a chip) having the function of a second video processing device.
  • the second video processing device 100 may include a communication interface 1001, a processor 1002, and a memory 1003.
  • the communication interface 1001, the processor 1002, and the memory 1003 may be connected to each other through one or more communication buses, or may be connected in other ways.
  • the related functions implemented by the determining module 901, the acquisition module 902, the resolution adjustment module 903, the processing module 904, and the division module 905 shown in FIG. 9 may be implemented by the same processor 1002, or may be implemented by multiple different processors 1002.
  • the communication interface 1001 may be used to send and receive data and/or signaling. In this embodiment of the present application, the communication interface 1001 may be used to receive the first instruction sent by the service device.
  • the communication interface 1001 may be a transceiver.
  • the processor 1002 is configured to perform corresponding functions of the second video processing device in the methods described in FIGS. 2a to 4a.
  • the processor 1002 may include one or more processors.
  • the processor 1002 may be one or more central processing units (CPUs), network processors (NPs), hardware chips, or any combination thereof.
  • when the processor 1002 is a CPU, the CPU may be a single-core CPU or a multi-core CPU.
  • the memory 1003 is used to store program codes and the like.
  • the memory 1003 may include a volatile memory, such as a random access memory (RAM); the memory 1003 may also include a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 1003 may also include a combination of the foregoing types of memories.
  • the second video processing device 100 including the memory 1003 is only an example and does not constitute a limitation on this embodiment of the present application. In an implementation, the memory 1003 may be replaced by another storage medium with a storage function.
  • the processor 1002 may call the program code stored in the memory 1003 to cause the second video processing device 100 to perform the following operations: determine at least two resolutions; obtain the to-be-processed video stream; and adjust the resolution of the to-be-processed video stream to obtain at least two processed video streams, where the resolutions of the at least two processed video streams are different from each other, the resolution of each processed video stream is the same as one of the aforementioned at least two resolutions, and each processed video stream has the same image content as the to-be-processed video stream.
  • in an implementation, the aforementioned at least two resolutions are preset.
  • when the processor 1002 calls the program code stored in the memory 1003 to cause the second video processing device 100 to determine at least two resolutions, it may specifically cause the second video processing device 100 to perform the following operation: receive the first instruction sent by the service device, where the first instruction is used to indicate the aforementioned at least two resolutions.
  • there are at least two to-be-processed video streams, each to-be-processed video stream includes multiple frames of images, and each frame carries a capture time; the processor 1002 may also call the program code stored in the memory 1003 to cause the second video processing device 100 to perform the following operation: perform synchronization processing on at least two frames of images in the aforementioned at least two to-be-processed video streams whose capture times fall within the same synchronization window, where after the synchronization processing, the at least two frames whose capture times fall within the same synchronization window all carry the same playback time.
  • the processor 1002 may also call the program code stored in the memory 1003 to cause the second video processing device 100 to perform the following operations: obtain video division information corresponding to each of the aforementioned at least two resolutions; and, for each of the aforementioned at least two processed video streams, divide the processed video stream into multiple sub-video streams according to the video division information corresponding to the resolution of the processed video stream.
  • the processor 1002 may also call the program code stored in the memory 1003 to enable the second video processing apparatus 100 to perform operations corresponding to the second video processing device in the embodiment shown in FIGS. 2a to 4a.
  • An embodiment of the present application also provides a video processing system.
  • the video processing system includes the foregoing first video processing device as shown in FIG. 7 and the foregoing second video processing device as shown in FIG. 9, or the video processing system includes The foregoing first video processing device as shown in FIG. 8 and the foregoing second video processing device as shown in FIG. 10.
  • an embodiment of the present application further provides a computer-readable storage medium, which may be used to store the computer software instructions used by the first video processing device in the embodiments shown in FIG. 5a to FIG. 6, including the program designed for the first video processing device.
  • an embodiment of the present application further provides a computer-readable storage medium, which may be used to store the computer software instructions used by the second video processing device in the embodiments shown in FIG. 2a to FIG. 4a, including the program designed for the second video processing device.
  • the above computer-readable storage medium includes, but is not limited to, a flash memory, a hard disk, and a solid-state drive.
  • the embodiments of the present application further provide a computer program product. When the computer program product is run by a computing device, the method designed for the first video processing device in the foregoing embodiments of FIG. 5a to FIG. 6 can be executed.
  • the embodiments of the present application further provide a computer program product. When the computer program product is run by a computing device, the method designed for the second video processing device in the embodiments of FIG. 2a to FIG. 4a can be executed.
  • an embodiment of the present application further provides a chip, including a processor and a memory. The memory is used to store a computer program, and the processor is used to call and run the computer program from the memory. The computer program is used to implement the methods in the foregoing method embodiments.
  • the computer program product includes one or more computer programs.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer program may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer program may be downloaded from a website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a high-density digital video disc (DVD)), or a semiconductor medium (for example, a solid-state drive (SSD)), etc.
  • "at least one" in this application may also be described as "one or more", and "at least two" may also be described as "two or more"; the number may be two, three, four, or more, which is not limited in this application.
  • when technical features are distinguished by "first", "second", "third", "A", "B", "C", "D", and the like, there is no order of sequence or of size among the technical features so described.
  • the corresponding relationships shown in the tables in this application can be configured or pre-defined.
  • the value of the information in each table is only an example, and can be configured to other values, which is not limited in this application.
  • the corresponding relationship shown in some rows may not be configured.
  • appropriate adjustments, such as splitting and merging, may be made based on the above tables.
  • the names of the parameters shown in the titles in the above tables may also be other names that can be understood by the communication device, and the values or expressions of the parameters may also be other values or expressions that can be understood by the communication device.
  • other data structures may also be used, such as arrays, queues, containers, stacks, linear tables, pointers, linked lists, trees, graphs, structures, classes, heaps, or hash tables, etc.
  • the pre-definition in this application may be understood as definition, pre-definition, storage, pre-storage, pre-negotiation, pre-configuration, hard-coding, or pre-burning.


Abstract

Embodiments of the present application disclose a video processing method and an apparatus therefor. The method is applied to a first video processing apparatus and includes: obtaining video layout parameters of a terminal, where the video layout parameters are used to indicate identification information of at least two to-be-displayed video streams that the terminal needs to display and the resolution of each to-be-displayed video stream; obtaining the at least two to-be-displayed video streams according to the video layout parameters; and synthesizing the at least two to-be-displayed video streams into one target video stream and displaying the target video stream on the terminal. Implementing the embodiments of the present application facilitates displaying, on a terminal, a target video stream synthesized from at least two to-be-displayed video streams.

Description

Video processing method and apparatus therefor
This application claims priority to Chinese Patent Application No. 202010076016.6, filed with the Chinese Patent Office on January 22, 2020 and entitled "Video processing method and apparatus therefor", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of multimedia technologies, and in particular, to a video processing method and an apparatus therefor.
Background
With the development of multimedia technologies, a scene can be shot by cameras placed at different positions or angles, that is, multi-camera shooting. Multi-camera shooting provides a more comprehensive and clearer view of the scene.
At present, a director selects one picture from the pictures shot at the different camera positions and pushes that picture to a terminal for display, so the terminal cannot display multiple pictures at the same time.
Summary
Embodiments of the present application provide a video processing method and an apparatus therefor, which facilitate displaying, on a terminal, a target video stream synthesized from at least two to-be-displayed video streams; when the target video stream is displayed, the picture presented on the terminal is stitched from at least two sub-pictures.
According to a first aspect, an embodiment of the present application provides a video processing method, applied to a first video processing apparatus. The method includes: obtaining video layout parameters of a terminal, where the video layout parameters are used to indicate identification information of at least two to-be-displayed video streams that the terminal needs to display and the resolution of each to-be-displayed video stream; obtaining the at least two to-be-displayed video streams according to the video layout parameters; and synthesizing the at least two to-be-displayed video streams into one target video stream and displaying the target video stream on the terminal.
In this technical solution, it is advantageous to display on the terminal a target video stream synthesized from at least two to-be-displayed video streams; when the target video stream is displayed, the picture presented on the terminal is stitched from at least two sub-pictures. In addition, because the target video stream is a single video stream, the terminal performs only one decapsulation operation and needs only one video player to display multiple sub-pictures.
In an implementation, obtaining the video layout parameters of the terminal may specifically be: receiving a video stream synthesis request sent by the terminal, where the video stream synthesis request includes the video layout parameters of the terminal.
In this technical solution, by having the terminal send the video layout parameters to the first video processing device, when the identification information or the resolution of the video streams that the terminal needs to display changes, the first video processing device can obtain the video streams currently required by the terminal according to the video layout parameters sent by the terminal, which helps better meet the needs of the terminal user.
In an implementation, obtaining the at least two to-be-displayed video streams according to the video layout parameters may specifically be: sending a video stream acquisition request to a first service device, where the video stream acquisition request includes the identification information of the at least two to-be-displayed video streams and the resolution of each to-be-displayed video stream; and receiving the at least two to-be-displayed video streams returned by the first service device.
In an implementation, there may be multiple first service devices, and different to-be-displayed video streams may come from different first service devices.
In this technical solution, when there are multiple first service devices, different to-be-displayed video streams obtained by the first video processing device may come from different first service devices. In this way, different to-be-displayed video streams can be obtained from different first service devices in parallel, which helps improve the efficiency of obtaining the at least two to-be-displayed video streams.
In an implementation, obtaining the at least two to-be-displayed video streams according to the video layout parameters may specifically be: for the identification information of each of the at least two to-be-displayed video streams, obtaining multiple processed video streams corresponding to the identification information of the to-be-displayed video stream, where the resolutions of the multiple processed video streams are different from each other and each of the multiple processed video streams has the same image content as the to-be-displayed video stream; and using, as the to-be-displayed video stream, the processed video stream whose resolution is the same as that of the to-be-displayed video stream.
In an implementation, the multiple processed video streams may be stored in a local database.
In an implementation, obtaining the at least two to-be-displayed video streams according to the video layout parameters may specifically be: for the identification information of each of the at least two to-be-displayed video streams, sending a sub-video-stream acquisition request to a second service device, where the sub-video-stream acquisition request includes the identification information and the resolution of the to-be-displayed video stream; receiving multiple sub-video streams that are returned by the second service device and that correspond to the identification information and the resolution of the to-be-displayed video stream; and synthesizing the multiple sub-video streams into the to-be-displayed video stream.
In this technical solution, when a to-be-displayed video stream required by the terminal consists of multiple sub-video streams, the complete to-be-displayed video stream can be obtained by synthesizing the multiple sub-video streams. The synthesized at least two to-be-displayed video streams can then be synthesized into the target video stream that the user wants to display on the terminal; when the target video stream is displayed, the picture presented on the terminal is stitched from at least two sub-pictures. In this way, the user can watch at least two sub-pictures on the terminal at the same time.
In an implementation, obtaining the at least two to-be-displayed video streams according to the video layout parameters may specifically be: for the identification information of each of the at least two to-be-displayed video streams, sending an index acquisition request carrying the identification information to a third service device, and receiving the indexes of the multiple processed video streams corresponding to the identification information and the resolution of each processed video stream returned by the third service device; determining a target index from the indexes of the multiple processed video streams, where the resolution of the processed video stream corresponding to the target index is the same as that of the to-be-displayed video stream; and sending a stream acquisition request carrying the target index to the third service device, receiving the processed video stream corresponding to the target index returned by the third service device, and using that processed video stream as the to-be-displayed video stream.
In this technical solution, compared with obtaining all the processed video streams corresponding to the identification information of the to-be-displayed video stream and selecting the to-be-displayed video stream from them, determining the to-be-displayed video stream by obtaining the index of each processed video stream and the resolution corresponding to each index reduces the amount of data transmitted between the first video processing device and the third service device.
In an implementation, each to-be-displayed video stream includes multiple frames of images, and each frame carries a playback time; synthesizing the at least two to-be-displayed video streams into one target video stream may specifically be: synthesizing images with the same playback time in the at least two to-be-displayed video streams into one frame of target image, where all target images form one target video stream.
In this technical solution, images with the same playback time are images captured at the same time. In this way, it can be ensured that the frames composing each target image were captured at the same time, that is, when the target video stream is displayed on the terminal, the multiple sub-pictures displayed simultaneously on the terminal are pictures of the same moment.
In an implementation, the at least two to-be-displayed video streams may include a first to-be-displayed video stream and a second to-be-displayed video stream; if the resolution of the first to-be-displayed video stream is higher than that of the second to-be-displayed video stream, the display area occupied by the first to-be-displayed video stream on the terminal may be larger than that occupied by the second to-be-displayed video stream.
In this technical solution, the video stream that occupies a larger display area on the terminal can have a higher resolution; that is, the video stream that occupies a larger display area on the terminal is clearer.
According to a second aspect, an embodiment of the present application provides another video processing method, applied to a second video processing apparatus. The method includes: determining at least two resolutions and obtaining a to-be-processed video stream; and adjusting the resolution of the to-be-processed video stream to obtain at least two processed video streams, where the resolutions of the at least two processed video streams are different from each other, the resolution of each of the at least two processed video streams is the same as one of the aforementioned at least two resolutions, and each of the at least two processed video streams has the same image content as the to-be-processed video stream.
In this technical solution, adjusting the resolution of the to-be-processed video stream to obtain at least two processed video streams with mutually different resolutions but the same image content helps better meet the resolution requirements of terminals for the displayed video streams.
In an implementation, after the at least two processed video streams are obtained, the method may further include: sending the at least two processed video streams to one or more first service devices, where at least one processed video stream exists on each first service device.
In an implementation, the aforementioned at least two resolutions are preset.
In an implementation, determining the at least two resolutions may specifically be: receiving a first instruction sent by a service device, where the first instruction is used to indicate the aforementioned at least two resolutions.
In an implementation, there are at least two to-be-processed video streams, each to-be-processed video stream includes multiple frames of images, and each frame carries a capture time; the method may further include: performing synchronization processing on at least two frames of images in the at least two to-be-processed video streams whose capture times fall within the same synchronization window, where after the synchronization processing, the at least two frames whose capture times fall within the same synchronization window all carry the same playback time.
In this technical solution, synchronization processing is performed on at least two frames in the at least two to-be-processed video streams whose capture times fall within the same synchronization window, that is, on at least two frames that were actually captured at the same time, so that these frames all carry the same playback time, which facilitates simultaneously displaying, on the terminal, at least two frames captured at the same time.
In an implementation, the method may further include: obtaining video division information corresponding to each of the aforementioned at least two resolutions; and, for each of the at least two processed video streams, dividing the processed video stream into multiple sub-video streams according to the video division information corresponding to the resolution of the processed video stream.
In this technical solution, by dividing a processed video stream into multiple sub-video streams, one complete processed video stream consists of multiple sub-video streams, and the different sub-video streams composing the same processed video stream can then be sent to multiple second service devices. In this way, when the terminal needs to display the processed video stream, the different sub-video streams composing it can be obtained from different second service devices in parallel, which helps improve the efficiency of obtaining the processed video stream.
In an implementation, after the processed video stream is divided into multiple sub-video streams, the method may further include: sending the multiple sub-video streams to one or more second service devices, where at least one sub-video stream exists on each second service device.
According to a third aspect, an embodiment of the present application provides a first video processing apparatus. The apparatus is a first video processing device or an apparatus (for example, a chip) having the function of the first video processing device. The apparatus has the function of implementing the video processing method provided in the first aspect, and the function is implemented by hardware or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the foregoing function.
According to a fourth aspect, an embodiment of the present application provides a second video processing apparatus. The apparatus is a second video processing device or an apparatus (for example, a chip) having the function of the second video processing device. The apparatus has the function of implementing the video processing method provided in the second aspect, and the function is implemented by hardware or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the foregoing function.
According to a fifth aspect, an embodiment of the present application provides another first video processing apparatus. The apparatus is a first video processing device or an apparatus (for example, a chip) having the function of the first video processing device. The apparatus includes a processor and a storage medium, where the storage medium stores instructions that, when run by the processor, cause the apparatus to implement the video processing method provided in the first aspect.
According to a sixth aspect, an embodiment of the present application provides another second video processing apparatus. The apparatus is a second video processing device or an apparatus (for example, a chip) having the function of the second video processing device. The apparatus includes a processor and a storage medium, where the storage medium stores instructions that, when run by the processor, cause the apparatus to implement the video processing method provided in the second aspect.
According to a seventh aspect, an embodiment of the present application provides a video processing system. The video processing system includes the first video processing apparatus of the third aspect and the second video processing apparatus of the fourth aspect, or the video processing system includes the first video processing apparatus of the fifth aspect and the second video processing apparatus of the sixth aspect.
According to an eighth aspect, an embodiment of the present application provides a computer-readable storage medium for storing the computer program instructions used by the first video processing apparatus described in the third aspect, including the program for performing the method of the first aspect.
According to a ninth aspect, an embodiment of the present application provides a computer-readable storage medium for storing the computer program instructions used by the second video processing apparatus described in the fourth aspect, including the program for performing the method of the second aspect.
According to a tenth aspect, an embodiment of the present application provides a computer program product including a program that, when executed by a first video processing apparatus, causes the apparatus to implement the method described in the first aspect.
According to an eleventh aspect, an embodiment of the present application provides a computer program product including a program that, when executed by a second video processing apparatus, causes the apparatus to implement the method described in the second aspect.
Brief Description of Drawings
FIG. 1 is a schematic architectural diagram of a video processing system disclosed in an embodiment of the present application;
FIG. 2a is a schematic flowchart of a video processing method disclosed in an embodiment of the present application;
FIG. 2b is a schematic diagram of a scenario of adjusting the resolution of a to-be-processed image in a to-be-processed video stream, disclosed in an embodiment of the present application;
FIG. 3a is a schematic flowchart of another video processing method disclosed in an embodiment of the present application;
FIG. 3b is a schematic diagram of a scenario of synchronizing image 1, image 2, and image 3, disclosed in an embodiment of the present application;
FIG. 4a is a schematic flowchart of still another video processing method disclosed in an embodiment of the present application;
FIG. 4b is a schematic diagram of a scenario of dividing a processed video stream, disclosed in an embodiment of the present application;
FIG. 5a is a schematic flowchart of still another video processing method disclosed in an embodiment of the present application;
FIG. 5b is a schematic diagram of a scenario of synthesizing to-be-displayed video stream 1, to-be-displayed video stream 2, and to-be-displayed video stream 3 into one target video stream, disclosed in an embodiment of the present application;
FIG. 6 is a schematic flowchart of still another video processing method disclosed in an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a first video processing apparatus disclosed in an embodiment of the present application;
FIG. 8 is a schematic structural diagram of another first video processing apparatus disclosed in an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a second video processing apparatus disclosed in an embodiment of the present application;
FIG. 10 is a schematic structural diagram of another second video processing apparatus disclosed in an embodiment of the present application.
Detailed Description
For ease of understanding, the terms involved in this application are first introduced.
Resolution: also called definition or resolving power; resolution can be subdivided into display resolution, image resolution, print resolution, scan resolution, and so on.
Display resolution (also called screen resolution) refers to how many pixels a display can show. For a given display resolution, the smaller the screen, the clearer the image; conversely, for a fixed screen size, the higher the display resolution, the clearer the image. Image resolution may refer to the number of pixels contained per inch. The resolution mentioned in the embodiments of this application may refer to image resolution.
Resolution may be expressed as the number of pixels in each direction. For example, a resolution of 640x480 for image 1 means that image 1 has 640 pixels in the width direction and 480 pixels in the height direction. Optionally, resolution may also be expressed in pixels per inch (ppi) together with the width and height of the image. For example, a resolution of 72 ppi and 8x6 inches for image 2 means that image 2 is 8 inches wide and 6 inches high, with 72 pixels per inch. It should be noted that the embodiments of this application do not limit the form in which resolution is expressed.
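The two forms of expressing resolution above are interconvertible; a minimal sketch (the function name is illustrative, not part of this application):

```python
def pixel_dimensions(ppi, width_in, height_in):
    """Convert a ppi + physical-size description to per-direction pixel counts."""
    return (round(ppi * width_in), round(ppi * height_in))

# The 72 ppi, 8x6 inch example above corresponds to 576x432 pixels.
dims = pixel_dimensions(72, 8, 6)  # → (576, 432)
```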
To better understand the video processing method disclosed in the embodiments of this application, the video processing system to which the embodiments of this application apply is first described.
Referring to FIG. 1, FIG. 1 is a schematic architectural diagram of a video processing system disclosed in an embodiment of this application. As shown in FIG. 1, the video processing system includes multiple video capture devices 101, a second video processing device 102, a service device 103, a first video processing device 104, and a terminal device 105.
Each video capture device 101 may be used to capture a to-be-processed video stream and send the captured to-be-processed video stream to the second video processing device 102. It should be noted that different video capture devices 101 capture different to-be-processed video streams; as shown in FIG. 1, to-be-processed video stream 1 captured by one video capture device 101 differs from to-be-processed video stream 2 captured by another video capture device 101. Different to-be-processed video streams may mean that the image content they include differs. It can be understood that the to-be-processed video stream received by the second video processing device 102 may be a video stream suitable for network transmission obtained after encoding by the video capture device 101.
The second video processing device 102 may be used to obtain at least two resolutions and adjust the resolution of each (decoded) to-be-processed video stream according to the at least two resolutions. After resolution adjustment, each to-be-processed video stream yields at least two processed video streams with mutually different resolutions, and each processed video stream has the same image content as the to-be-processed video stream.
The number of processed video streams obtained after adjusting the resolution of each to-be-processed video stream may equal the number of the aforementioned at least two resolutions, and the resolution of each obtained processed video stream may be the same as one of the aforementioned at least two resolutions.
After obtaining the at least two processed video streams corresponding to each to-be-processed video stream, the second video processing device 102 may send the at least two processed video streams corresponding to each to-be-processed video stream to the service device 103.
It should be noted that the video processing method disclosed in the embodiments of this application may be applied to live or non-live scenarios, and the service device 103 in FIG. 1 may be a storage device or a distribution device. In a live scenario, the service device in FIG. 1 may be a distribution device, which may be used to receive the at least two processed video streams corresponding to each to-be-processed video stream. In a non-live scenario, the service device 103 in FIG. 1 may be a storage device, which may be used to store the identification information of each to-be-processed video stream in association with the at least two processed video streams corresponding to that stream.
In the embodiments of this application, the terminal device 105 can display multiple video streams simultaneously on its display device. When the user wants to watch multiple sub-pictures at the same time, a user operation may trigger the terminal device 105 to generate a video stream synthesis request. The video stream synthesis request may include the video layout parameters of the terminal device 105, which may be used to indicate the identification information of the at least two to-be-displayed video streams that the terminal device 105 needs to display and the resolution of each to-be-displayed video stream.
After generating the video stream synthesis request, the terminal device 105 may send it to the first video processing device 104. After receiving the video stream synthesis request, the first video processing device 104 may send a video stream acquisition request to the service device 103 to request the at least two to-be-displayed video streams that the terminal device 105 needs to display.
When the service device 103 is a distribution device, the distribution device may include one central distribution device and multiple edge distribution devices. The central distribution device may be used to receive the at least two processed video streams corresponding to each to-be-processed video stream sent by the second video processing device 102 and send them to each edge distribution device. An edge distribution device may be used to respond, from a nearby location, to the video stream acquisition request sent by the first video processing device 104. Specifically, the central distribution device may be an origin server in a content delivery network (CDN), and the edge distribution devices may be cache servers in the CDN.
After receiving the at least two to-be-displayed video streams returned by the service device 103, the first video processing device 104 may synthesize them into one target video stream and send the target video stream to the terminal device 105 to display the target video stream on the terminal device 105. It can be understood that the target video stream displayed on the terminal device 105 is synthesized from at least two to-be-displayed video streams; when the target video stream is displayed, the picture presented by the terminal device 105 is stitched from at least two sub-pictures. Therefore, the user can watch multiple sub-pictures on the terminal device 105 at the same time.
The video capture device 101 may be an entity with a video capture function, for example, a camera, a video camera, a still camera, a scanner, or another device with a video capture function (a mobile phone, a tablet computer, and the like). The display device may be a display screen with an image output function. It should be noted that, in the embodiments of this application, when displaying the target video stream synthesized from at least two to-be-displayed video streams, the terminal device may also output the audio corresponding to each to-be-displayed video stream. In this case, the video processing system shown in FIG. 1 may further include a sound capture device corresponding to each video capture device. Both the second video processing device 102 and the first video processing device 104 may consist of a processor, a memory, and a network interface. Specifically, both the second video processing device 102 and the first video processing device 104 may be servers.
The terminal device 105 may be an entity on the user side for receiving or transmitting signals, such as a mobile phone. A terminal device may also be called a terminal, user equipment (UE), a mobile station (MS), a mobile terminal (MT), and so on. The terminal device may be a mobile phone, a smart TV, a wearable device, a tablet (Pad), a computer with a wireless transceiver function, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, and so on. The embodiments of this application do not limit the specific technology and specific device form adopted by the terminal device.
It should be noted that the second video processing device 102 and the first video processing device 104 acting as independent devices in FIG. 1 is only an example and does not constitute a limitation on the embodiments of this application. In an implementation, the second video processing device 102 may be integrated into the video capture device 101 or into the service device 103, and the first video processing device 104 may be integrated into the terminal device 105 or into the service device 103. In other words, the steps performed by the second video processing device 102 may instead be performed by the video capture device 101 or the service device 103, and the steps performed by the first video processing device 104 may instead be performed by the terminal device 105 or the service device 103.
It should also be noted that the video processing system shown in FIG. 1 including two video capture devices 101 is only an example and does not constitute a limitation on the embodiments of this application. In other feasible implementations, the video processing system may include more than two video capture devices.
It can be understood that the communication system described in the embodiments of this application is intended to explain the technical solutions of the embodiments of this application more clearly, and does not constitute a limitation on the technical solutions provided in the embodiments of this application. A person skilled in the art may know that, with the evolution of system architectures and the emergence of new service scenarios, the technical solutions provided in the embodiments of this application are equally applicable to similar technical problems.
The video processing method and apparatus provided in this application are described in detail below with reference to the accompanying drawings.
Referring to FIG. 2a, FIG. 2a is a schematic flowchart of a video processing method provided in an embodiment of this application. The method describes in detail how to adjust the resolution of a to-be-processed video stream to obtain at least two processed video streams with mutually different resolutions and the same image content. Steps S201 to S203 are performed by the second video processing device, or by a chip in the second video processing device; the following description takes the second video processing device as the entity performing the video processing method. As shown in FIG. 2a, the method may include, but is not limited to, the following steps:
Step S201: The second video processing device determines at least two resolutions.
The at least two resolutions determined by the second video processing device may be resolutions supported by a terminal when displaying video streams, or resolutions that the user expects when video streams are displayed on the terminal. The at least two resolutions determined by the second video processing device may be different from each other.
In an implementation, the aforementioned at least two resolutions may be preset. Specifically, the second video processing device may preset the at least two resolutions according to a user operation.
In an implementation, the second video processing device may receive a first instruction sent by the service device, and the first instruction may be used to indicate the aforementioned at least two resolutions. In this embodiment of this application, the first video processing device sends a video stream acquisition request to the service device to request the at least two to-be-displayed video streams that the terminal device needs to display, and the video stream acquisition request may include the resolution of each of the at least two to-be-displayed video streams that the terminal device needs to display. After receiving the video stream acquisition request from the first video processing device, if the service device determines that the resolutions in the video stream acquisition request differ from those in the previously received video stream acquisition request (from the first video processing device), the service device may send the aforementioned first instruction to the second video processing device.
In an implementation, the service device may receive video stream acquisition requests sent by multiple first video processing devices, and may send the aforementioned first instruction to the second video processing device when the resolutions in most of all the received video stream acquisition requests have changed.
In this embodiment of this application, the first video processing device sends a video stream acquisition request to the service device only after receiving a video stream synthesis request from the terminal device. In an implementation, both the video stream synthesis request and the video stream acquisition request may include the identifier of the terminal device; if the identifier of the terminal device is a preset device identifier, the service device may send the aforementioned first instruction to the second video processing device. The preset device identifier may be a preset identifier of a terminal device with permission to adjust resolutions. By sending the first instruction to the second video processing device only when the identifier of the terminal device is determined to be the preset device identifier, the service device can avoid frequently sending the first instruction to the second video processing device; accordingly, this helps reduce the probability that the second video processing device receives multiple first instructions within a short time, thereby helping avoid frequent re-determination of resolutions by the second video processing device.
In this embodiment of this application, the at least two resolutions determined by the second video processing device are used to adjust the resolution of the to-be-processed video stream.
Step S202: The second video processing device obtains the to-be-processed video stream.
There may be one or more to-be-processed video streams. The individual to-be-processed video streams among multiple to-be-processed video streams may have mutually different image content. For example, the multiple to-be-processed video streams may be different video streams captured from different viewing angles of the same scene, or different video streams captured at the same time at different scenes.
In an implementation, the multiple to-be-processed video streams may be sent to the second video processing device by the same device, and each of the multiple to-be-processed video streams may be captured by a different video capture device connected to that device. The different video capture devices connected to the device may be used to capture video streams of the same scene from different viewing angles, or to capture video streams of different scenes at the same time. The device may be connected to the video capture devices physically or logically.
In an implementation, the multiple to-be-processed video streams may consist of to-be-processed video streams sent to the second video processing device by at least two devices. For example, when there are three to-be-processed video streams, two of them may come from the same device and the other may come from another device.
In an implementation, the second video processing device may obtain multiple to-be-processed video streams from a local database, and the to-be-processed video streams stored in the local database may be captured by video capture devices connected to the second video processing device.
Step S203: The second video processing device adjusts the resolution of the to-be-processed video stream to obtain at least two processed video streams, where the resolutions of the at least two processed video streams are different from each other, the resolution of each of the at least two processed video streams is the same as one of the aforementioned at least two resolutions, and each of the at least two processed video streams has the same image content as the to-be-processed video stream.
Specifically, after obtaining the to-be-processed video stream, the second video processing device may adjust its resolution according to the at least two resolutions to obtain at least two processed video streams with mutually different resolutions. In this embodiment of this application, the number of processed video streams obtained after resolution adjustment may equal the number of resolutions determined by the second video processing device, and the resolution of each processed video stream may be the same as one of the aforementioned at least two resolutions. For example, when the two resolutions determined by the second video processing device are 500x500 and 1000x1000, two processed video streams can be obtained after adjusting the resolution of the to-be-processed video stream; one may have a resolution of 500x500 and the other a resolution of 1000x1000. It should be noted that the to-be-processed video stream includes multiple frames of images, every frame in the same to-be-processed video stream has the same resolution, and the resolution of each frame is the resolution of the to-be-processed video stream. Adjusting the resolution of the to-be-processed video stream may mean adjusting the resolution of each frame in the to-be-processed video stream.
In this embodiment of this application, each of the at least two processed video streams obtained after resolution adjustment may have the same image content as the to-be-processed video stream. For example, when the to-be-processed video stream includes three frames and resolution adjustment yields processed video stream 1 and processed video stream 2, each also including three frames, the first frames of processed video stream 1 and processed video stream 2 may both have the same image content as the first frame of the to-be-processed video stream; likewise, the second frames of processed video stream 1 and processed video stream 2 may both have the same image content as the second frame of the to-be-processed video stream, and the third frames of processed video stream 1 and processed video stream 2 may both have the same image content as the third frame of the to-be-processed video stream.
Here, the first frame of processed video stream 1 having the same image content as the first frame of the to-be-processed video stream may mean that the picture presented on a display device when the first frame of processed video stream 1 is displayed is the same as the picture presented on that display device when the first frame of the to-be-processed video stream is displayed. For example, if the two resolutions determined by the second video processing device are 500x500 and 1000x1000, a schematic diagram of adjusting the resolution of a to-be-processed image in the to-be-processed video stream may be as shown in FIG. 2b. As can be seen from FIG. 2b, by adjusting the resolution of the to-be-processed image, two processed images with different resolutions but the same image content (processed image 1 and processed image 2) can be obtained, where the resolution of processed image 1 is 1000x1000 and the resolution of processed image 2 is 500x500. The to-be-processed image, processed image 1, and processed image 2 all show the same cat picture.
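As a rough illustration of step S203 (not the codec-level scaling a real system would use), the sketch below builds one processed stream per target resolution from a stream of frames, preserving image content via nearest-neighbour sampling; the function names and the frames-as-2D-pixel-lists representation are assumptions for illustration:

```python
def resize_nearest(frame, out_w, out_h):
    """Nearest-neighbour resize of a frame given as a 2D list of pixels."""
    in_h, in_w = len(frame), len(frame[0])
    return [[frame[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
            for r in range(out_h)]

def build_resolution_ladder(stream, resolutions):
    """Produce one processed stream per target (width, height), same content."""
    return {(w, h): [resize_nearest(f, w, h) for f in stream]
            for (w, h) in resolutions}

# A single-frame 4x4 stream turned into 2x2 and 4x4 processed streams.
frame = [[r * 4 + c for c in range(4)] for r in range(4)]
ladder = build_resolution_ladder([frame], [(2, 2), (4, 4)])
```

The number of processed streams equals the number of target resolutions, matching the correspondence described above.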
In practice, when processed video streams with the same image content are displayed on different terminals, the terminals may have different resolution requirements. For example, one terminal wants to display the processed video stream including processed image 1 in FIG. 2b, and another terminal wants to display the processed video stream including processed image 2 in FIG. 2b; that is, one terminal wants a resolution of 1000x1000 and the other wants a resolution of 500x500. Optionally, in different scenarios, the same terminal may also have different resolution requirements when displaying processed video streams with the same image content. Therefore, adjusting the resolution of the to-be-processed video stream to obtain at least two processed video streams with mutually different resolutions but the same image content helps better meet the resolution requirements of terminals for the displayed video streams.
In an implementation, after obtaining the at least two processed video streams corresponding to each to-be-processed video stream, the second video processing device may send the at least two processed video streams corresponding to the to-be-processed video stream to one or more first service devices, where at least one processed video stream corresponding to the to-be-processed video stream exists on each first service device; in other words, all (or some) of the processed video streams corresponding to the to-be-processed video stream exist on each first service device. The first service device may be a storage device or a distribution device. Specifically, the at least two processed video streams corresponding to each to-be-processed video stream may all be sent to an origin server in a content delivery network (CDN), and the origin server may then distribute the at least two processed video streams corresponding to the to-be-processed video stream to multiple cache servers; that is, each cache server may store the at least two processed video streams corresponding to the to-be-processed video stream. In this way, when a user wants to display at least two video streams on the terminal at the same time, a nearby cache server can respond to the user request; that is, the required video streams can be obtained from a closer cache server. In an implementation, different video streams may also be obtained from multiple nearby cache servers to make up the at least two video streams to be played.
In an implementation, the second video processing device may encapsulate each processed video stream and then send the encapsulated processed video stream to the first service device. The first service device may or may not decapsulate the received encapsulated processed video stream; in other words, the processed video streams on the first service device may be decapsulated video streams or encapsulated video streams.
In this embodiment of this application, adjusting the resolution of the to-be-processed video stream to obtain at least two processed video streams with mutually different resolutions but the same image content helps better meet the resolution requirements of terminals for the displayed video streams.
请参见图3a,图3a是本申请实施例提供的另一种视频处理方法的流程示意图。该方法详细描述了如何对前述至少两路待处理视频流中采集时间处于同一同步窗口内的至少两帧图像进行同步处理,以使得同步处理后采集时间处于同一同步窗口内的至少两帧图像均携带有相同的播放时间。其中,步骤S301~步骤S304的执行主体为第二视频处理设备,或者为第二视频处理设备中的芯片,以下以第二视频处理设备为视频处理方法的执行主体为例进行说明。如图3a所示,该方法可以包括但不限于如下步骤:
步骤S301:第二视频处理设备确定至少两种分辨率。
步骤S302:第二视频处理设备获取至少两路待处理视频流,每路待处理视频流包括多帧图像,每帧图像携带有采集时间。
在本申请实施例中,第二视频处理设备获取的每路待处理视频流可以包括多帧图像,每帧图像可以携带有各自的采集时间。采集时间可以表示该图像被视频采集设备所采集时,该视频采集设备的系统时间。实际情况下,视频采集设备的系统时间可能与实际时间之间存在偏差,这会导致由该视频采集设备所采集的每帧图像携带的采集时间可能并非该图像实际被采集的时间。在此情况下,第二视频处理设备可以确定所获取的各路待处理视频流对应的视频采集设备的偏差时间,并根据该偏差时间,对该路待处理视频流中的每帧图像的采集时间进行调整,调整后每帧图像携带的采集时间即为该图像实际被采集的时间。具体的,对于某一图像,该图像调整后的采集时间可以为在调整前的采集时间上叠加采集该图像的视频采集设备的偏差时间得到。
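上述"在调整前的采集时间上叠加偏差时间"的校正步骤可以用如下Python代码草图示意。该草图仅为示例(以字典表示图像帧、以毫秒表示采集时间,correct_capture_times等名称均为示例命名),并不构成对本申请实施例的限定:

```python
def correct_capture_times(frames, offset_ms):
    """按视频采集设备的偏差时间校正每帧图像携带的采集时间:
    调整后的采集时间 = 调整前的采集时间 + 该设备的偏差时间。"""
    return [{**f, "capture_ms": f["capture_ms"] + offset_ms} for f in frames]

# 示例:设备系统时间比实际时间慢15ms,则偏差时间为+15ms
frames = [{"idx": 0, "capture_ms": 1000}, {"idx": 1, "capture_ms": 1042}]
corrected = correct_capture_times(frames, offset_ms=15)
```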
在一种实现方式中,还可以通过实际时间对视频采集设备的系统时间进行校准,以使该视频采集设备的系统时间与实际时间一致。通过对视频采集设备的系统时间进行校准,可以确保由校准后的视频采集设备所采集的待处理视频流中的每帧图像的采集时间即为该图像实际被采集的时间,从而可以避免对待处理视频流中的每帧图像的采集时间进行调整。另外,实际情况下,在同一时间下不同视频采集设备的系统时间可以不同。因此,在同一时间下由不同视频采集设备采集到的图像携带的采集时间可以不同。在此情况下,可以通过实际时间分别对各个视频采集设备的系统时间进行校准,以确保各个视频采集设备的系统时间均与实际时间一致。
需要说明的是,步骤S301~步骤S302的其余执行过程可分别参见图2a中步骤S201~步骤S202的具体描述,此处不再赘述。
步骤S303:第二视频处理设备对前述至少两路待处理视频流中采集时间处于同一同步窗口内的至少两帧图像进行同步处理,同步处理后采集时间处于同一同步窗口内的至少两帧图像均携带有相同的播放时间。
在本申请实施例中,第二视频处理设备获取至少两路待处理视频流之后,可以根据各路待处理视频流中图像携带的采集时间判断各路待处理视频流中的图像是否是同一时间下采集的。若待处理视频流1中各帧图像的采集时间分别与待处理视频流2中各帧图像的采集时间相同,则表明待处理视频流1和待处理视频流2中的各帧图像均在同一时间下采集得到。但是在实际情况下,视频采集设备在通过网络或其他方式将采集的待处理视频流传输至第二视频处理设备的传输过程中,待处理视频流中图像携带的采集时间可能会发生变化。这样会导致携带的采集时间相同的图像实际上可能并非是同一时间下采集得到的,而携带的采集时间不同的图像实际上是同一时间下采集得到的。
在此情况下,第二视频处理设备可以确定获取的至少两路待处理视频流中采集时间处于同一同步窗口内的至少两帧图像是在同一时间下采集的。其中,同步窗口的时长可以小于图像采集间隔时长,图像采集间隔时长可以是视频采集设备采集相邻两帧图像之间间隔的时长,即视频采集设备的帧率的倒数。例如,视频采集设备的帧率为24帧/秒时,图像采集间隔时长约为0.0417秒,也即在时长为0.0417秒的图像采集时间段内可以采集得到一帧图像。由于图像的采集时间在传输过程中不会发生很大的变化,因此,前述至少两路待处理视频流中至少两帧图像的采集时间处于同一同步窗口内可以表示:前述至少两路待处理视频流中至少两帧图像实际是在同一时间下采集得到的。进一步的,第二视频处理设备可以对前述至少两路待处理视频流中采集时间处于同一同步窗口内的至少两帧图像进行同步处理,同步处理后,每帧图像可以携带有播放时间,且前述采集时间处于同一同步窗口内的至少两帧图像均携带有相同的播放时间。
在本申请实施例中,在终端中同时显示的至少两帧图像的播放时间相同,通过对前述至少两路待处理视频流中采集时间处于同一同步窗口内的至少两帧图像进行同步处理,即对实际上是在同一时间下采集得到的至少两帧图像进行同步处理,以使得处于同一同步窗口内的至少两帧图像均携带有相同的播放时间,从而有利于在终端上同时显示同一时间下采集的至少两帧图像。另一方面,同步窗口的时长小于图像采集间隔时长,可以避免对前后采集的相邻两帧图像进行同步。在一种实现方式中,播放时间可以为数字视频压缩格式H264中的显示时间戳(presentation time stamp,PTS)。
当同步窗口的时长为30毫秒(ms),第二视频处理设备获取的待处理视频流1中的图像1携带的采集时间为00:10(秒:毫秒),待处理视频流2中的图像2携带的采集时间为00:20,待处理视频流3中的图像3携带的采集时间为00:30时,对图像1、图像2和图像3进行同步处理的场景示意图可以如图3b所示。在图3b中,灰色填充多边形表示待处理视频流中的图像,时间轴表示第二视频处理设备获取的待处理视频流中的图像所携带的采集时间(即经传输而变化后的采集时间)。同步窗口为以图像2携带的采集时间为中心,时长为30ms的时间段。由图3b可知,图像1、图像2和图像3携带的采集时间均位于同一同步窗口内,此时,第二视频处理设备可以将图像2携带的采集时间作为图像1、图像2和图像3的播放时间(图3b未示出)。
在一种实现方式中,第二视频处理设备可以将同步窗口的中心时间作为采集时间处于该同步窗口内的至少两帧图像的播放时间。需要说明的是,图3b中同步窗口为以图像2携带的采集时间为中心,时长为30ms的时间段仅用于举例,并不构成对本申请实施例的限定。另外,图3b中待处理视频流(如待处理视频流1、待处理视频流2和待处理视频流3)还可以包括其他图像。以待处理视频流1为例,对于待处理视频流1中除图像1以外其他图像的同步处理过程,除了同步窗口不同以外,均可与图像1的同步处理过程相同。在一种实现方式中,第二视频处理设备可以根据上一个同步窗口占据的时间段,确定当前同步窗口占据的时间段,进而根据当前同步窗口占据的时间段和中心时间对前述至少两路待处理视频流中采集时间处于当前同步窗口内的至少两帧图像进行同步处理。每个同步窗口的时长可以相同,且上一个同步窗口的结束时间可以为当前同步窗口的开始时间。
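上述同步处理(每个同步窗口时长相同、上一个窗口的结束时间即当前窗口的开始时间、取窗口中心时间作为播放时间)可以用如下Python代码草图示意。该草图仅为示例,其中窗口起始时间start_ms的取值是一个假设(此处取5ms,使正文示例中采集时间为10/20/30ms的三帧图像落入同一窗口),并不构成对本申请实施例的限定:

```python
def assign_pts(streams, window_ms, start_ms):
    """同步处理:采集时间落入同一同步窗口的各帧图像被赋予相同的
    播放时间(此处取窗口中心时间);窗口时长window_ms应小于图像
    采集间隔时长,前一窗口的结束时间即当前窗口的开始时间。"""
    for stream in streams:
        for frame in stream:
            idx = (frame["capture_ms"] - start_ms) // window_ms
            frame["pts_ms"] = start_ms + idx * window_ms + window_ms // 2

# 示例:窗口时长30ms,三路流中采集时间为10/20/30ms的图像落入同一窗口
s1, s2, s3 = [{"capture_ms": 10}], [{"capture_ms": 20}], [{"capture_ms": 30}]
assign_pts([s1, s2, s3], window_ms=30, start_ms=5)
```

运行后三帧图像均携带相同的播放时间,与"同一同步窗口内的至少两帧图像均携带有相同的播放时间"一致。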
步骤S304:针对前述至少两路待处理视频流中的每路待处理视频流,第二视频处理设备对该待处理视频流进行分辨率调整,得到至少两路处理视频流;其中,至少两路处理视频流的分辨率互不相同,该至少两路处理视频流中的每路处理视频流的分辨率与前述至少两种分辨率中的一种分辨率相同,至少两路处理视频流中的每路处理视频流与该待处理视频流具有相同的图像内容。
需要说明的是,步骤S304的执行过程可参见图2a中步骤S203的具体描述,此处不再赘述。
在本申请实施例中,通过对前述至少两路待处理视频流中采集时间处于同一同步窗口内的至少两帧图像进行同步处理,即对实际上是在同一时间下采集得到的至少两帧图像进行同步处理,以使得同一同步窗口内的至少两帧图像均携带有相同的播放时间,从而有利于在终端上同时显示同一时间下采集的至少两帧图像。
请参见图4a,图4a是本申请实施例提供的又一种视频处理方法的流程示意图。该方法详细描述了如何将处理视频流划分为多路子视频流。其中,步骤S401~步骤S405的执行主体为第二视频处理设备,或者为第二视频处理设备中的芯片,以下以第二视频处理设备为视频处理方法的执行主体为例进行说明。如图4a所示,该方法可以包括但不限于如下步骤:
步骤S401:第二视频处理设备确定至少两种分辨率。
步骤S402:第二视频处理设备获取待处理视频流。
步骤S403:第二视频处理设备对该待处理视频流进行分辨率调整,得到至少两路处理视频流;其中,至少两路处理视频流的分辨率互不相同,该至少两路处理视频流中的每路处理视频流的分辨率与前述至少两种分辨率中的一种分辨率相同,至少两路处理视频流中的每路处理视频流与该待处理视频流具有相同的图像内容。
需要说明的是,步骤S401~步骤S403的执行过程可分别参见图2a中步骤S201~步骤S203的具体描述,此处不再赘述。
步骤S404:第二视频处理设备获取与前述至少两种分辨率中每种分辨率对应的视频划分信息。
在本申请实施例中,第二视频处理设备获取至少两种分辨率的同时,还可以获取与每种分辨率对应的视频划分信息。具体的,针对前述至少两种分辨率中的每种分辨率,与该分辨率对应的视频划分信息的来源可以与该分辨率的来源相同,换言之,第二视频处理设备可以从同一设备中获取该分辨率以及与该分辨率对应的视频划分信息。与该分辨率对应的视频划分信息可以指示:将该分辨率的处理视频流划分为多少路子视频流。进一步的,第二视频处理设备可以将该处理视频流对应的多路子视频流发送至一个或多个第二服务设备,每个第二服务设备中可以存在至少一路子视频流。通过这种方式,在终端需要显示该处理视频流时,第一视频处理设备可以并行地从不同第二服务设备中获取用于组成该处理视频流的不同子视频流,从而有利于提高获取该处理视频流的效率。
在一种实现方式中,针对前述至少两种分辨率中的每种分辨率,与该分辨率对应的视频划分信息可以指示:将该分辨率的处理视频流划分为多少路子视频流,以及在与该分辨率相同的处理视频流中的何位置进行划分。在一种实现方式中,与前述至少两种分辨率中的各种分辨率对应的视频划分信息可以根据用户操作预先设置,或者,第二视频处理设备可以接收服务设备发送的第一指令,该第一指令可以用于指示前述至少两种分辨率,以及与前述至少两种分辨率中的各种分辨率对应的视频划分信息。
步骤S405:针对前述至少两路处理视频流中的每路处理视频流,第二视频处理设备根据与该处理视频流的分辨率对应的视频划分信息,将该处理视频流划分为多路子视频流。
具体的,针对前述至少两路处理视频流中的每路处理视频流,若与该处理视频流的分辨率对应的视频划分信息指示:将该分辨率的处理视频流划分为n路子视频流,则第二视频处理设备可以将该处理视频流均匀划分为n路子视频流。或者,第二视频处理设备可以将该处理视频流随机划分为n路子视频流。其中,n可以大于1。需要说明的是,处理视频流包括多帧图像,对处理视频流进行划分的含义为:对该处理视频流中的每帧图像进行划分。对同一处理视频流中的每帧图像进行划分的位置相同。
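上述"按视频划分信息对处理视频流中的每帧图像在相同位置进行划分"的过程可以用如下Python代码草图示意。该草图仅为在高度方向上划分的示例(划分位置以(分子, 分母)形式的比例给出,split_stream等名称均为示例命名),并不构成对本申请实施例的限定:

```python
def split_stream(stream, cut_fractions):
    """在高度方向上划分处理视频流:cut_fractions给出划分位置占
    高度的比例;同一处理视频流中的每帧图像在相同位置划分,
    每路子视频流携带其在原处理视频流中的位置信息。"""
    n_parts = len(cut_fractions) + 1
    subs = [{"position": i, "frames": []} for i in range(n_parts)]
    for frame in stream:
        h = len(frame)
        cuts = [0] + [h * num // den for num, den in cut_fractions] + [h]
        for i in range(n_parts):
            subs[i]["frames"].append(frame[cuts[i]:cuts[i + 1]])
    return subs

# 示例:高度为6的处理视频流,在高度方向上的1/3处划分为2路子视频流
frame = [[r, r] for r in range(6)]
subs = split_stream([frame], cut_fractions=[(1, 3)])
```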
在一种实现方式中,若与该分辨率对应的视频划分信息还指示:在与该分辨率相同的处理视频流中的何位置进行划分,则第二视频处理设备可以按照该视频划分信息所指示的划分位置,对与该分辨率相同的处理视频流进行划分。
当第二视频处理设备确定的其中一种分辨率为1000x1000,且与分辨率1000x1000对应的视频划分信息指示在分辨率为1000x1000的处理视频流的高度方向上的1/3处进行划分时,对该处理视频流进行划分的场景示意图可以如图4b所示。如图4b所示,可以按照虚线将该处理视频流划分为2路子视频流(子视频流1和子视频流2)。需要说明的是,视频划分信息指示在处理视频流的高度方向上进行划分仅用于举例,在其他可行的实现方式中,视频划分信息也可以指示在处理视频流的宽度方向上进行划分,或者,在宽度方向和高度方向上均进行划分。
在一种实现方式中,针对前述至少两路处理视频流中的每路处理视频流,对该处理视频流进行划分得到的多路子视频流中的每路子视频流可以携带有该子视频流在该处理视频流中的位置信息,这样可以便于根据各个子视频流携带的位置信息,拼接得到原本的处理视频流。若子视频流是在高度方向上划分得到的,则该子视频流在处理视频流中的位置信息可以指示该子视频流位于处理视频流中的上侧(中间或者下侧)。若子视频流是在宽度方向上划分得到的,则该子视频流在处理视频流中的位置信息可以指示该子视频流位于处理视频流中的左侧(中间或者右侧)。若子视频流是在高度方向和宽度方向上划分得到的,则该子视频流在处理视频流中的位置信息可以指示该子视频流位于处理视频流对应的坐标系中的坐标。
在本申请实施例中,通过将该处理视频流划分为多路子视频流,可以使得一路完整的处理视频流由多个子视频流组成,进而可以将组成同一处理视频流的不同子视频流发送至多个第二服务设备。通过这种方式,当终端需要显示该处理视频流时,可以并行地从不同第二服务设备中获取组成该处理视频流的不同子视频流,从而有利于提高该处理视频流的获取效率。
请参见图5a,图5a是本申请实施例提供的又一种视频处理方法的流程示意图。该方法详细描述了如何将终端所需显示的至少两路待显示视频流合成一路目标视频流。其中,步骤S501~步骤S503的执行主体为第一视频处理设备,或者为第一视频处理设备中的芯片,以下以第一视频处理设备为视频处理方法的执行主体为例进行说明。该方法可以包括但不限于如下步骤:
步骤S501:第一视频处理设备获取终端的视频布局参数,该视频布局参数用于指示该终端所需显示的至少两路待显示视频流的标识信息以及每路待显示视频流的分辨率。
在本申请实施例中,终端可以在需要显示多路视频流时,向第一视频处理设备发送视频流合成请求,该视频流合成请求可以包括该终端的视频布局参数。相应的,第一视频处理设备可以接收该终端发送的视频流合成请求。通过终端向第一视频处理设备发送视频布局参数的方式,即使终端所需显示的待显示视频流的标识信息或者分辨率发生变化,第一视频处理设备也可以根据终端发送的视频布局参数,获取终端当前所需显示的待显示视频流,从而有利于更好地满足终端用户的需求。
同一标识信息可以对应有一路或多路视频流,但同一标识信息对应的视频流中每路视频流的分辨率可以不同。因此,通过该终端所需显示的至少两路待显示视频流的标识信息以及每路待显示视频流的分辨率,可以确定出该终端所需显示的至少两路待显示视频流。
在一种实现方式中,视频流合成请求可以包括统一资源定位符(uniform resource locator,URL),该URL携带有终端的视频布局参数。例如,当URL为http://myexample.com/mystream?main=1&v1=2&v2=3&v3=4时,http为传输协议,myexample.com为某设备的域名,该设备中存在用户所需的待显示视频流,/mystream为该设备中存储用户所需的待显示视频流的路径。main=1&v1=2&v2=3&v3=4中的1、2、3和4可以为待显示视频流的标识信息,main可以用于指示较高的分辨率(如1000x1000),v可以用于指示较低的分辨率(如500x500)。main=1可以表示需要在标识信息1指示的视频流中获取分辨率与main指示的分辨率相同的待显示视频流。v1=2可以表示需要在标识信息2指示的视频流中获取与v1指示的分辨率相同的待显示视频流。同理可知v2=3、v3=4的含义,此处不再赘述。在一种实现方式中,v1、v2、v3指示的分辨率可以相同,也可以各不相同,本申请实施例对此不做限定。
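作为示意,第一视频处理设备从上述形式的URL中解析视频布局参数的过程可以用如下Python代码草图表示。该草图仅为示例:其中main与v到具体分辨率的映射沿用正文示例中的假设(main对应1000x1000,v对应500x500),parse_layout等名称均为示例命名,并不构成对本申请实施例的限定:

```python
from urllib.parse import urlparse, parse_qs

# 假设的分辨率映射:main指示较高分辨率,v指示较低分辨率(与正文示例一致)
RESOLUTIONS = {"main": (1000, 1000), "v": (500, 500)}

def parse_layout(url):
    """从URL的查询参数中解析视频布局参数,返回
    [(待显示视频流的标识信息, 所需分辨率), ...]列表。"""
    query = parse_qs(urlparse(url).query)
    layout = []
    for key, values in query.items():
        kind = "main" if key == "main" else "v"
        layout.append((values[0], RESOLUTIONS[kind]))
    return layout

layout = parse_layout("http://myexample.com/mystream?main=1&v1=2&v2=3&v3=4")
```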
在一种实现方式中,前述至少两路待显示视频流可以在该终端的显示设备中的不同显示区域中显示,一个显示区域可以用于显示一路待显示视频流。视频布局参数可以用于指示在终端的各个显示区域中所需显示的待显示视频流的标识信息及其分辨率。在一种实现方式中,若在不同场景下,同一终端中的同一显示区域所需显示的待显示视频流的标识信息发生变化,但其分辨率并未变化,则终端向第一视频处理设备发送的视频流合成请求可以仅包括:用户希望在该终端的显示设备中的各个显示区域中显示的待显示视频流的标识信息。第一视频处理设备接收到终端发送的视频流合成请求之后,可以从本地数据库中获取该终端中各个显示区域对应的分辨率,进而确定在该终端的各个显示区域中所需显示的待显示视频流的标识信息及其分辨率。
例如,终端的显示区域包括左侧区域和右侧区域,且用户希望在左侧区域显示的待显示视频流的分辨率为1000x1000,在右侧区域显示的待显示视频流的分辨率为500x500时,第一视频处理设备可以从终端中获取并存储该终端的左侧区域对应的分辨率和右侧区域对应的分辨率。当该终端需要显示至少两路视频流时,可以向第一视频处理设备发送标识信息1和标识信息2,其中,标识信息1对应的待显示视频流用于在终端的左侧区域中显示,标识信息2对应的待显示视频流用于在终端的右侧区域中显示。第一视频处理设备接收到标识信息1和标识信息2之后,结合预先存储的该终端在左侧区域显示的待显示视频流的分辨率和在右侧区域显示的待显示视频流的分辨率,可以确定用户希望标识信息1指示的待显示视频流在终端中显示时的分辨率为1000x1000,标识信息2指示的待显示视频流在终端中显示时的分辨率为500x500。通过这种方式,可以减少终端向第一视频处理设备发送的数据量。
在一种实现方式中,在不同场景下,同一终端中的同一显示区域所需显示的待显示视频流的分辨率可以不同,此时,终端向第一视频处理设备发送的视频流合成请求可以包括该终端中各个显示区域所需显示的待显示视频流的标识信息及其分辨率。
步骤S502:第一视频处理设备根据该视频布局参数,获取前述至少两路待显示视频流。
在本申请实施例中,第一视频处理设备可以向第一服务设备发送视频流获取请求,该视频流获取请求可以包括前述至少两路待显示视频流的标识信息以及每路待显示视频流的分辨率;并接收该第一服务设备返回的前述至少两路待显示视频流。其中,第一服务设备的数量可以为一个或多个。当第一服务设备的数量为多个时,第一视频处理设备获取的不同待显示视频流可以来自于不同的第一服务设备。通过这种方式,可以并行地从不同第一服务设备中获取不同的待显示视频流,从而有利于提高所需显示的至少两路待显示视频流的获取效率。每个第一服务设备中可以存在至少一路处理视频流,第一服务设备接收到视频流获取请求之后,可以将与该视频流获取请求中标识信息与分辨率均相同的处理视频流作为待显示视频流,并将该待显示视频流发送给第一视频处理设备。即第一视频处理设备获取的待显示视频流可以为图2a~图4a所示实施例中的处理视频流。
在一种实现方式中,第一视频处理设备根据视频布局参数,获取前述至少两路待显示视频流的具体实施方式可以为:针对前述至少两路待显示视频流中每路待显示视频流的标识信息,获取与该待显示视频流的标识信息对应的多路处理视频流,该多路处理视频流的分辨率互不相同,该多路处理视频流中每路处理视频流与该待显示视频流具有相同的图像内容;将该多路处理视频流中与该待显示视频流的分辨率相同的处理视频流作为该待显示视频流。在本申请实施例中,同一标识信息可以对应有一路或多路处理视频流,换言之,多路处理视频流的标识信息可以相同。具体的,具有相同图像内容的不同处理视频流的标识信息可以相同。处理视频流可以包括多帧图像,不同处理视频流具有相同的图像内容是指各路处理视频流中对应的图像均具有相同的图像内容。例如,图2b中,待处理图像、处理图像1和处理图像2具有相同的图像内容,则待处理图像所属的待处理视频流、处理图像1所属的处理视频流1和处理图像2所属的处理视频流2的标识信息可以相同。需要说明的是,标识信息相同的多个处理视频流可以是由第二视频处理设备对同一待处理视频流进行分辨率调整之后得到的(参见图2a中步骤S203的具体描述)。
针对前述至少两路待显示视频流中每路待显示视频流,第一视频处理设备获取与该待显示视频流的标识信息对应的多路处理视频流之后,由于该多路处理视频流的分辨率互不相同,因此,第一视频处理设备可以将该多路处理视频流中与该待显示视频流的分辨率相同的处理视频流作为该待显示视频流。其中,与该待显示视频流的标识信息对应的多路处理视频流可以存储于第一视频处理设备的本地数据库,此时,第一视频处理设备可以从本地数据库中获取与该待显示视频流的标识信息对应的多路处理视频流。或者,第一视频处理设备可以向服务设备发送处理视频流获取请求,该处理视频流获取请求可以包括前述至少两路待显示视频流的标识信息;并接收该服务设备返回的与前述至少两路待显示视频流中各路待显示视频流的标识信息对应的多路处理视频流。其中,服务设备的数量可以为一个或多个。
在一种实现方式中,第一视频处理设备根据视频布局参数,获取前述至少两路待显示视频流的具体实施方式还可以为:针对前述至少两路待显示视频流中每路待显示视频流的标识信息,向第三服务设备发送携带有该标识信息的索引获取请求,并接收该第三服务设备返回的与该标识信息对应的多路处理视频流的索引以及每路处理视频流的分辨率;第一视频处理设备从该多路处理视频流的索引中确定目标索引,该目标索引对应的处理视频流的分辨率与该待显示视频流的分辨率相同;向第三服务设备发送携带有该目标索引的流获取请求,并接收第三服务设备返回的该目标索引对应的处理视频流,将该目标索引对应的处理视频流作为该待显示视频流。相较于获取与该待显示视频流的标识信息对应的多路处理视频流并从中确定该待显示视频流的方式,通过获取各路处理视频流的索引以及每个索引对应的处理视频流的分辨率以确定待显示视频流,可以减少第一视频处理设备和第三服务设备之间传输的数据量。其中,第一服务设备、第二服务设备和第三服务设备均可以为图1中的服务设备103。
步骤S503:第一视频处理设备将前述至少两路待显示视频流合成一路目标视频流,并在该终端上显示该目标视频流。
具体的,第一视频处理设备获取前述至少两路待显示视频流之后,可以将前述至少两路待显示视频流合成为一路目标视频流,并在该终端上显示该目标视频流。由于该目标视频流由前述至少两路待显示视频流合成,因此显示该目标视频流时该终端中呈现的画面由至少两个子画面拼接而成,使得用户可以在终端中同时观看多个子画面。需要说明的是,第一视频处理设备可以与终端集成于同一物理实体或者分别集成于不同的物理实体。当第一视频处理设备与终端集成于不同的物理实体时,第一视频处理设备合成目标视频流之后,可以将该目标视频流发送至该终端进行显示。
在本申请实施例中,当终端所需显示的至少两路待显示视频流变化时,即视频布局参数指示的该终端所需显示的至少两路待显示视频流的标识信息变化时,例如,终端所需显示的待显示视频流的数量增加或减少,或者,终端所需显示的至少两路待显示视频流中的部分或者全部待显示视频流的标识信息发生变化,第一视频处理设备可以获取变化的(或者新增的)标识信息所指示的待显示视频流,进而将新获取的待显示视频流和未变化的标识信息指示的待显示视频流合成一路目标视频流。在该过程中,第一视频处理设备可以不用重新获取未变化的(或者非新增的)标识信息所指示的待显示视频流,通过这种方式,有利于提高待显示视频流的利用率。例如,终端所需显示的两路待显示视频流由待显示视频流1和待显示视频流2变化为待显示视频流1和待显示视频流3时,由于在对待显示视频流1和待显示视频流2进行合成之前已经获取到待显示视频流1,因此,第一视频处理设备仅需获取待显示视频流3,即可对待显示视频流1和待显示视频流3进行合成。在将待显示视频流1和待显示视频流3合成为目标视频流的过程中复用了待显示视频流1,提高了待显示视频流1的利用率。
在一种实现方式中,第一视频处理设备得到目标视频流之后,可以对目标视频流进行封装处理,并将封装之后的目标视频流发送至终端。目标视频流是一路视频流,因此终端接收到封装后的目标视频流之后,执行一次解封装操作且终端仅需一个视频播放器,即可实现显示多个子画面的目的。
在一种实现方式中,每路待显示视频流可以包括多帧图像,每帧图像可以携带有播放时间(关于播放时间的描述可以参见图3a中步骤S303的描述)。第一视频处理设备将前述至少两路待显示视频流合成一路目标视频流的具体实施方式可以为:将前述至少两路待显示视频流中播放时间相同的图像合成为一帧目标图像,所有目标图像组成一路目标视频流。其中,播放时间相同的图像即为在同一时间下采集得到的图像。通过这种方式,可以确保组成目标图像的各帧图像是在同一时间下采集得到的,即可以确保在终端中显示目标视频流时,在该终端中同时显示的多个子画面是同一时间下的画面。
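上述"将播放时间相同的图像合成为一帧目标图像"的过程可以用如下Python代码草图示意。该草图仅为若干假设下的示例:各帧图像以字典表示(携带播放时间pts_ms与像素pixels),此处示意性地将同一播放时间的各帧图像按流的先后顺序水平拼接(并假定各帧图像高度相同),实际拼接位置应由视频布局参数决定;compose_target_stream等名称均为示例命名,并不构成对本申请实施例的限定:

```python
def compose_target_stream(display_streams):
    """将至少两路待显示视频流合成一路目标视频流:
    播放时间(PTS)相同的图像被合成为一帧目标图像。"""
    by_pts = {}
    for stream in display_streams:
        for frame in stream:
            by_pts.setdefault(frame["pts_ms"], []).append(frame["pixels"])
    target = []
    for pts in sorted(by_pts):
        group = by_pts[pts]
        # 逐行水平拼接同一播放时间的各帧图像
        merged = [sum((pix[r] for pix in group), []) for r in range(len(group[0]))]
        target.append({"pts_ms": pts, "pixels": merged})
    return target

a = [{"pts_ms": 0, "pixels": [[1], [1]]}, {"pts_ms": 40, "pixels": [[3], [3]]}]
b = [{"pts_ms": 0, "pixels": [[2], [2]]}, {"pts_ms": 40, "pixels": [[4], [4]]}]
target = compose_target_stream([a, b])
```

合成后的每帧目标图像由同一时间下采集的各帧图像组成,所有目标图像按播放时间排列即为一路目标视频流。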
在一种实现方式中,视频布局参数还可以指示终端所需显示的至少两路待显示视频流在该终端中显示时的显示位置。该显示位置可以为待显示视频流在终端的显示设备中的位置,例如待显示视频流在该显示设备中占据的坐标区域。第一视频处理设备可以根据终端所需显示的各路待显示视频流在该终端中显示时的显示位置,将该终端所需显示的至少两路待显示视频流合成一路目标视频流。例如,当视频布局参数指示终端需要显示待显示视频流1、待显示视频流2和待显示视频流3,且该视频布局参数还指示待显示视频流1、待显示视频流2和待显示视频流3在该终端中显示时的显示位置分别为左侧、右上角和右下角时,将待显示视频流1、待显示视频流2和待显示视频流3合成一路目标视频流的场景示意图可以如图5b所示。
在一种实现方式中,前述至少两路待显示视频流可以包括第一待显示视频流和第二待显示视频流;若该第一待显示视频流的分辨率高于该第二待显示视频流的分辨率,则该第一待显示视频流在终端中占据的显示面积可以大于第二待显示视频流在该终端中占据的显示面积。可以理解的是,相较于在终端中占据显示面积较小的视频流,用户对于该终端中占据显示面积较大的视频流的关注度较高。通过这种方式,可以使得在终端中占据的显示面积更大的视频流的分辨率更高,即在终端中占据的显示面积更大的视频流更清晰,这样有利于提高用户体验。
通过实施本申请实施例,有利于在终端中显示由至少两路待显示视频流合成的目标视频流,显示该目标视频流时该终端中呈现的画面由至少两个子画面拼接而成。另一方面,由于目标视频流是一路视频流,因此终端接收到封装后的目标视频流之后,执行一次解封装操作且终端仅需一个视频播放器,即可实现显示多个子画面的目的。
请参见图6,图6是本申请实施例提供的又一种视频处理方法的流程示意图。该方法详细描述了如何获取与待显示视频流的标识信息和分辨率对应的多路子视频流,以及如何将该多路子视频流合成为该待显示视频流。其中,步骤S601~步骤S605的执行主体为第一视频处理设备,或者为第一视频处理设备中的芯片,以下以第一视频处理设备为视频处理方法的执行主体为例进行说明。该方法可以包括但不限于如下步骤:
步骤S601:第一视频处理设备获取终端的视频布局参数,该视频布局参数用于指示该终端所需显示的至少两路待显示视频流的标识信息以及每路待显示视频流的分辨率。
需要说明的是,步骤S601的执行过程可参见图5a中步骤S501的具体描述,此处不再赘述。
步骤S602:针对前述至少两路待显示视频流中每路待显示视频流的标识信息,第一视频处理设备向第二服务设备发送子视频流获取请求,该子视频流获取请求包括该待显示视频流的标识信息和分辨率。
在本申请实施例中,第一视频处理设备可以根据视频布局参数获取终端所需显示的至少两路待显示视频流,每个待显示视频流可以由多路子视频流组成。第一视频处理设备向第二服务设备发送的子视频流获取请求,可以用于请求获取组成终端所需显示的各路待显示视频流的子视频流。其中,第二服务设备的数量可以为一个或多个,换言之,组成同一待显示视频流的不同子视频流可以来自于相同或者不同的第二服务设备,每个第二服务设备可以存储有组成待显示视频流的部分或者全部子视频流。通过这种方式,第一视频处理设备可以并行地从不同的第二服务设备中获取不同的子视频流,以组成完整的待显示视频流,从而有利于提高待显示视频流的获取效率。
需要说明的是,子视频流获取请求中的分辨率指第一视频处理设备所需获取的多路子视频流组成的待显示视频流的分辨率。例如,当标识信息1对应待显示视频流1和待显示视频流2,待显示视频流1和待显示视频流2的分辨率分别为1000x1000、500x500,且待显示视频流1由子视频流1、子视频流2和子视频流3组成,待显示视频流2由子视频流4和子视频流5组成时,若第一视频处理设备发送的子视频流获取请求包括标识信息1和分辨率1000x1000,则第一视频处理设备可以接收到子视频流1、子视频流2和子视频流3。
步骤S603:第一视频处理设备接收该第二服务设备返回的与该待显示视频流的标识信息和分辨率对应的多路子视频流。
具体的,第一视频处理设备可以接收一个或多个第二服务设备返回的与该待显示视频流的标识信息和分辨率对应的多路子视频流。
步骤S604:第一视频处理设备将该多路子视频流合成为该待显示视频流。
具体的,第一视频处理设备接收到与该待显示视频流的标识信息和分辨率对应的多路子视频流之后,可以将该多路子视频流合成为该待显示视频流。需要说明的是,用于组成同一待显示视频流的每路子视频流可以包括多帧图像,且每路子视频流包括的图像数量相同。将该多路子视频流合成为该待显示视频流的具体实施方式可以为:按照多路子视频流中各路子视频流中的图像帧的顺序,将各路子视频流中图像帧的顺序相同的图像拼接为待显示图像,所有待显示图像组成该待显示视频流。
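上述"将多路子视频流合成为待显示视频流"的过程可以用如下Python代码草图示意。该草图仅为示例:假定各路子视频流是在高度方向上划分得到的,并携带示例性的位置信息position(按位置纵向拼接顺序相同的图像),merge_substreams等名称均为示例命名,并不构成对本申请实施例的限定:

```python
def merge_substreams(substreams):
    """将组成同一待显示视频流的多路子视频流按位置信息纵向拼接,
    还原完整的待显示视频流(各路子视频流包含的图像数量相同,
    各路子视频流中顺序相同的图像拼接为一帧待显示图像)。"""
    ordered = sorted(substreams, key=lambda s: s["position"])
    n_frames = len(ordered[0]["frames"])
    merged = []
    for i in range(n_frames):
        frame = []
        for sub in ordered:
            frame.extend(sub["frames"][i])
        merged.append(frame)
    return merged

subs = [
    {"position": 1, "frames": [[[2, 2], [3, 3]]]},  # 位于处理视频流的下侧
    {"position": 0, "frames": [[[0, 0], [1, 1]]]},  # 位于处理视频流的上侧
]
full = merge_substreams(subs)
```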
在一种实现方式中,每路子视频流中的每帧图像可以携带有播放时间(关于播放时间的描述可以参见图3a中步骤S303的描述)。第一视频处理设备将该多路子视频流合成为该待显示视频流的具体实施方式可以为:将该多路子视频流中播放时间相同的图像合成为一帧待显示图像,所有待显示图像组成该待显示视频流。
其中,该待显示视频流对应的多路子视频流可以是第二视频处理设备通过对处理视频流进行划分得到,该处理视频流即为该待显示视频流。在一种实现方式中,每路子视频流可以携带有该子视频流在对应的处理视频流中的位置信息,第一视频处理设备可以根据获取的各个子视频流在对应的处理视频流中的位置信息,将获取的多路子视频流合成为待显示视频流。通过这种方式,有利于准确、快速地合成待显示视频流。子视频流在对应的处理视频流中的位置信息可以指示该子视频流位于该处理视频流中的上侧(中间、下侧、左侧或者右侧),或者,可以指示该子视频流位于该处理视频流对应的坐标系中的坐标。
步骤S605:第一视频处理设备将前述至少两路待显示视频流合成一路目标视频流,并在该终端上显示该目标视频流。
需要说明的是,步骤S605的执行过程可参见图5a中步骤S503的具体描述,此处不再赘述。
在本申请实施例中,当终端所需显示的待显示视频流由多路子视频流组成时,通过对该多路子视频流进行合成处理,可以合成得到完整的待显示视频流。然后可以将合成的至少两路待显示视频流合成为用户希望在终端上显示的目标视频流,显示该目标视频流时该终端中呈现的画面由至少两个子画面拼接而成。通过这种方式,使得用户可以在终端中同时观看至少两个子画面。
上述详细阐述了本申请实施例公开的方法,下面将提供本申请实施例的装置。
请参见图7,图7是本申请实施例提供的一种第一视频处理装置的结构示意图,该装置可以为第一视频处理设备或具有第一视频处理设备功能的装置(例如芯片),第一视频处理装置70用于执行图5a-图6对应的方法实施例中第一视频处理设备所执行的步骤,第一视频处理装置70包括:
获取模块701,用于获取终端的视频布局参数,该视频布局参数用于指示该终端所需显示的至少两路待显示视频流的标识信息以及每路待显示视频流的分辨率;
获取模块701,还用于根据该视频布局参数,获取前述至少两路待显示视频流;
处理模块702,用于将所述至少两路待显示视频流合成一路目标视频流,并在所述终端上显示所述目标视频流。
在一种实现方式中,获取模块701用于获取终端的视频布局参数时,具体可以用于接收该终端发送的视频流合成请求,该视频流合成请求包括该终端的视频布局参数。
在一种实现方式中,获取模块701用于根据该视频布局参数,获取前述至少两路待显示视频流时,具体可以用于向第一服务设备发送视频流获取请求,该视频流获取请求包括前述至少两路待显示视频流的标识信息以及每路待显示视频流的分辨率;接收该第一服务设备返回的前述至少两路待显示视频流。
在一种实现方式中,获取模块701用于根据该视频布局参数,获取前述至少两路待显示视频流时,具体用于针对前述至少两路待显示视频流中每路待显示视频流的标识信息,获取与该待显示视频流的标识信息对应的多路处理视频流,该多路处理视频流的分辨率互不相同,该多路处理视频流中每路处理视频流与该待显示视频流具有相同的图像内容;将该多路处理视频流中与该待显示视频流的分辨率相同的处理视频流作为该待显示视频流。
在一种实现方式中,获取模块701用于根据该视频布局参数,获取前述至少两路待显示视频流时,具体用于针对前述至少两路待显示视频流中每路待显示视频流的标识信息,向第二服务设备发送子视频流获取请求,该子视频流获取请求包括该待显示视频流的标识信息和分辨率;接收该第二服务设备返回的与该待显示视频流的标识信息和分辨率对应的多路子视频流;将该多路子视频流合成为该待显示视频流。
在一种实现方式中,每路待显示视频流包括多帧图像,每帧图像携带有播放时间;处理模块702用于将前述至少两路待显示视频流合成一路目标视频流时,具体可以用于将前述至少两路待显示视频流中播放时间相同的图像合成为一帧目标图像,所有目标图像组成一路目标视频流。
需要说明的是,图7对应的实施例中未提及的内容以及各个模块执行步骤的具体实现方式可参见图5a-图6所示实施例以及前述内容,这里不再赘述。
在一种实现方式中,图7中的各个模块所实现的相关功能可以结合处理器与通信接口来实现。参见图8,图8是本申请实施例提供的另一种第一视频处理装置的结构示意图,该装置可以为第一视频处理设备或具有第一视频处理设备功能的装置(例如芯片),该第一视频处理装置80可以包括通信接口801、处理器802和存储器803,通信接口801、处理器802和存储器803可以通过一条或多条通信总线相互连接,也可以通过其它方式相连接。图7所示的获取模块701和处理模块702所实现的相关功能可以通过同一个处理器802来实现,也可以通过多个不同的处理器802来实现。
通信接口801可以用于发送数据和/或信令,以及接收数据和/或信令。应用在本申请实施例中,通信接口801可以用于接收终端发送的视频流合成请求。通信接口801可以为收发器。
处理器802被配置为执行图5a-图6所述方法中第一视频处理设备相应的功能。该处理器802可以包括一个或多个处理器,例如该处理器802可以是一个或多个中央处理器(central processing unit,CPU),网络处理器(network processor,NP),硬件芯片或者其任意组合。在处理器802是一个CPU的情况下,该CPU可以是单核CPU,也可以是多核CPU。
存储器803用于存储程序代码等。存储器803可以包括易失性存储器(volatile memory),例如随机存取存储器(random access memory,RAM);存储器803也可以包括非易失性存储器(non-volatile memory),例如只读存储器(read-only memory,ROM),快闪存储器(flash memory),硬盘(hard disk drive,HDD)或固态硬盘(solid-state drive,SSD);存储器803还可以包括上述种类的存储器的组合。需要说明的是,第一视频处理装置80包括存储器803仅用于举例,并不构成对本申请实施例限定,在一种实现方式中,存储器803可以用其他具备存储功能的存储介质替代。
处理器802可以调用存储器803中存储的程序代码以使第一视频处理装置80执行以下操作:
获取终端的视频布局参数,该视频布局参数用于指示该终端所需显示的至少两路待显示视频流的标识信息以及每路待显示视频流的分辨率;
根据该视频布局参数,获取前述至少两路待显示视频流;
将所述至少两路待显示视频流合成一路目标视频流,并在所述终端上显示所述目标视频流。
在一种实现方式中,处理器802调用存储器803中存储的程序代码以使第一视频处理装置80执行获取终端的视频布局参数时,具体可以使第一视频处理装置80执行以下操作:接收终端发送的视频流合成请求,该视频流合成请求包括该终端的视频布局参数。
在一种实现方式中,处理器802调用存储器803中存储的程序代码以使第一视频处理装置80执行根据视频布局参数,获取前述至少两路待显示视频流时,具体可以使第一视频处理装置80执行以下操作:向第一服务设备发送视频流获取请求,所述视频流获取请求包括所述至少两路待显示视频流的标识信息以及每路待显示视频流的分辨率;接收所述第一服务设备返回的所述至少两路待显示视频流。
在一种实现方式中,处理器802调用存储器803中存储的程序代码以使第一视频处理装置80执行根据视频布局参数,获取前述至少两路待显示视频流时,具体可以使第一视频处理装置80执行以下操作:针对前述至少两路待显示视频流中每路待显示视频流的标识信息,获取与该待显示视频流的标识信息对应的多路处理视频流,该多路处理视频流的分辨率互不相同,该多路处理视频流中每路处理视频流与该待显示视频流具有相同的图像内容;将该多路处理视频流中与该待显示视频流的分辨率相同的处理视频流作为该待显示视频流。
在一种实现方式中,处理器802调用存储器803中存储的程序代码以使第一视频处理装置80执行根据视频布局参数,获取前述至少两路待显示视频流时,具体可以使第一视频处理装置80执行以下操作:针对前述至少两路待显示视频流中每路待显示视频流的标识信息,向第二服务设备发送子视频流获取请求,该子视频流获取请求包括该待显示视频流的标识信息和分辨率;接收该第二服务设备返回的与该待显示视频流的标识信息和分辨率对应的多路子视频流;将该多路子视频流合成为该待显示视频流。
在一种实现方式中,每路待显示视频流包括多帧图像,每帧图像携带有播放时间;处理器802调用存储器803中存储的程序代码以使第一视频处理装置80执行将前述至少两路待显示视频流合成一路目标视频流时,具体可以使第一视频处理装置80执行以下操作:将前述至少两路待显示视频流中播放时间相同的图像合成为一帧目标图像,所有目标图像组成一路目标视频流。
进一步地,处理器802还可以调用存储器803中存储的程序代码以使第一视频处理装置80执行图5a-图6所示实施例中第一视频处理设备对应的操作,具体可参见方法实施例中的描述,此处不再赘述。
请参见图9,图9是本申请实施例提供的一种第二视频处理装置的结构示意图,该装置可以为第二视频处理设备或具有第二视频处理设备功能的装置(例如芯片),第二视频处理装置90用于执行图2a-图4a对应的方法实施例中第二视频处理设备所执行的步骤,第二视频处理装置90可以包括:
确定模块901,用于确定至少两种分辨率;
获取模块902,用于获取待处理视频流;
分辨率调整模块903,用于对该待处理视频流进行分辨率调整,得到至少两路处理视频流;其中,该至少两路处理视频流的分辨率互不相同,该至少两路处理视频流中的每路处理视频流的分辨率与前述至少两种分辨率中的一种分辨率相同,该至少两路处理视频流中的每路处理视频流与该待处理视频流具有相同的图像内容。
在一种实现方式中,前述至少两种分辨率是预先设置的。
在一种实现方式中,确定模块901用于确定至少两种分辨率时,具体可以用于接收服务设备发送的第一指令,该第一指令用于指示前述至少两种分辨率。
在一种实现方式中,该待处理视频流的数量为至少两路,每路待处理视频流包括多帧图像,每帧图像携带有采集时间;第二视频处理装置90还可以包括处理模块904,用于对前述至少两路待处理视频流中采集时间处于同一同步窗口内的至少两帧图像进行同步处理,同步处理后采集时间处于同一同步窗口内的至少两帧图像均携带有相同的播放时间。
在一种实现方式中,第二视频处理装置90还可以包括划分模块905;获取模块902,还可以用于获取与前述至少两种分辨率中每种分辨率对应的视频划分信息;划分模块905,可以用于针对前述至少两路处理视频流中的每路处理视频流,根据与该处理视频流的分辨率对应的视频划分信息,将该处理视频流划分为多路子视频流。
需要说明的是,图9对应的实施例中未提及的内容以及各个模块执行步骤的具体实现方式可参见图2a-图4a所示实施例以及前述内容,这里不再赘述。
在一种实现方式中,图9中的各个模块所实现的相关功能可以结合处理器与通信接口来实现。参见图10,图10是本申请实施例提供的另一种第二视频处理装置的结构示意图,该装置可以为第二视频处理设备或具有第二视频处理设备功能的装置(例如芯片),该第二视频处理装置100可以包括通信接口1001、处理器1002和存储器1003,通信接口1001、处理器1002和存储器1003可以通过一条或多条通信总线相互连接,也可以通过其它方式相连接。图9所示的确定模块901、获取模块902、分辨率调整模块903、处理模块904和划分模块905所实现的相关功能可以通过同一个处理器1002来实现,也可以通过多个不同的处理器1002来实现。
通信接口1001可以用于发送数据和/或信令,以及接收数据和/或信令。应用在本申请实施例中,通信接口1001可以用于接收服务设备发送的第一指令。通信接口1001可以为收发器。
处理器1002被配置为执行图2a-图4a所述方法中第二视频处理设备相应的功能。该处理器1002可以包括一个或多个处理器,例如该处理器1002可以是一个或多个中央处理器(central processing unit,CPU),网络处理器(network processor,NP),硬件芯片或者其任意组合。在处理器1002是一个CPU的情况下,该CPU可以是单核CPU,也可以是多核CPU。
存储器1003用于存储程序代码等。存储器1003可以包括易失性存储器(volatile memory),例如随机存取存储器(random access memory,RAM);存储器1003也可以包括非易失性存储器(non-volatile memory),例如只读存储器(read-only memory,ROM),快闪存储器(flash memory),硬盘(hard disk drive,HDD)或固态硬盘(solid-state drive,SSD);存储器1003还可以包括上述种类的存储器的组合。需要说明的是,第二视频处理装置100包括存储器1003仅用于举例,并不构成对本申请实施例限定,在一种实现方式中,存储器1003可以用其他具备存储功能的存储介质替代。
处理器1002可以调用存储器1003中存储的程序代码以使第二视频处理装置100执行以下操作:
确定至少两种分辨率;
获取待处理视频流;
对该待处理视频流进行分辨率调整,得到至少两路处理视频流;其中,该至少两路处理视频流的分辨率互不相同,该至少两路处理视频流中的每路处理视频流的分辨率与前述至少两种分辨率中的一种分辨率相同,该至少两路处理视频流中的每路处理视频流与该待处理视频流具有相同的图像内容。
在一种实现方式中,前述至少两种分辨率是预先设置的。
在一种实现方式中,处理器1002调用存储器1003中存储的程序代码以使第二视频处理装置100执行确定至少两种分辨率时,具体可以使第二视频处理装置100执行以下操作:接收服务设备发送的第一指令,该第一指令用于指示前述至少两种分辨率。
在一种实现方式中,该待处理视频流的数量为至少两路,每路待处理视频流包括多帧图像,每帧图像携带有采集时间;处理器1002还可以调用存储器1003中存储的程序代码以使第二视频处理装置100执行以下操作:对前述至少两路待处理视频流中采集时间处于同一同步窗口内的至少两帧图像进行同步处理,同步处理后采集时间处于同一同步窗口内的至少两帧图像均携带有相同的播放时间。
在一种实现方式中,处理器1002还可以调用存储器1003中存储的程序代码以使第二视频处理装置100执行以下操作:获取与前述至少两种分辨率中每种分辨率对应的视频划分信息;针对前述至少两路处理视频流中的每路处理视频流,根据与该处理视频流的分辨率对应的视频划分信息,将该处理视频流划分为多路子视频流。
进一步地,处理器1002还可以调用存储器1003中存储的程序代码以使第二视频处理装置100执行图2a-图4a所示实施例中第二视频处理设备对应的操作,具体可参见方法实施例中的描述,在此不再赘述。
本申请实施例还提供一种视频处理系统,该视频处理系统包括前述如图7所示的第一视频处理装置和前述如图9所示的第二视频处理装置,或者,该视频处理系统包括前述如图8所示的第一视频处理装置和前述如图10所示的第二视频处理装置。
本申请实施例还提供一种计算机可读存储介质,可以用于存储图7所示实施例中第一视频处理装置所用的计算机软件指令,其包含用于执行上述实施例中为第一视频处理设备所设计的程序。
本申请实施例还提供一种计算机可读存储介质,可以用于存储图9所示实施例中第二视频处理装置所用的计算机软件指令,其包含用于执行上述实施例中为第二视频处理设备所设计的程序。
上述计算机可读存储介质包括但不限于快闪存储器、硬盘、固态硬盘。
本申请实施例还提供一种计算机程序产品,该计算机程序产品被计算设备运行时,可以执行上述图5a-图6实施例中为第一视频处理设备所设计的方法。
本申请实施例还提供一种计算机程序产品,该计算机程序产品被计算设备运行时,可以执行上述图2a-图4a实施例中为第二视频处理设备所设计的方法。
在本申请实施例中还提供一种芯片,包括处理器和存储器,该存储器用于存储计算机程序,该处理器用于从存储器中调用并运行该计算机程序,该计算机程序用于实现上述方法实施例中的方法。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机程序。在计算机上加载和执行所述计算机程序时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机程序可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机程序可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,高密度数字视频光盘(digital video disc,DVD))、或者半导体介质(例如,固态硬盘(solid state disk,SSD))等。
本领域普通技术人员可以理解:本申请中涉及的第一、第二等各种数字编号仅为描述方便进行的区分,并不用来限制本申请实施例的范围,也不表示先后顺序。
本申请中的至少一个还可以描述为一个或多个,至少两个还可以描述为两个或两个以上。多个可以是两个、三个、四个或者更多个,本申请不做限制。在本申请实施例中,对于一种技术特征,通过“第一”、“第二”、“第三”、“A”、“B”、“C”和“D”等区分该种技术特征中的技术特征,该“第一”、“第二”、“第三”、“A”、“B”、“C”和“D”描述的技术特征间无先后顺序或者大小顺序。
本申请中各表所示的对应关系可以被配置,也可以是预定义的。各表中的信息的取值仅仅是举例,可以配置为其他值,本申请并不限定。在配置信息与各参数的对应关系时,并不一定要求必须配置各表中示意出的所有对应关系。例如,本申请中的表格中,某些行示出的对应关系也可以不配置。又例如,可以基于上述表格做适当的变形调整,例如,拆分,合并等等。上述各表中标题示出的参数名称也可以采用通信装置可理解的其他名称,其参数的取值或表示方式也可以为通信装置可理解的其他取值或表示方式。上述各表在实现时,也可以采用其他的数据结构,例如可以采用数组、队列、容器、栈、线性表、指针、链表、树、图、结构体、类、堆、散列表或哈希表等。
本申请中的预定义可以理解为定义、预先定义、存储、预存储、预协商、预配置、固化、或预烧制。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (26)

  1. 一种视频处理方法,其特征在于,应用于第一视频处理装置中,所述方法包括:
    获取终端的视频布局参数,所述视频布局参数用于指示所述终端所需显示的至少两路待显示视频流的标识信息以及每路待显示视频流的分辨率;
    根据所述视频布局参数,获取所述至少两路待显示视频流;
    将所述至少两路待显示视频流合成一路目标视频流,并在所述终端上显示所述目标视频流。
  2. 如权利要求1所述的方法,其特征在于,所述获取终端的视频布局参数,包括:
    接收所述终端发送的视频流合成请求,所述视频流合成请求包括所述终端的视频布局参数。
  3. 如权利要求1或2所述的方法,其特征在于,所述根据所述视频布局参数,获取所述至少两路待显示视频流,包括:
    向第一服务设备发送视频流获取请求,所述视频流获取请求包括所述至少两路待显示视频流的标识信息以及每路待显示视频流的分辨率;
    接收所述第一服务设备返回的所述至少两路待显示视频流。
  4. 如权利要求1或2所述的方法,其特征在于,所述根据所述视频布局参数,获取所述至少两路待显示视频流,包括:
    针对所述至少两路待显示视频流中每路待显示视频流的标识信息,获取与所述待显示视频流的标识信息对应的多路处理视频流,所述多路处理视频流的分辨率互不相同,所述多路处理视频流中每路处理视频流与所述待显示视频流具有相同的图像内容;
    将所述多路处理视频流中与所述待显示视频流的分辨率相同的处理视频流作为所述待显示视频流。
  5. 如权利要求1或2所述的方法,其特征在于,所述根据所述视频布局参数,获取所述至少两路待显示视频流,包括:
    针对所述至少两路待显示视频流中每路待显示视频流的标识信息,向第二服务设备发送子视频流获取请求,所述子视频流获取请求包括所述待显示视频流的标识信息和分辨率;
    接收所述第二服务设备返回的与所述待显示视频流的标识信息和分辨率对应的多路子视频流;
    将所述多路子视频流合成为所述待显示视频流。
  6. 如权利要求1~5任一项所述的方法,其特征在于,所述每路待显示视频流包括多帧图像,每帧图像携带有播放时间;所述将所述至少两路待显示视频流合成一路目标视频流,包括:
    将所述至少两路待显示视频流中播放时间相同的图像合成为一帧目标图像,所有目标图像组成一路目标视频流。
  7. 一种视频处理方法,其特征在于,应用于第二视频处理装置中,所述方法包括:
    确定至少两种分辨率;
    获取待处理视频流;
    对所述待处理视频流进行分辨率调整,得到至少两路处理视频流;其中,所述至少两路处理视频流的分辨率互不相同,所述至少两路处理视频流中的每路处理视频流的分辨率与所述至少两种分辨率中的一种分辨率相同,所述至少两路处理视频流中的每路处理视频流与所述待处理视频流具有相同的图像内容。
  8. 如权利要求7所述的方法,其特征在于,所述至少两种分辨率是预先设置的。
  9. 如权利要求7所述的方法,其特征在于,所述确定至少两种分辨率,包括:
    接收服务设备发送的第一指令,所述第一指令用于指示所述至少两种分辨率。
  10. 如权利要求7~9任一项所述的方法,其特征在于,所述待处理视频流的数量为至少两路,每路待处理视频流包括多帧图像,每帧图像携带有采集时间;
    所述对所述待处理视频流进行分辨率调整之前,所述方法还包括:
    对所述至少两路待处理视频流中采集时间处于同一同步窗口内的至少两帧图像进行同步处理,同步处理后所述采集时间处于同一同步窗口内的至少两帧图像均携带有相同的播放时间。
  11. 如权利要求7~10任一项所述的方法,其特征在于,所述方法还包括:
    获取与所述至少两种分辨率中每种分辨率对应的视频划分信息;
    针对所述至少两路处理视频流中的每路处理视频流,根据与所述处理视频流的分辨率对应的视频划分信息,将所述处理视频流划分为多路子视频流。
  12. 一种第一视频处理装置,其特征在于,包括:
    获取模块,用于获取终端的视频布局参数,所述视频布局参数用于指示所述终端所需显示的至少两路待显示视频流的标识信息以及每路待显示视频流的分辨率;
    所述获取模块,还用于根据所述视频布局参数,获取所述至少两路待显示视频流;
    处理模块,用于将所述至少两路待显示视频流合成一路目标视频流,并在所述终端上显示所述目标视频流。
  13. 如权利要求12所述的装置,其特征在于,
    所述获取模块用于获取终端的视频布局参数时,具体用于接收所述终端发送的视频流合成请求,所述视频流合成请求包括所述终端的视频布局参数。
  14. 如权利要求12或13所述的装置,其特征在于,
    所述获取模块用于根据所述视频布局参数,获取所述至少两路待显示视频流时,具体用于向第一服务设备发送视频流获取请求,所述视频流获取请求包括所述至少两路待显示视频流的标识信息以及每路待显示视频流的分辨率;接收所述第一服务设备返回的所述至少两路待显示视频流。
  15. 如权利要求12或13所述的装置,其特征在于,
    所述获取模块用于根据所述视频布局参数,获取所述至少两路待显示视频流时,具体用于针对所述至少两路待显示视频流中每路待显示视频流的标识信息,获取与所述待显示视频流的标识信息对应的多路处理视频流,所述多路处理视频流的分辨率互不相同,所述多路处理视频流中每路处理视频流与所述待显示视频流具有相同的图像内容;将所述多路处理视频流中与所述待显示视频流的分辨率相同的处理视频流作为所述待显示视频流。
  16. 如权利要求12或13所述的装置,其特征在于,
    所述获取模块用于根据所述视频布局参数,获取所述至少两路待显示视频流时,具体用于针对所述至少两路待显示视频流中每路待显示视频流的标识信息,向第二服务设备发送子视频流获取请求,所述子视频流获取请求包括所述待显示视频流的标识信息和分辨率;接收所述第二服务设备返回的与所述待显示视频流的标识信息和分辨率对应的多路子视频流;将所述多路子视频流合成为所述待显示视频流。
  17. 如权利要求12~16任一项所述的装置,其特征在于,所述每路待显示视频流包括多帧图像,每帧图像携带有播放时间;
    处理模块,用于将所述至少两路待显示视频流合成一路目标视频流时,具体用于将所述至少两路待显示视频流中播放时间相同的图像合成为一帧目标图像,所有目标图像组成一路目标视频流。
  18. 一种第二视频处理装置,其特征在于,包括:
    确定模块,用于确定至少两种分辨率;
    获取模块,用于获取待处理视频流;
    分辨率调整模块,用于对所述待处理视频流进行分辨率调整,得到至少两路处理视频流;其中,所述至少两路处理视频流的分辨率互不相同,所述至少两路处理视频流中的每路处理视频流的分辨率与所述至少两种分辨率中的一种分辨率相同,所述至少两路处理视频流中的每路处理视频流与所述待处理视频流具有相同的图像内容。
  19. 如权利要求18所述的装置,其特征在于,所述至少两种分辨率是预先设置的。
  20. 如权利要求18所述的装置,其特征在于,
    所述确定模块用于确定至少两种分辨率时,具体用于接收服务设备发送的第一指令,所述第一指令用于指示所述至少两种分辨率。
  21. 如权利要求18~20任一项所述的装置,其特征在于,所述待处理视频流的数量为至少两路,每路待处理视频流包括多帧图像,每帧图像携带有采集时间;
    所述第二视频处理装置还可以包括处理模块,用于对所述至少两路待处理视频流中采集时间处于同一同步窗口内的至少两帧图像进行同步处理,同步处理后所述采集时间处于同一同步窗口内的至少两帧图像均携带有相同的播放时间。
  22. 如权利要求18~21任一项所述的装置,其特征在于,所述第二视频处理装置还可以包括划分模块;
    所述获取模块,还用于获取与所述至少两种分辨率中每种分辨率对应的视频划分信息;
    所述划分模块,用于针对所述至少两路处理视频流中的每路处理视频流,根据与所述处理视频流的分辨率对应的视频划分信息,将所述处理视频流划分为多路子视频流。
  23. 一种第一视频处理装置,其特征在于,所述装置包括处理器和存储介质,所述存储介质存储有指令,所述指令被所述处理器运行时,使得所述装置执行权利要求1~6任一项所述的方法。
  24. 一种第二视频处理装置,其特征在于,所述装置包括处理器和存储介质,所述存储介质存储有指令,所述指令被所述处理器运行时,使得所述装置执行权利要求7~11任一项所述的方法。
  25. 一种视频处理系统,其特征在于,包括如权利要求12~17任一项所述的第一视频处理装置和如权利要求18~22任一项所述的第二视频处理装置,或者,包括如权利要求23所述的第一视频处理装置和如权利要求24所述的第二视频处理装置。
  26. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机程序,所述计算机程序包括程序指令,所述程序指令被处理器执行时使所述处理器执行如权利要求1~11任一项所述的方法。
PCT/CN2021/071220 2020-01-22 2021-01-12 一种视频处理方法及其装置 WO2021147702A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010076016.6A CN113163214A (zh) 2020-01-22 2020-01-22 一种视频处理方法及其装置
CN202010076016.6 2020-01-22

Publications (1)

Publication Number Publication Date
WO2021147702A1 true WO2021147702A1 (zh) 2021-07-29

Family

ID=76882048

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/071220 WO2021147702A1 (zh) 2020-01-22 2021-01-12 一种视频处理方法及其装置

Country Status (2)

Country Link
CN (1) CN113163214A (zh)
WO (1) WO2021147702A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114866692A (zh) * 2022-04-19 2022-08-05 合肥富煌君达高科信息技术有限公司 一种大分辨率监控相机的图像输出方法及系统
CN116112620A (zh) * 2023-01-17 2023-05-12 山东鲁软数字科技有限公司 一种提高视频流多路合并稳定性处理方法及系统

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113518260B (zh) * 2021-09-14 2022-05-03 腾讯科技(深圳)有限公司 视频播放方法、装置、电子设备及计算机可读存储介质
CN113824920A (zh) * 2021-09-30 2021-12-21 联想(北京)有限公司 一种处理方法及装置
WO2023070362A1 (zh) * 2021-10-27 2023-05-04 京东方科技集团股份有限公司 显示控制方法、装置、显示设备和计算机可读介质
CN114222162B (zh) * 2021-12-07 2024-04-12 浙江大华技术股份有限公司 视频处理方法、装置、计算机设备及存储介质
CN114172873B (zh) * 2021-12-13 2023-05-30 中国平安财产保险股份有限公司 分辩率调整方法、装置、服务器及计算机可读存储介质
CN115484494B (zh) * 2022-09-15 2024-04-02 云控智行科技有限公司 一种数字孪生视频流的处理方法、装置及设备
WO2024120009A1 (zh) * 2022-12-05 2024-06-13 华为云计算技术有限公司 一种多媒体处理系统、多媒体处理方法及相关设备

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7197070B1 (en) * 2001-06-04 2007-03-27 Cisco Technology, Inc. Efficient systems and methods for transmitting compressed video data having different resolutions
CN101159866A (zh) * 2007-06-28 2008-04-09 武汉恒亿电子科技发展有限公司 一种倍速传输数字视频数据的方法
CN101257607A (zh) * 2008-03-12 2008-09-03 中兴通讯股份有限公司 一种应用于视频会议的多画面处理系统和方法
CN101977305A (zh) * 2010-10-27 2011-02-16 北京中星微电子有限公司 一种视频处理方法及装置和系统
CN103039072A (zh) * 2010-05-25 2013-04-10 维德约股份有限公司 用于使用多个摄影机和多个监视器的可缩放视频通信的系统和方法
US20180012335A1 (en) * 2016-07-06 2018-01-11 Gopro, Inc. Systems and methods for multi-resolution image stitching
CN108605148A (zh) * 2016-02-09 2018-09-28 索尼互动娱乐股份有限公司 视频显示系统

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101753978A (zh) * 2009-12-31 2010-06-23 中兴通讯股份有限公司 一种实现多屏业务融合的方法及系统
CN202799004U (zh) * 2012-06-04 2013-03-13 深圳市景阳科技股份有限公司 一种视频播放终端及系统
CN103780920B (zh) * 2012-10-17 2018-04-27 华为技术有限公司 处理视频码流的方法及装置
CN105792021A (zh) * 2014-12-26 2016-07-20 乐视网信息技术(北京)股份有限公司 一种视频流的传输方法及装置
CN105338424B (zh) * 2015-10-29 2019-10-08 努比亚技术有限公司 一种视频处理方法及系统
CN105872569A (zh) * 2015-11-27 2016-08-17 乐视云计算有限公司 视频播放方法、装置及系统
CN109429037B (zh) * 2017-09-01 2021-06-29 杭州海康威视数字技术股份有限公司 一种图像处理方法、装置、设备及系统
CN108134918A (zh) * 2018-01-30 2018-06-08 苏州科达科技股份有限公司 视频处理方法、装置及多点视频处理单元、会议设备
CN109688483A (zh) * 2018-12-17 2019-04-26 北京爱奇艺科技有限公司 一种获取视频的方法、装置及电子设备
CN110401820A (zh) * 2019-08-15 2019-11-01 北京迈格威科技有限公司 多路视频处理方法、装置、介质及电子设备


Also Published As

Publication number Publication date
CN113163214A (zh) 2021-07-23


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21743907

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21743907

Country of ref document: EP

Kind code of ref document: A1