WO2022252797A1 - Video presentation method, electronic device, computer storage medium and program product - Google Patents

Info

Publication number
WO2022252797A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
frame
shooting
client device
presented
Prior art date
Application number
PCT/CN2022/084913
Other languages
French (fr)
Chinese (zh)
Inventor
Zheng Jianming (郑建明)
Wang Jinbo (王金波)
Hou Zhe (侯哲)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Priority claimed from CN202110837120.7A (published as CN115484486A)
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2022252797A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering

Definitions

  • Embodiments of the present disclosure relate to the field of multimedia processing, and more specifically, to a video presentation method, electronic equipment, computer storage media, and program products.
  • Images captured by the distributed capture system can be presented on the client, but the current presentation method is relatively simple, resulting in poor user experience.
  • Embodiments of the present disclosure provide a solution for presenting a video to be presented on a client device based on a composite video from a central device.
  • In a first aspect, a video presentation method is provided. The method includes: the client device receives a composite video from the central device, where the i-th frame of the composite video is obtained based on the i-th frames of the videos captured by multiple shooting devices in the distributed shooting system at the same time, and i is any positive integer; the client device determines a video to be presented based on the composite video, where the video to be presented is associated with at least one of the multiple shooting devices; and the client device presents the video to be presented.
  • In this way, the client device can determine the video to be presented based on the composite video from the central device, so that the video to be presented is no longer passively received but is determined by the client device, and the video presentation at the client device is more flexible and diverse, thereby enhancing the user experience.
  • before the client device receives the composite video from the central device, the method further includes: establishing connections between the central device and the multiple shooting devices respectively.
  • the central device establishes a wireless connection with each of the multiple shooting devices, and the central device and the multiple shooting devices are in the same local area network environment.
  • the client device determining the video to be presented based on the composite video includes determining each frame of the video to be presented through the following process: the client device determines, from the i-th frame of the composite video, the i-th frame of the video shot by the target shooting device, where the target shooting device is the shooting device at the target position among the multiple shooting devices; and the client device determines the i-th frame of the video shot by the target shooting device as the i-th frame of the video to be presented.
  • the client device can determine and present the video shot by the target shooting device based on the composite video, which can simplify user operations.
  • the client device determining the video to be presented based on the composite video includes: the client device receives a user input instruction, where the user input instruction indicates a target shooting device; and the client device determines each frame of the video to be presented through the following process: determining the i-th frame of the video captured by the target shooting device from the i-th frame of the composite video; and determining the i-th frame of the video captured by the target shooting device as the i-th frame of the video to be presented.
  • In this way, the client device can, based on the user input instruction, determine and present the video captured by the target shooting device indicated by the instruction, so that the user can view the video of interest as needed and the video presentation at the client device can be more diverse, which improves the user experience.
  • the method further includes: the client device receives a user's look-around viewing operation for the current frame of the video to be presented; and in response to the look-around viewing operation, the client device presents a look-around image sequence corresponding to the current frame of the video to be presented.
  • the client device can present a look-around image sequence based on the user's look-around viewing operation, so that the user can view the look-around effect more intuitively.
  • Such a variety of presentation methods can improve user experience.
  • the client device presenting the look-around image sequence includes: in response to the look-around viewing operation, the client device determines, from the composite video, the frame corresponding to the current frame of the video to be presented; divides the determined frame of the composite video into multiple images respectively corresponding to the multiple shooting devices; obtains the look-around image sequence based on the multiple images; and presents the look-around image sequence.
  • the look-around image sequence is obtained by using multiple images captured by multiple capture devices, which can make full use of each capture device in the distributed capture system and maximize resource utilization.
  • the number of the plurality of images is equal to the number of the plurality of photographing devices.
  • the client device obtaining the sequence of surround-view images based on the multiple images includes: the client device arranges the multiple images according to the sequence of positions of the multiple shooting devices to obtain the sequence of surround-view images.
  • obtaining the look-around image sequence according to the positions of multiple shooting devices can ensure the presentation effect of the look-around image sequence and avoid errors.
  • the client device obtaining the look-around image sequence based on the multiple images includes: the client device arranges the multiple images according to the positions of the multiple shooting devices; and inserts an intermediate frame between every two adjacent images of the multiple images through frame interpolation to obtain the look-around image sequence.
  • the i-th frame of the composite video is obtained by splicing the i-th frames respectively captured by multiple shooting devices at the same moment.
  • In a second aspect, a video presentation method is provided. The method includes: the central device receives the videos captured by multiple shooting devices in the distributed shooting system; the central device obtains a composite video based on the videos respectively captured by the multiple shooting devices, where the i-th frame of the composite video is obtained based on the i-th frames of the videos captured by the multiple shooting devices at the same time, and i is any positive integer; and the central device sends the composite video to the client device.
  • obtaining the composite video by the central device includes determining each frame of the composite video through the following process: the central device splices the i-th frame of the video captured by multiple shooting devices at the same time, to get the i-th frame of the composite video.
  • the method further includes: the central device presents the video captured by a specific shooting device in the distributed shooting system.
  • an apparatus for video presentation includes: a receiving module configured to receive a composite video from a central device, the i-th frame of the composite video is obtained based on the i-th frame of the video captured by multiple shooting devices in the distributed shooting system at the same time, i is any positive integer; the determination module is configured to determine the video to be presented based on the composite video, and the video to be presented is associated with at least one of the multiple shooting devices; and the presentation module is configured to present the video to be presented.
  • the determination module is configured to determine each frame of the video to be presented through the following process: determine the i-th frame of the video captured by the target shooting device from the i-th frame of the composite video , the target shooting device is the shooting device at the target position among the multiple shooting devices; and the i-th frame of the video shot by the target shooting device is determined as the i-th frame of the video to be presented.
  • the receiving module is further configured to receive a user input instruction, and the user input instruction indicates a target shooting device.
  • the determination module is configured to determine each frame of the video to be presented through the following process: determine the i-th frame of the video shot by the target shooting device from the i-th frame of the composite video; and determine the i-th frame of the video shot by the target shooting device as the i-th frame of the video to be presented.
  • the receiving module is further configured to receive a user's look-around operation on the current frame of the video to be presented; and the presentation module is further configured to present, in response to the look-around operation, the look-around image sequence corresponding to the current frame of the video to be presented.
  • the determination module is configured to: in response to the look-around operation, determine from the composite video the frame corresponding to the current frame of the video to be presented; divide the frame of the composite video corresponding to the current frame into multiple images respectively corresponding to the multiple shooting devices; and obtain the look-around image sequence based on the multiple images.
  • the number of the plurality of images is equal to the number of the plurality of photographing devices.
  • the determining module is configured to: arrange the multiple images according to the sequence of positions of the multiple shooting devices to obtain a sequence of surround-view images.
  • the determination module is configured to: arrange the multiple images according to the order of the positions of the multiple shooting devices; and insert an intermediate frame between every two adjacent images of the multiple images through frame interpolation to obtain the look-around image sequence.
  • the i-th frame of the composite video is obtained by splicing the i-th frame respectively captured by multiple shooting devices at the same moment.
  • an apparatus for video presentation includes: a receiving module configured to receive videos captured by multiple shooting devices in a distributed shooting system; a determining module configured to obtain a composite video based on the videos respectively captured by the multiple shooting devices, where the i-th frame of the composite video is obtained based on the i-th frames of the videos captured by the multiple shooting devices at the same time, and i is any positive integer; and a sending module configured to send the composite video to the client device.
  • the determination module is configured to determine each frame of the composite video through the following process: splicing the i-th frames of the videos captured by the multiple shooting devices at the same time to obtain the i-th frame of the composite video.
  • a presentation module is further included, configured to present the video captured by a specific capture device in the distributed capture system.
  • In a fifth aspect, an electronic device is provided, including a transceiver, a processor, and a memory, where the memory stores instructions to be executed by the processor.
  • when the processor executes the instructions, the electronic device is enabled to: receive the composite video from the central device via the transceiver, where the i-th frame of the composite video is obtained based on the i-th frames of the videos captured by multiple shooting devices in the distributed shooting system at the same time, and i is any positive integer; determine a video to be presented based on the composite video, where the video to be presented is associated with at least one of the multiple shooting devices; and present the video to be presented.
  • the processor executes the instructions to enable the electronic device to determine each frame of the video to be presented through the following process: determine the i-th frame of the video captured by the target shooting device from the i-th frame of the composite video, where the target shooting device is the shooting device at the target position among the multiple shooting devices; and determine the i-th frame of the video captured by the target shooting device as the i-th frame of the video to be presented.
  • the processor executes the instructions so that the electronic device: receives a user input instruction via the transceiver, where the user input instruction indicates a target shooting device; and determines each frame of the video to be presented through the following process: determining the i-th frame of the video shot by the target shooting device from the i-th frame of the composite video; and determining the i-th frame of the video shot by the target shooting device as the i-th frame of the video to be presented.
  • the processor executes the instructions so that the electronic device: receives a user's look-around viewing operation on the current frame of the video to be presented; and in response to the look-around viewing operation, presents the look-around image sequence corresponding to the current frame of the video to be presented.
  • the processor executes the instructions so that the electronic device: in response to the look-around operation, determines from the composite video the frame corresponding to the current frame of the video to be presented; divides the determined frame of the composite video corresponding to the current frame of the video to be presented into a plurality of images respectively corresponding to the plurality of shooting devices; obtains a look-around image sequence based on the plurality of images; and presents the look-around image sequence.
  • the number of the plurality of images is equal to the number of the plurality of capture devices.
  • the processor executes instructions so that the electronic device implements: arranging multiple images according to the order of positions of the multiple shooting devices to obtain a sequence of surround-view images.
  • the processor executes the instructions so that the electronic device: arranges a plurality of images according to the order of the positions of the plurality of shooting devices; and inserts intermediate frames between every two adjacent images of the plurality of images through frame interpolation to obtain a look-around image sequence.
  • the i-th frame of the composite video is obtained by splicing the i-th frame of the video captured by multiple shooting devices at the same moment.
  • the electronic device includes a display screen for presenting the video to be presented or the surround view image sequence.
  • In a sixth aspect, an electronic device is provided, including a transceiver, a processor, and a memory.
  • the memory stores instructions executed by the processor.
  • when the processor executes the instructions, the electronic device is enabled to: receive, via the transceiver, the videos respectively captured by multiple shooting devices in the distributed shooting system; obtain a composite video based on the videos captured by the multiple shooting devices, where the i-th frame of the composite video is obtained based on the i-th frames of the videos captured by the multiple shooting devices at the same time, and i is any positive integer; and send the composite video to the client device via the transceiver.
  • the processor executes instructions to enable the electronic device to determine each frame of the composite video through the following process: splicing the i-th frame of the video captured by multiple shooting devices at the same time , to get the i-th frame of the composite video.
  • the processor executes instructions so that the electronic device implements: presenting the video captured by a specific capture device in the distributed capture system.
  • the electronic device includes a photographing device.
  • a computer-readable storage medium is provided, on which a computer program is stored.
  • when the computer program is executed by a processor, the operations of the method described in the first aspect or the second aspect, or any implementation manner thereof, are performed.
  • a chip or a chip system includes a processing circuit configured to perform the operations of the method according to the first aspect or the second aspect or any implementation thereof.
  • a computer program or computer program product is provided.
  • the computer program or computer program product is tangibly stored on a computer-readable medium and includes computer-executable instructions which, when run on a computer, cause the computer to perform the operations of the methods described in the first aspect or the second aspect or any implementation manner thereof.
  • Figure 1 shows a schematic diagram of an example environment in which embodiments of the present disclosure may be implemented
  • Figure 2 shows a schematic diagram of an example scenario in which embodiments of the present disclosure may be implemented
  • Fig. 3 shows a schematic interaction diagram of a video rendering process according to some embodiments of the present disclosure
  • FIG. 4 shows a schematic flowchart of a process of determining a composite video according to some embodiments of the present disclosure
  • Fig. 5 shows a schematic diagram of a method of determining a composite video according to some embodiments of the present disclosure
  • Fig. 6 shows a schematic flowchart of a process of presenting a sequence of surround-view images according to some embodiments of the present disclosure
  • Fig. 7 shows a schematic diagram of frame insertion according to some embodiments of the present disclosure
  • Fig. 8 shows a schematic flowchart of a video rendering process according to some embodiments of the present disclosure
  • Fig. 9 shows a schematic flowchart of a video rendering process according to some embodiments of the present disclosure
  • Fig. 10 shows a schematic block diagram of an apparatus for video presentation according to some embodiments of the present disclosure
  • FIG. 11 shows another schematic block diagram of an apparatus for video presentation according to some embodiments of the present disclosure.
  • Figure 12 shows a schematic block diagram of an example device that may be used to implement embodiments of the present disclosure.
  • the distributed shooting system can include a shooting array composed of at least two shooting devices, the shooting devices can take images or videos, and the shooting devices can be called image acquisition devices, etc.
  • the distributed shooting system can also be called a distributed camera system, a distributed shooting array, a distributed image acquisition system, and so on.
  • an image in the embodiments of the present disclosure may be an image captured by a shooting device, or may be a frame of a video captured by an image acquisition device. A video may also be referred to as an image stream, a frame stream, a video stream, a media stream, etc., which is not limited in the present disclosure.
  • At least two shooting devices in the distributed shooting system can shoot at the same time to obtain more visual information, and then can realize multi-channel image display on the same screen and real-time stitching of multiple images through mutual cooperation.
  • a distributed camera system can be implemented in various scenarios.
  • the distributed shooting system can be implemented as a surround-view shooting array: multiple shooting devices in the system can be arranged around the target object at certain angles and distances, and each shooting device is responsible for shooting the target object within a certain field of view. In this way, when the images captured by the shooting devices are played in sequence, it is as if a human eye, taking the target object as the center, observes the target object from different angles along an arc in one direction.
  • the distributed shooting system has also been applied in network live broadcasting.
  • multiple shooting devices of the distributed shooting system can shoot separately, and the anchor can choose the video to be presented so that customers can view it through the client device.
  • the content that the customer views is determined by the anchor; that is to say, the client device is only a passive receiver of the video, so that all customers see exactly the same live content on different client devices. If the anchor does not switch the shooting device, the videos shot by the other shooting devices in the distributed shooting system are wasted, and the videos shot by the multiple shooting devices cannot be fully utilized. In this way, the experience of the user watching the live broadcast is seriously affected.
  • the present disclosure provides a video presentation solution.
  • the client device can receive the composite video obtained from the videos captured by multiple shooting devices, and then perform local presentation based on the composite video, which can satisfy different users' different presentation needs and improve the user experience.
  • FIG. 1 shows a schematic diagram of an example environment 100 in which embodiments of the present disclosure may be implemented.
  • a central device 110, a distributed shooting system 120, a client device 130-1, a client device 130-2, ..., and a client device 130-N are shown.
  • the distributed shooting system 120 includes multiple shooting devices, such as the shooting device 122-1, the shooting device 122-2, and the shooting device 122-3 in FIG. 1 .
  • the shooting device 122-1, the shooting device 122-2, and the shooting device 122-3 are collectively referred to as the shooting device 122 in the embodiment of the present disclosure
  • the client device 130-1, the client device 130-2, ..., and the client device 130-N are collectively referred to as the client devices 130
  • although the distributed shooting system 120 in FIG. 1 includes three shooting devices, in practical applications the number of shooting devices included in the distributed shooting system 120 can be set according to the scenario and the like.
  • the photographing device 122 may be an independent device or may be a peripheral device of other electronic devices, for example, the photographing device 122 may be implemented as an electronic device having an image acquisition function and the like.
  • the shooting device 122 may include a camera, a video camera, a capture device, a mobile phone, a tablet computer, a wearable device, etc., which is not limited in the present disclosure.
  • the central device 110 can interact with the client device 130.
  • the central device 110 can be an electronic device that interacts with the host, and the client device 130 can be an electronic device that interacts with the user. That is to say, there is a communication connection between the central device 110 and the client device 130.
  • the central device 110 may interact with the client device 130 via a server (eg, a streaming server).
  • the central device 110 and the client device 130 may be implemented as electronic devices such as smart phones, tablet computers, wearable devices and the like.
  • Embodiments of the present disclosure do not limit the number (N) of client devices. For example, in a live network scenario, the number of client devices may be hundreds, thousands or even larger.
  • the central device 110 can interact with the distributed shooting system 120, for example, the images or videos captured by the shooting device 122-1, the shooting device 122-2 and the shooting device 122-3 can be transmitted to the central device 110.
  • Wired methods may include, but are not limited to, optical fiber connections, Universal Serial Bus (USB) connections, and the like.
  • Wireless methods may include, but are not limited to, mobile communication technologies (including but not limited to 2G, 3G, 4G, 5G, 6G, etc.), Wi-Fi, Bluetooth, point-to-point (P2P) connections, and the like.
  • the central device 110 and the shooting devices 122 can be in the same local area network environment, and the central device 110 can discover, through its distributed shooting system control module (or connection discovery module or other module), the shooting devices 122 located in the same local area network environment and establish Wi-Fi connections with the shooting devices 122; for example, the central device 110 and the shooting devices 122 can be connected to the same router.
  • the communication connection modes between the central device 110 and different shooting devices 122 may be the same or different.
  • the connection mode between the central device 110 and the shooting device 122-1 may be different from the connection mode between the central device 110 and the shooting device 122-2.
  • although the central device 110 in FIG. 1 is a device independent of the distributed shooting system 120, in some embodiments the central device 110 can be implemented as a part of the distributed shooting system 120; for example, the central device 110 can be the electronic device corresponding to the shooting device 122-2.
  • Embodiments of the present disclosure do not limit the arrangement of each photographing device in the distributed photographing system 120 .
  • the photographing device 122-1, the photographing device 122-2, and the photographing device 122-3 may be arranged side by side, so that when photographing the target object, the photographing directions of the target object are parallel or substantially consistent.
  • the photographing device 122-1, the photographing device 122-2, and the photographing device 122-3 may be arranged around the target object, so that when photographing the target object, the photographing direction of the target object forms a certain angle.
  • in the example scenario shown in FIG. 2, the distributed shooting system 120 includes seven shooting devices, namely the shooting devices 122-1 to 122-7, and the electronic device corresponding to the shooting device 122-4 is the central device 110.
  • seven photographing devices may be installed on a fixed bracket 201 , and each of the seven photographing devices may capture an image of a target object 202 .
  • the target object 202 may be an item to be displayed by the host.
  • the fixing bracket 201 is realized as a ring bracket, and a plurality of fixing buckles are arranged on the fixing bracket 201 for fixing a plurality of photographing devices 122 .
  • the position and angle of each shooting device 122 relative to the center of the fixed bracket 201 can be fixed.
  • the plurality of shooting devices 122 in FIG. 2 can shoot towards the center of the fixed bracket 201 , that is to say, the target object 202 is located near the center of the fixed bracket 201 .
  • the embodiments of the present disclosure are not limited thereto.
  • multiple shooting devices 122 may also shoot outside the fixed bracket 201 , so that the field of view can be expanded, so as to perform panoramic live broadcast.
  • each of the seven shooting devices in FIG. 2 can be implemented as a camera on a smart terminal. For example, if the smart terminal is a mobile phone, the seven mobile phones can be installed on the fixed bracket 201 at the corresponding positions.
  • seven shooting devices are installed on the fixed bracket 201 respectively, so that the shooting angles of the seven shooting devices to shoot the target object 202 are fixed, that is, each shooting device cannot be moved or rotated.
  • the included angle between the centerlines of every two adjacent shooting devices may be fixed, for example, the included angle may be set to 20° or other values.
  • the shooting areas of two adjacent shooting devices in the distributed shooting system 120 may partially overlap.
  • the shooting device 122-1 shoots the target object 202 to obtain a first image
  • the shooting device 122-2 shoots the target object 202 to obtain a second image
  • the first area in the first image and the second area in the second image correspond to the same shooting area of the target object 202.
  • the first area occupies 1/4 or more of the first image
  • the second area occupies 1/4 or more of the second image.
  • the embodiments of the present disclosure can be applied to the scene of network live broadcast.
  • the host can prepare and fix various shooting devices 122 in advance to form a distributed shooting system.
  • multiple photographing devices 122 are installed on the fixed bracket 201 .
  • the anchor can also select the central device 110 , for example, set the electronic device corresponding to the shooting device 122 - 4 as the central device 110 .
  • the anchor can create a live broadcast room through the central device 110, for example, connect to a server of the live broadcast platform through the central device 110 to create a live broadcast room.
  • the central device 110 may request a streaming address, such as a Uniform Resource Locator (URL), from the server of the live broadcast platform.
  • Stream push can be a process in which the central device 110 pushes audio and video streams to the server of the live broadcast platform, and the stream push address is an address corresponding to the stream push process, and the format of the stream push address depends on the protocol used.
  • the client device 130 can obtain the corresponding audio and video stream from the server of the live broadcast platform through the pull stream address corresponding to the push stream address. Stream pulling can be a process in which the client device 130 pulls the audio and video stream from the server of the live broadcast platform, the pull stream address is the address corresponding to the stream pulling process, and the format of the pull stream address depends on the protocol used. It can be understood that the number of client devices 130 connected to the central device 110 may change as customers enter or exit the live broadcast room.
  • the client device 130 can establish a communication connection with the central device 110 through the user's operation of entering the live broadcast room.
  • the central device 110 can send the live room information to the client device 130 .
  • the client device 130 may send an information request to the live broadcast platform server, and then obtain the live room information from the central device 110 .
  • the live room information may include system information of the distributed shooting system.
  • the system information of the distributed shooting system may include the number of shooting devices included in the distributed shooting system. For example, in the scenario shown in FIG. 2, the number is seven.
  • the system information of the distributed shooting system may include the size of images captured by each shooting device, such as width and height. For example, if the images captured by each capturing device have the same size, then w and h may be included to represent the width and height of the image captured by a single capturing device, respectively.
  • the system information of the distributed shooting system may include the identifier of the shooting device associated with the central device 110. For example, in the scenario shown in FIG. 2, the central device 110 is the electronic device corresponding to the shooting device 122-4, and the identifier of the shooting device associated with the central device 110 may be 4. It can be understood that the system information of the distributed shooting system may also include other information, such as the resolution of the shooting devices, which will not be listed here.
  • the live broadcast room information may also include a streaming address for the client device 130, and then the client device 130 may obtain the video through the streaming address. It can be understood that the information of the live broadcast room may also include other information, such as the address of the live broadcast room, the broadcast time of the live broadcast, etc., which will not be listed here.
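  • As an illustration only, the live room information and system information described above could be modeled on the client side roughly as in the following Python sketch; the class and field names (SystemInfo, LiveRoomInfo, pull_url, and so on) are assumptions made for this sketch and are not defined in the present disclosure.

```python
from dataclasses import dataclass


@dataclass
class SystemInfo:
    """System information of the distributed shooting system (illustrative only)."""
    num_devices: int         # number of shooting devices, e.g. 7 in the FIG. 2 scenario
    frame_width: int         # width w of the image captured by a single shooting device
    frame_height: int        # height h of the image captured by a single shooting device
    central_device_id: int   # identifier of the shooting device associated with the central device, e.g. 4


@dataclass
class LiveRoomInfo:
    """Live room information sent to the client device 130 (illustrative only)."""
    system_info: SystemInfo
    pull_url: str            # pull stream address through which the client device obtains the video
```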
  • FIG. 3 shows a schematic interaction diagram of a video rendering process 300 according to some embodiments of the present disclosure.
  • Process 300 shown in FIG. 3 involves central device 110 and client device 130 .
  • the central device 110 determines 310 a composite video based on the multiple videos respectively captured by the multiple capture devices 122 of the distributed capture system 120 .
  • the process of determining the composite video by the central device 110 may refer to FIG. 4 , which shows a schematic flowchart of a process 400 of determining the composite video by the central device 110 .
  • video capture 410 is performed by capture device 122 .
  • the shooting device 122-1 shoots to get video 1
  • the shooting device 122-2 shoots to get video 2
  • the shooting device 122-3 shoots to get video 3.
  • the central device 110 and the shooting device 122 may perform time synchronization 402 .
  • Embodiments of the present disclosure do not limit the specific manner of time synchronization.
  • time synchronization information between the local clock of the central device 110 and the local clock of the photographing device 122 can be determined.
  • different capturing devices 122 can capture images at the same time. For example, the i-th frame captured by the capturing device 122-1 and the i-th frame captured by the capturing device 122-2 are acquired simultaneously.
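  • The disclosure does not limit how time synchronization is performed; purely as one conventional possibility (not specified here), a two-way NTP-style exchange could be used to estimate the offset between the central device's clock and a shooting device's clock, as in the sketch below, where send_request is a hypothetical callable.

```python
import time


def estimate_clock_offset(send_request, local_clock=time.monotonic):
    """Estimate the offset of a shooting device's clock relative to the central device.

    `send_request` is a hypothetical callable: it sends a timestamp request to the
    shooting device and returns (t2, t3), the device-side receive and reply times.
    The classic NTP-style estimate below assumes roughly symmetric network delay;
    the disclosure itself does not mandate any particular synchronization method.
    """
    t1 = local_clock()       # central device: time the request is sent
    t2, t3 = send_request()  # shooting device: request received (t2), reply sent (t3)
    t4 = local_clock()       # central device: time the reply is received
    return ((t2 - t1) + (t3 - t4)) / 2.0
```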
  • the shooting device 122 sends 420 the captured video to the central device 110 .
  • the central device 110 may acquire video 1, video 2 and video 3.
  • the central device 110 may perform preprocessing 422 on the video from the shooting device 122 .
  • pre-processing may be for some or all of the video.
  • preprocessing may include, but is not limited to: beautification, watermarking, mosaicing, etc.
  • watermarking may include adding all or part of the following information to some or all frames of the video: anchor name, information of the target object, identification of the shooting device 122, and the like.
  • central device 110 obtains 430 composite video. Taking three shooting devices as an example, the central device 110 can synthesize video 1, video 2 and video 3 to obtain a composite video.
  • the i-th frame of the composite video may be obtained based on the i-th frame captured by the capturing device 122 .
  • the i-th frame captured by each shooting device 122 may be combined to obtain the i-th frame of the composite video.
  • i is any positive integer, so that by obtaining each frame of the composite video, the composite video can be obtained.
  • the i-th frame captured by each shooting device 122 may be spliced to obtain the i-th frame of the composite video.
  • the i-th frame captured by each capturing device 122 has the same size, for example, the width is w and the height is h.
  • the width of the i-th frame of the composite video can be equal to the sum of the widths of the i-th frames captured by each shooting device 122, and the height of the i-th frame of the composite video is equal to h.
  • the order of the photographing devices 122 may be: taking the target object 202 as a reference, the order of the photographing devices arranged clockwise.
  • the clockwise order of the photographing devices 122 is: photographing device 122-1, photographing device 122-2, photographing device 122-3, photographing device 122-4, photographing device 122-5.
  • each frame in the video 1 captured by the shooting device 122-1 is represented as f11, f12, ..., f1n
  • each frame in the video 2 captured by the shooting device 122-2 is represented as f21, f22, ..., f2n, and each frame in the video 3 captured by the shooting device 122-3 is represented as f31, f32, ..., f3n
  • the first frame of the composite video can be formed by sequentially splicing f11, f21 and f31, ..., and the nth frame of the composite video can be formed by sequentially splicing f1n, f2n and f3n.
  • the central device 110 can take the first frame from each of the multiple videos in the order of the shooting devices 122, and after the first frames of all the videos have been taken, splice the multiple first frames in the order of the shooting devices 122 as the first frame of the composite video. The central device 110 can then take the second frame from each of the multiple videos in the order of the shooting devices 122, and after the second frames of all the videos have been taken, splice the multiple second frames in the order of the shooting devices 122 as the second frame of the composite video. By looping in this way, the composite video can be obtained.
  • the above embodiments are only illustrative of the way to obtain the composite video, which is not limited by the embodiments of the present disclosure.
  • for example, the frames may be synthesized according to the reverse order of the shooting devices 122, or according to other predetermined rules.
  • splicing can be performed along the width direction, along the height direction, or in other ways.
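  • A minimal sketch of the splicing step is shown below. It assumes width-direction splicing of same-sized frames supplied in the agreed device order (for example, clockwise around the target object); the function name and the use of OpenCV are choices made for this sketch rather than requirements of the disclosure.

```python
import cv2


def splice_composite_frame(frames):
    """Splice the i-th frames from all shooting devices into one composite frame.

    `frames` is a list of h x w x 3 arrays, already ordered by the positions of the
    shooting devices. The result has height h and width M * w, matching the
    width-direction splicing described above.
    """
    assert len({f.shape for f in frames}) == 1, "all device frames must share the same size"
    return cv2.hconcat(list(frames))  # equivalent to numpy.hstack(frames) for same-height images
```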
  • central device 110 sends 320 composite video to client device 130 .
  • the client device 130 can obtain the composite video via the streaming server by pulling the streaming address.
  • the central device 110 may send the composite video to the client device 130 after coding, compressing, encapsulating, and so on.
  • video compression technologies such as H.264 may be used for encoding and compression.
  • the video may be encapsulated into a streaming media format such as FLV or TS. In this way, the demand for network bandwidth and the like can be reduced, the transmission rate can be increased, and real-time performance can be guaranteed.
  • the client device 130 may determine the composite video through operations such as decapsulation, decoding and decompression.
  • the client device 130 can obtain the encapsulated video data in FLV or TS format by pulling the stream from the live broadcast platform server, and then obtain encoded and compressed video data through parsing and the like. Further, the client device 130 can perform a decoding operation to restore the composite video.
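  • Whether a client can open the pull stream address directly depends on the platform, the protocol, and the decoder available on the device; as a rough sketch only, OpenCV's VideoCapture (with an FFmpeg backend) can often read such a network stream and return decoded frames, which stands in here for the decapsulation and decoding steps described above.

```python
import cv2


def read_composite_frames(pull_url):
    """Yield decoded frames of the composite video pulled from the live broadcast platform.

    `pull_url` is the pull stream address from the live room information. Whether a
    particular FLV/TS/RTMP address can be opened depends on the OpenCV build and the
    platform, so this is an illustrative sketch rather than a definitive client.
    """
    cap = cv2.VideoCapture(pull_url)
    try:
        while True:
            ok, frame = cap.read()  # decapsulation and H.264 decoding are handled by the backend
            if not ok:
                break
            yield frame
    finally:
        cap.release()
```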
  • client device 130 determines 330 a video to present, wherein the video to present is associated with at least one capture device 122 . Further, the client device 130 presents 340 the video to be presented.
  • presenting a video may refer to displaying video frames frame by frame, or may be understood as playing a video.
  • the video to be presented may be composite video. Since one frame of the composite video includes images captured by various capture devices in the distributed capture system, the client device 130 can simultaneously present images about the target object 202 captured by multiple capture devices. In this way, the user at the client device 130 can see the images of the target object 202 from various angles at the same time, which can facilitate the user to make subsequent selections, for example, which shooting device to view the video for.
  • the video to be presented may be a video shot by a specific shooting device in the distributed shooting system.
  • the part shot by a specific shooting device may be separated from each frame of the composite video, so as to determine the video to be presented.
  • the specific shooting device may be any of the following: a shooting device located at the target location, a shooting device designated by the user of the client device 130 , and the like.
  • the target location may be the central device 110
  • the video to be presented may be a video shot by a shooting device corresponding to the central device 110
  • the photographing device corresponding to the central device 110 may be a photographing device included in the central device 110 .
  • the photographing device corresponding to the central device 110 may be referred to as a central photographing device.
  • the client device 130 can separate, from the composite video, the video to be presented corresponding to the central shooting device, based on the number of shooting devices in the distributed shooting system (assumed to be M) and the identifier of the central shooting device (assumed to be p).
  • assume that the composite video is determined in a splicing manner similar to that shown in FIG. 5. Then, for any frame of the composite video (assumed to be the i-th frame), the part whose width range is [(p-1)×w, p×w] can be cropped from the i-th frame of the composite video as the i-th frame of the video to be presented.
  • alternatively, the i-th frame of the composite video can be split into M images, that is, the images captured by the M shooting devices can be restored, and then the image shot by the central shooting device can be determined from the M images.
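  • Under the width-spliced layout assumed earlier, both approaches can be sketched as follows: either crop the width range [(p-1)×w, p×w] directly, or split the composite frame into M images and pick the one for the target device. The function names are illustrative, not part of the disclosure.

```python
import numpy as np


def split_composite_frame(composite_frame, num_devices):
    """Split one composite frame (height h, width M * w) back into the M device images."""
    return np.array_split(composite_frame, num_devices, axis=1)


def target_device_frame(composite_frame, num_devices, device_id):
    """Extract the frame shot by the device whose 1-based identifier is p = device_id.

    Equivalent to cropping the width range [(p - 1) * w, p * w] of the composite frame.
    """
    w = composite_frame.shape[1] // num_devices
    p = device_id
    return composite_frame[:, (p - 1) * w : p * w]
```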
  • since the central device 110 is generally the electronic device operated by the host, in this way the video presented at the client device 130 can be consistent with the content on the central device 110 that the host interacts with. Especially when the host gives a spoken introduction of the target object 202, the user can view the specific details introduced by the host in time.
  • the target location may be an intermediate location of multiple shooting devices
  • the video to be presented may be a video shot by a shooting device at an intermediate location in a distributed shooting system.
  • for example, the number of the shooting device at the middle position can be determined based on the number M of the multiple shooting devices in the distributed shooting system.
  • the image of the front face of the target object 202 can be viewed by the user at the client device 130 , so that the user can view more details of the target object 202 .
  • the video to be presented may be a video shot by a user-specified shooting device.
  • the client device 130 may receive an input instruction from the user, and the input instruction may indicate a shooting device among the multiple shooting devices.
  • the user may input a shooting device number, such as "2", so that the client device 130 can obtain the input instruction.
  • for another example, when the client device 130 is playing a video shot by a certain shooting device (assume that its number is n1), the user can swipe left or right to determine the number of the designated shooting device. For example, swiping left indicates that the shooting device number is decreased by one, that is, the designated shooting device number is n1-1; swiping right indicates that the shooting device number is increased by one, that is, the designated shooting device number is n1+1.
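  • A small sketch of this number switching is given below; how the number behaves at the ends of the range (clamping here) is an assumption of the sketch, since the disclosure only states that a left swipe decreases the number by one and a right swipe increases it by one.

```python
def switch_device_number(current, direction, num_devices):
    """Map a swipe direction to a new shooting device number.

    `direction` is "left" (number decreases by one) or "right" (number increases by
    one). Numbers are clamped to the range [1, num_devices]; clamping rather than
    wrapping around is an illustrative choice, not specified in the disclosure.
    """
    if direction == "left":
        candidate = current - 1
    elif direction == "right":
        candidate = current + 1
    else:
        raise ValueError("direction must be 'left' or 'right'")
    return max(1, min(num_devices, candidate))
```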
  • in this way, the user at the client device 130 can determine, by inputting instructions, which shooting device's video is to be presented, which realizes switching of the video presented on the client device 130, improves the user's autonomy, and better meets the needs of customers. The video presented on the client device 130 does not need to be consistent with the video on the anchor's electronic device, and the client device 130 is no longer a passive video receiver. On the contrary, the user can independently select the video of interest without affecting the anchor's electronic device or other client devices 130.
  • the embodiment of the present disclosure does not limit the video presented on the central device 110, for example, it may be a video captured by the central shooting device.
  • the client device 130 may receive 350 a user's look-around operation. Further, the client device 130 may present 360 a surround-view image sequence.
  • the user can click a specific area on the interface of the client device 130 to perform a look-around operation, for example, click a "look around" button on the specific area.
  • the user can operate a specific gesture on the interface of the client device 130 to perform a look-around operation, for example, the specific gesture is drawing a circle or a semi-arc.
  • FIG. 6 shows a schematic flowchart of a process 600 of presenting a sequence of surround-view images according to some embodiments of the present disclosure.
  • the client device 130 determines a current frame of the composite video corresponding to the current frame of the video to be presented in response to the look-around viewing operation.
  • for example, if the current frame of the video to be presented is the t-th frame, the t-th frame of the composite video can be obtained. It can be understood that the t-th frame of the composite video includes the images taken by each shooting device in the distributed shooting system.
  • the client device 130 splits the current frame of the composite video into multiple images.
  • splitting may be performed based on the number of multiple shooting devices, that is to say, the number of multiple images is equal to the number of multiple shooting devices.
  • the multiple images are images captured by multiple capturing devices respectively.
  • the client device 130 obtains a sequence of surround view images based on the plurality of images.
  • the multiple images may be sorted according to the position of the shooting device to obtain a sequence of surround-view images. That is to say, multiple images may be arranged sequentially according to the sequence of positions of multiple shooting devices to obtain a sequence of surround-view images. For example, in the scene shown in FIG. 2 , the image captured by the shooting device 122 - i is located at the ith position of the look-around image sequence, and i is any value from 1 to 7.
  • the multiple images may be arranged sequentially according to the position sequence of the multiple shooting devices, and at least one frame may be inserted between every two adjacent images to form a look-around image sequence. It can be understood that, in this embodiment, the number of images in the look-around image sequence is greater than the number of shooting devices.
  • At least one frame may be inserted between two adjacent images through a frame insertion operation.
  • the inserted at least one frame may be referred to as a virtual frame or an intermediate frame, and embodiments of the present disclosure do not limit the manner of frame insertion.
  • Frame interpolation can also be called supplementary frame or animation supplementary frame, and virtual frames can be obtained through algorithms such as local interpolation. In this way, through the frame insertion process, a virtual frame is inserted between two adjacent images, which can ensure the continuity of image changes between the two adjacent images.
  • assume that the number of the multiple shooting devices is m, and the multiple images are the images captured by the multiple shooting devices, denoted as f1t, f2t, f3t, ..., fmt.
  • four virtual frames may be inserted between every two images, so as to obtain a look-around image sequence 700 .
  • the number of images included in the surround-view image sequence obtained after frame interpolation is: m + 4×(m-1).
  • the number of images in the look-around image sequence can be expanded, so that the look-around image sequence can be presented more smoothly later.
  • there is no limitation on the number of virtual frames to be inserted during the frame interpolation process.
  • the number of virtual frames inserted between every two adjacent frames may be a preset value, for example, the preset value is 4 in FIG. 7 , and it can be understood that the preset value may also be other values.
  • the preset value can be preset according to the angle between two adjacent shooting devices, the number of shooting devices, and the like.
  • the number of virtual frames inserted between different pairs of adjacent frames may be equal or unequal; for example, the virtual frames inserted between f1t and f2t have a first number, the virtual frames inserted between f2t and f3t have a second number, and the first number may or may not be equal to the second number.
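  • A sketch of building the look-around image sequence from the m per-device images is given below. The disclosure leaves the interpolation algorithm open (local interpolation is mentioned only as an example), so the simple cross-fade blending via cv2.addWeighted and the per-gap frame count used here are assumptions of this sketch.

```python
import cv2


def build_lookaround_sequence(images, frames_per_gap=4):
    """Build a look-around image sequence from the m per-device images.

    `images` must already be ordered by the positions of the shooting devices.
    Between every two adjacent images, `frames_per_gap` virtual frames are inserted;
    cross-fade blending stands in for whatever interpolation algorithm a real
    implementation uses. With m source images the result contains
    m + frames_per_gap * (m - 1) frames (e.g. 7 + 4 * 6 = 31 when m = 7).
    """
    sequence = [images[0]]
    for prev, nxt in zip(images, images[1:]):
        for k in range(1, frames_per_gap + 1):
            alpha = k / (frames_per_gap + 1)
            virtual = cv2.addWeighted(prev, 1.0 - alpha, nxt, alpha, 0.0)
            sequence.append(virtual)
        sequence.append(nxt)
    return sequence
```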
  • the client device 130 sequentially presents each image in the sequence of look-around images.
  • the client device 130 may also present corresponding images in the look-around image sequence based on the user's left and right sliding operations. In this way, the user can view the look-around effect of the target object 202 according to his own needs.
  • the embodiments of the present disclosure provide a real-time surround view live broadcast solution based on a distributed shooting system.
  • the client device can receive the composite video from the central device, and then present the video to be presented or the surround-view effect according to actual needs or user instructions.
  • the user at the client device can independently determine the presented content, and the client device is no longer a single passive content receiver, which can greatly improve the user's interactive experience.
  • FIG. 8 shows a schematic flowchart of a video rendering process 800 according to some embodiments of the present disclosure.
  • Process 800 may be performed by client device 130 as shown in FIG. 1 .
  • the client device 130 receives the composite video from the central device, the i-th frame of the composite video is obtained based on the i-th frame of the video captured by multiple shooting devices in the distributed shooting system at the same time, i is Any positive integer.
  • the client device 130 determines a video to present based on the composite video, the video to present being associated with at least one capture device of the plurality of capture devices.
  • client device 130 presents the video to be presented.
  • the i-th frame of the composite video is obtained by splicing the i-th frames captured by multiple shooting devices at the same time. For example, at the same moment, the multiple shooting devices capture corresponding frames respectively; the central device may then splice the corresponding frames and use the spliced frame as the frame of the composite video corresponding to that moment.
  • the client device 130 determining the video to be presented based on the composite video may include determining each frame of the video to be presented through the following process: the client device 130 determines, from the i-th frame of the composite video, the i-th frame of the video shot by the target shooting device, where the target shooting device is the shooting device at the target position among the multiple shooting devices; and the i-th frame of the video shot by the target shooting device is determined as the i-th frame of the video to be presented.
  • the target location may be an intermediate location of multiple shooting devices.
  • the client device 130 determining the video to be presented based on the composite video includes: the client device 130 receives a user input instruction, where the user input instruction indicates the target shooting device; and the client device 130 determines each frame of the video to be presented through the following process: determining the i-th frame of the video shot by the target shooting device from the i-th frame of the composite video; and determining the i-th frame of the video shot by the target shooting device as the i-th frame of the video to be presented.
  • each frame of the video to be presented can be obtained frame by frame by sequentially setting i as 1, 2, . . .
  • the user's input instruction may be based on the user's sliding operation on the interface, and the sliding operation may be left sliding or right sliding to indicate that the target shooting device is determined by moving the position left or right respectively.
  • the i-th frame of the video captured by the target shooting device may be determined from the i-th frame of the composite video as the i-th frame of the video to be presented.
  • the image size of the i-th frame of the video to be presented is smaller than the image size of the i-th frame of the composite video.
  • the client device 130 receives a user's look-around operation on the current frame of the video to be presented.
  • client device 130 presents a sequence of look-around images corresponding to a current frame of video to be presented in response to a look-around viewing operation.
  • the client device 130 presenting the look-around image sequence in response to the look-around viewing operation may include: the client device 130 determining from the composite video a frame corresponding to the current frame of the video to be presented in response to the look-around viewing operation; The determined composite video frame corresponding to the current frame of the video to be presented is divided into multiple images respectively corresponding to multiple shooting devices; obtaining a surround view image sequence based on the multiple images; and presenting the surround view image sequence.
  • the number of multiple images is equal to the number of multiple shooting devices.
  • the client device 130 obtaining the sequence of surround-view images based on the multiple images may include: the client device 130 arranging the multiple images according to the order of positions of the multiple shooting devices to obtain the sequence of surround-view images.
  • the client device 130 obtaining the surround-view image sequence based on the multiple images includes: the client device 130 arranges the multiple images according to the order of the positions of the multiple shooting devices; and inserts intermediate frames between every two adjacent images through frame interpolation to obtain the surround-view image sequence.
  • FIG. 9 shows a schematic flowchart of a video rendering process 900 according to some embodiments of the present disclosure.
  • the process 900 may be executed by the central device 110 as shown in FIG. 1 .
  • the central device 110 receives the videos captured by each of the multiple capture devices in the distributed capture system.
  • the central device 110 obtains a composite video based on the video captured by each of the multiple shooting devices, and the i-th frame of the composite video is obtained based on the i-th frame of the video captured by the multiple shooting devices at the same time, i is any positive integer.
  • the central device 110 sends the composite video to the client devices.
  • obtaining the composite video by the central device 110 may include determining each frame of the composite video through the following process: the central device 110 splices the i-th frame of the video captured by multiple shooting devices at the same time, to Get the i-th frame of the composite video. For example, at the same moment, multiple shooting devices capture corresponding multi-frames respectively, then the central device 110 can splice the corresponding multi-frames, and use the spliced frames as the frames corresponding to the moment in the composite video .
  • it may further include: presenting at the central device 110 a video captured by a specific capture device in the distributed capture system.
  • the central device 110 presents the video captured by the target capture device after acquiring the videos captured by the multiple capture devices in the distributed capture system.
  • the target photographing device may be a device interacting with the user in the distributed photographing system, or the target photographing device may be a device at an intermediate position in the distributed photographing system.
  • Fig. 10 shows a schematic block diagram of an apparatus 1000 for video presentation according to some embodiments of the present disclosure.
  • the apparatus 1000 may be implemented as or included in the client device 130 of FIG. 1 .
  • Apparatus 1000 may include a plurality of modules for performing corresponding steps in process 800 as discussed in FIG. 8 .
  • the device 1000 includes a receiving module 1010, a determining module 1020 and a presenting module 1030; a minimal structural sketch of these modules is given after this list.
  • the receiving module 1010 is configured to receive the composite video from the central device, the i-th frame of the composite video is obtained based on the i-th frame of the video captured by multiple shooting devices in the distributed shooting system at the same time, and i is any positive integer .
  • the determining module 1020 is configured to determine a video to be presented based on the composite video, and the video to be presented is associated with at least one capturing device among the plurality of capturing devices.
  • the presentation module 1030 is configured to present the video to be presented.
  • the i-th frame of the composite video is obtained by splicing the i-th frames captured by multiple shooting devices at the same time.
  • the determination module 1020 may be configured to determine each frame of the video to be presented through the following process: determining, from the i-th frame of the composite video, the i-th frame of the video captured by the target shooting device, the target shooting device being the shooting device located at the target position among the multiple shooting devices; and determining the i-th frame of the video captured by the target shooting device as the i-th frame of the video to be presented.
  • the receiving module 1010 may also be configured to receive a user input instruction, where the user input instruction indicates the target shooting device.
  • the determining module 1020 may be configured to determine each frame of the video to be presented through the following process: determining, from the i-th frame of the composite video, the i-th frame of the video shot by the target shooting device; and determining the i-th frame of the video shot by the target shooting device as the i-th frame of the video to be rendered.
  • the receiving module 1010 may also be configured to receive a user's look-around operation on the current frame of the video to be presented.
  • the presenting module 1030 may also be configured to present a surround view image sequence corresponding to the current frame of the video to be presented in response to the surround view operation.
  • the determining module 1020 may be configured to: in response to the look-around operation, determine from the composite video the frame corresponding to the current frame of the video to be presented; split the determined frame of the composite video into multiple images respectively corresponding to the multiple shooting devices; and obtain the look-around image sequence based on the multiple images.
  • the number of the plurality of images is equal to the number of the plurality of photographing devices.
  • the determining module 1020 may be configured to: arrange the multiple images according to the sequence of positions of the multiple shooting devices to obtain a sequence of surround-view images.
  • the determination module 1020 may be configured to: arrange the multiple images according to the position order of the multiple shooting devices; and insert an intermediate frame between every two adjacent images of the multiple images through a frame insertion operation, to obtain a look-around image sequence.
  • the number of images in the look-around image sequence is greater than the number of multiple shooting devices.
  • the apparatus 1000 in FIG. 10 may be implemented as the client device 130, or may be implemented as a chip or chip system in the client device 130, which is not limited by the embodiment of the present disclosure.
  • the apparatus 1000 in FIG. 10 can be used to implement the processes described above in conjunction with the client device 130 in FIG. 3 to FIG. 9 , and details are not repeated here for brevity.
  • Fig. 11 shows another schematic block diagram of an apparatus 1100 for video presentation according to some embodiments of the present disclosure.
  • the apparatus 1100 may be implemented as or included in the central device 110 in FIG. 1 .
  • Apparatus 1100 may include a plurality of modules for performing corresponding steps in process 900 as discussed in FIG. 9 .
  • the device 1100 includes a receiving module 1110 , a determining module 1120 and a sending module 1130 .
  • the receiving module 1110 is configured to receive videos captured by multiple capturing devices in the distributed capturing system.
  • the determining module 1120 is configured to obtain a composite video based on the video captured by each of the multiple shooting devices, the i-th frame of the composite video is obtained based on the i-th frame of the video captured by the multiple shooting devices at the same time, i is any positive integer.
  • the sending module 1130 is configured to send the composite video to the client device.
  • the determination module 1120 may be configured to determine each frame of the composite video through the following process: splicing the i-th frames of the videos captured by the multiple shooting devices at the same moment to obtain the i-th frame of the composite video.
  • the apparatus 1100 may further include a presentation module configured to present videos captured by specific capture devices in the distributed capture system.
  • the apparatus 1100 in FIG. 11 may be implemented as the central device 110 , or may be implemented as a chip or chip system in the central device 110 , which is not limited by the embodiments of the present disclosure.
  • the apparatus 1100 in FIG. 11 can be used to implement the processes described above in conjunction with the central device 110 in FIG. 3 to FIG. 9 , and for the sake of brevity, details are not repeated here.
  • FIG. 12 shows a schematic block diagram of an example device 1200 that may be used to implement embodiments of the present disclosure.
  • the device 1200 may be implemented as or included in the client device 130 of FIG. 1 , or the device 1200 may be implemented as or included in the central device 110 of FIG. 1 .
  • the device 1200 includes a central processing unit (Central Processing Unit, CPU) 1201, a read-only memory (Read-Only Memory, ROM) 1202, and a random access memory (Random Access Memory, RAM) 1203.
  • the CPU 1201 can perform various appropriate actions and processes according to computer program instructions stored in the ROM 1202 and/or the RAM 1203, or loaded from the storage unit 1208 into the ROM 1202 and/or the RAM 1203.
  • in the ROM 1202 and/or the RAM 1203, various programs and data required for the operation of the device 1200 can also be stored.
  • the CPU 1201 and the ROM 1202 and/or RAM 1203 are connected to each other via a bus 1204.
  • An input/output (I/O) interface 1205 is also connected to the bus 1204 .
  • the following components are connected to the I/O interface 1205: an input unit 1206, such as a keyboard, a mouse, etc.; an output unit 1207, such as various types of displays, speakers, etc.; a storage unit 1208, such as a magnetic disk, an optical disk, etc.; and a communication unit 1209, such as a network card, a modem, a wireless communication transceiver, and the like.
  • the communication unit 1209 allows the device 1200 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
  • the CPU 1201 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples include, but are not limited to, a Graphics Processing Unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc.; such a component may accordingly be referred to as a computing unit.
  • the CPU 1201 executes the various methods and processes described above, such as the process 800 or 900.
  • process 800 or 900 may be implemented as a computer software program tangibly embodied on a computer-readable medium, such as storage unit 1208 .
  • part or all of the computer program may be loaded and/or installed on the device 1200 via the ROM 1202 and/or RAM 1203 and/or the communication unit 1209.
  • when a computer program is loaded into the ROM 1202 and/or the RAM 1203 and executed by the CPU 1201, one or more steps of the process 800 or 900 described above may be performed.
  • the CPU 1201 may be configured to execute the process 800 or 900 in any other suitable manner (eg, by means of firmware).
  • the device 1200 in FIG. 12 may be implemented as an electronic device (such as the client device 130 or the central device 110), or may be implemented as a chip or a chip system in an electronic device, which is not limited by the embodiments of the present disclosure.
  • Embodiments of the present disclosure also provide a chip, which may include an input interface, an output interface, and a processing circuit.
  • in the foregoing embodiments, signaling or data interaction may be completed by the input interface and the output interface, and the generation and processing of the signaling or data information may be completed by the processing circuit.
  • Embodiments of the present disclosure also provide a chip system, including a processor, configured to support the client device 130 or the central device 110 to implement the functions involved in any of the foregoing embodiments.
  • the chip system may further include a memory for storing necessary program instructions and data, and when the processor runs the program instructions, the device in which the chip system is installed can implement the methods and functions described in any of the above-mentioned embodiments.
  • the chip system may consist of a chip, or may include a chip and other discrete devices.
  • Embodiments of the present disclosure also provide a processor configured to be coupled with a memory, the memory storing instructions, and when the processor executes the instructions, the processor performs the methods and functions involving the client device 130 or the central device 110 in any of the above-mentioned embodiments.
  • Embodiments of the present disclosure also provide a computer program product containing instructions which, when run on a computer, cause the computer to execute the methods and functions related to the client device 130 or the central device 110 in any of the above-mentioned embodiments.
  • Embodiments of the present disclosure also provide a computer-readable storage medium on which computer instructions are stored, and when a processor executes the instructions, the processor performs the methods and functions involving the client device 130 or the central device 110 in any of the above-mentioned embodiments.
  • the various embodiments of the present disclosure may be implemented in hardware or special-purpose circuits, software, logic, or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software, which may be executed by a controller, microprocessor, or other computing device. While various aspects of the embodiments of the present disclosure are shown and described as block diagrams, flowcharts, or some other pictorial representation, it should be understood that the blocks, devices, systems, techniques, or methods described herein can be implemented as, by way of non-limiting example, hardware, software, firmware, special-purpose circuits or logic, general-purpose hardware or controllers or other computing devices, or some combination thereof.
  • the present disclosure also provides at least one computer program product tangibly stored on a non-transitory computer-readable storage medium.
  • the computer program product comprises computer-executable instructions, for example instructions included in program modules, which are executed on a real or virtual processor of a target device to perform the processes/methods described above with reference to the accompanying drawings.
  • program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • the functionality of the program modules may be combined or divided as desired among the program modules.
  • Machine-executable instructions for program modules may be executed within local or distributed devices. In a distributed device, program modules may be located in both local and remote storage media.
  • Computer program code for implementing the methods of the present disclosure may be written in one or more programming languages. Such computer program code can be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing device, so that when the program code is executed by the computer or the other programmable data processing device, the specified functions/operations are implemented.
  • the program code may execute entirely on the computer, partly on the computer, as a stand-alone software package, partly on the computer and partly on a remote computer or entirely on the remote computer or server.
  • computer program code or related data may be carried by any suitable carrier to enable a device, apparatus or processor to perform the various processes and operations described above.
  • carriers include signals, computer readable media, and the like.
  • signals may include electrical, optical, radio, sound, or other forms of propagated signals, such as carrier waves, infrared signals, and the like.
  • a computer readable medium may be any tangible medium that contains or stores a program for or related to an instruction execution system, apparatus, or device.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination thereof. More detailed examples of computer-readable storage media include an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical storage, magnetic storage, or any suitable combination thereof.
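The module decomposition described above for apparatus 1000 (receiving module 1010, determining module 1020, presenting module 1030) could be sketched roughly as follows. This is an illustrative assumption only: the class name, method names, and the injected determining function are not identifiers defined by the disclosure.

```python
from typing import Callable

import numpy as np


class VideoPresentationApparatus:
    """Illustrative sketch of apparatus 1000: receiving, determining and presenting modules."""

    def __init__(self, determine_fn: Callable[[np.ndarray], np.ndarray]):
        # determine_fn stands in for determining module 1020, e.g. a function that
        # extracts the target shooting device's portion of a composite frame.
        self.determine_fn = determine_fn

    def on_composite_frame(self, composite_frame: np.ndarray) -> None:
        # Receiving module 1010 hands each incoming composite frame to the
        # determining module; its output is passed to the presenting module 1030.
        frame_to_present = self.determine_fn(composite_frame)
        self.present(frame_to_present)

    def present(self, frame: np.ndarray) -> None:
        # Placeholder for presenting module 1030; a real client would render the frame.
        print(f"presenting frame of shape {frame.shape}")
```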

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present disclosure provide a video presentation method, an electronic device, a computer storage medium, and a program product. The method comprises: a client device receives a composite video from a central device, wherein an i-th frame of the composite video is obtained on the basis of i-th frames of videos respectively captured at the same moment by a plurality of image capture devices in a distributed image capture system, and i is any positive integer; on the basis of the composite video, the client device determines a video to be presented, said video being associated with at least one image capture device among the plurality of image capture devices; and the client device presents the video. In the foregoing manner, on the basis of the composite video from the central device, the client device may determine a video to be presented for presentation, so that the presentation at the client device is more flexible and diverse, thereby improving the user experience.

Description

Video presentation method, electronic device, computer storage medium and program product
Technical Field
Embodiments of the present disclosure relate to the field of multimedia processing, and more specifically, to a video presentation method, an electronic device, a computer storage medium, and a program product.
Background
There are more and more application scenarios in which multiple different devices cooperate with each other. For example, in a scenario where multiple shooting devices cooperate, multiple channels of video can be displayed on the same screen and multiple channels of images can be spliced in real time, bringing users a scene experience different from that of a single device. Such a scenario, in which multiple shooting devices form a shooting array, may also be called a distributed shooting system.
Images captured by the distributed shooting system can be presented on a client, but the current presentation manner is relatively simple, resulting in a poor user experience.
Summary of the Invention
Embodiments of the present disclosure provide a solution for presenting, on a client device, a video to be presented based on a composite video from a central device.
In a first aspect, a video presentation method is provided. The method includes: a client device receives a composite video from a central device, where the i-th frame of the composite video is obtained based on the i-th frames of videos respectively captured at the same moment by multiple shooting devices in a distributed shooting system, and i is any positive integer; the client device determines, based on the composite video, a video to be presented, the video to be presented being associated with at least one of the multiple shooting devices; and the client device presents the video to be presented.
In this way, the client device can determine, based on the composite video from the central device, the video to be presented for rendering. The video to be presented is therefore no longer passively received but is determined by the client device, which makes the presentation at the client device more flexible and diverse and thereby improves the user experience.
In some embodiments of the first aspect, before the client device receives the composite video from the central device, the method further includes: the central device establishes a connection with each of the multiple shooting devices. In some embodiments, the central device establishes a wireless connection with each of the multiple shooting devices, and the central device and the multiple shooting devices are in the same local area network environment.
In some embodiments of the first aspect, the client device determining the video to be presented based on the composite video includes determining each frame of the video to be presented through the following process: the client device determines, from the i-th frame of the composite video, the i-th frame of the video captured by a target shooting device, the target shooting device being the shooting device located at a target position among the multiple shooting devices; and the client device determines the i-th frame of the video captured by the target shooting device as the i-th frame of the video to be presented.
In this way, the client device can determine, based on the composite video, the video captured by the target shooting device and present it, which simplifies user operations.
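A minimal sketch of this per-frame extraction is given below. It assumes, purely for illustration, that the composite frame is a horizontal side-by-side splice of equally sized per-device frames stored as an H×W×C array and that the target position is known as an index; neither the layout nor the function name is specified by the disclosure.

```python
import numpy as np


def extract_device_frame(composite_frame: np.ndarray,
                         device_index: int,
                         num_devices: int) -> np.ndarray:
    """Return one shooting device's sub-frame from a composite frame.

    Assumes the composite frame is a horizontal concatenation of
    `num_devices` equally wide per-device frames (an illustrative layout,
    not one mandated by the disclosure).
    """
    height, total_width, channels = composite_frame.shape
    width = total_width // num_devices
    start = device_index * width
    return composite_frame[:, start:start + width, :]


# Usage: the i-th frame of the video to be presented is simply the
# target device's slice of the i-th composite frame, e.g.
# presented_frame_i = extract_device_frame(composite_frame_i, target_index, 7)
```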
In some embodiments of the first aspect, the client device determining the video to be presented based on the composite video includes: the client device receives a user input instruction, the user input instruction indicating a target shooting device; and the client device determines each frame of the video to be presented through the following process: determining, from the i-th frame of the composite video, the i-th frame of the video captured by the target shooting device; and determining the i-th frame of the video captured by the target shooting device as the i-th frame of the video to be presented.
In this way, the client device can determine, based on the user input instruction and the composite video, the video captured by the target shooting device corresponding to the user input instruction and present it. This allows the user to view the video of interest as needed, so that the presentation can be more diverse, improving the user experience.
In some embodiments of the first aspect, the method further includes: the client device receives a user's look-around viewing operation for the current frame of the video to be presented; and in response to the look-around viewing operation, the client device presents a look-around image sequence corresponding to the current frame of the video to be presented.
In this way, the client device can present a look-around image sequence based on the user's look-around viewing operation, so that the user can view the look-around effect more intuitively. Such a diversified presentation manner can improve the user experience.
In some embodiments of the first aspect, the client device presenting the look-around image sequence includes: in response to the look-around viewing operation, the client device determines, from the composite video, the frame corresponding to the current frame of the video to be presented; the client device splits the determined frame of the composite video into multiple images respectively corresponding to the multiple shooting devices; obtains the look-around image sequence based on the multiple images; and presents the look-around image sequence.
In this way, the look-around image sequence is obtained from the multiple images captured by the multiple shooting devices, which makes full use of each shooting device in the distributed shooting system and maximizes resource utilization.
In some embodiments of the first aspect, the number of the multiple images is equal to the number of the multiple shooting devices.
In some embodiments of the first aspect, the client device obtaining the look-around image sequence based on the multiple images includes: the client device arranges the multiple images according to the order of the positions of the multiple shooting devices to obtain the look-around image sequence.
In this way, obtaining the look-around image sequence according to the positions of the multiple shooting devices ensures the presentation effect of the look-around image sequence and avoids errors.
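The splitting and ordering steps could look roughly as follows; the same illustrative horizontal side-by-side layout is assumed, and `position_order` (the device indices listed in physical position order) is a hypothetical input, not a parameter defined by the disclosure.

```python
import numpy as np


def split_composite_frame(composite_frame: np.ndarray,
                          num_devices: int) -> list[np.ndarray]:
    """Split a composite frame into one image per shooting device.

    Assumes an illustrative horizontal side-by-side layout of equally wide frames.
    """
    height, total_width, channels = composite_frame.shape
    width = total_width // num_devices
    return [composite_frame[:, k * width:(k + 1) * width, :]
            for k in range(num_devices)]


def build_look_around_sequence(composite_frame: np.ndarray,
                               position_order: list[int]) -> list[np.ndarray]:
    """Arrange the split images by the physical position order of the devices."""
    images = split_composite_frame(composite_frame, len(position_order))
    return [images[k] for k in position_order]
```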
In some embodiments of the first aspect, the client device obtaining the look-around image sequence based on the multiple images includes: the client device arranges the multiple images according to the order of the positions of the multiple shooting devices; and the client device inserts, through a frame interpolation operation, an intermediate frame between every two adjacent images of the multiple images to obtain the look-around image sequence.
In this way, by interpolating frames between adjacent images, the continuity of the look-around can be ensured, image jumps are avoided, the effect viewed by the user remains coherent, and the user experience is improved.
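Below is a minimal sketch of such a frame-interpolation step. A simple linear cross-fade is used only as a stand-in; the disclosure does not prescribe a particular interpolation algorithm.

```python
import numpy as np


def insert_intermediate_frames(images: list[np.ndarray],
                               frames_between: int = 1) -> list[np.ndarray]:
    """Insert blended intermediate frames between every two adjacent images.

    A linear cross-fade is used here purely for illustration of the insertion
    structure; any interpolation method could be substituted.
    """
    sequence: list[np.ndarray] = []
    for left, right in zip(images, images[1:]):
        sequence.append(left)
        for j in range(1, frames_between + 1):
            alpha = j / (frames_between + 1)
            blended = (1 - alpha) * left.astype(np.float32) + alpha * right.astype(np.float32)
            sequence.append(blended.astype(left.dtype))
    sequence.append(images[-1])
    return sequence


# For three input images [A, B, C] and frames_between=1 the result is
# [A, mid(A, B), B, mid(B, C), C], so the sequence is longer than the
# number of shooting devices, consistent with the embodiments above.
```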
In some embodiments of the first aspect, the i-th frame of the composite video is obtained by splicing the i-th frames respectively captured by the multiple shooting devices at the same moment.
In a second aspect, a video presentation method is provided. The method includes: a central device receives videos respectively captured by multiple shooting devices in a distributed shooting system; the central device obtains a composite video based on the videos respectively captured by the multiple shooting devices, where the i-th frame of the composite video is obtained based on the i-th frames of the videos respectively captured by the multiple shooting devices at the same moment, and i is any positive integer; and the central device sends the composite video to a client device.
In some embodiments of the second aspect, the central device obtaining the composite video includes determining each frame of the composite video through the following process: the central device splices the i-th frames of the videos respectively captured by the multiple shooting devices at the same moment to obtain the i-th frame of the composite video.
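A minimal sketch of this splicing step is shown below, assuming for illustration that the per-device frames share the same resolution and are concatenated side by side; the actual splice layout is an implementation choice left open by the disclosure.

```python
import numpy as np


def splice_composite_frame(frames: list[np.ndarray]) -> np.ndarray:
    """Splice the i-th frames from all shooting devices into one composite frame.

    Plain horizontal concatenation is used as an illustrative splicing strategy;
    all frames are assumed to have the same height and channel count.
    """
    return np.hstack(frames)


# Usage: composite_frame_i = splice_composite_frame([frame_i_dev1, frame_i_dev2, frame_i_dev3])
```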
In some embodiments of the second aspect, the method further includes: the central device presents a video captured by a specific shooting device in the distributed shooting system.
In a third aspect, an apparatus for video presentation is provided. The apparatus includes: a receiving module configured to receive a composite video from a central device, where the i-th frame of the composite video is obtained based on the i-th frames of videos respectively captured at the same moment by multiple shooting devices in a distributed shooting system, and i is any positive integer; a determining module configured to determine, based on the composite video, a video to be presented, the video to be presented being associated with at least one of the multiple shooting devices; and a presentation module configured to present the video to be presented.
In some embodiments of the third aspect, the determining module is configured to determine each frame of the video to be presented through the following process: determining, from the i-th frame of the composite video, the i-th frame of the video captured by a target shooting device, the target shooting device being the shooting device located at a target position among the multiple shooting devices; and determining the i-th frame of the video captured by the target shooting device as the i-th frame of the video to be presented.
In some embodiments of the third aspect, the receiving module is further configured to receive a user input instruction, the user input instruction indicating the target shooting device. The determining module is configured to determine each frame of the video to be presented through the following process: determining, from the i-th frame of the composite video, the i-th frame of the video captured by the target shooting device; and determining the i-th frame of the video captured by the target shooting device as the i-th frame of the video to be presented.
In some embodiments of the third aspect, the receiving module is further configured to receive a user's look-around viewing operation for the current frame of the video to be presented; and the presentation module is further configured to present, in response to the look-around viewing operation, a look-around image sequence corresponding to the current frame of the video to be presented.
In some embodiments of the third aspect, the determining module is configured to: in response to the look-around viewing operation, determine, from the composite video, the frame corresponding to the current frame of the video to be presented; split the determined frame of the composite video into multiple images respectively corresponding to the multiple shooting devices; and obtain the look-around image sequence based on the multiple images.
In some embodiments of the third aspect, the number of the multiple images is equal to the number of the multiple shooting devices.
In some embodiments of the third aspect, the determining module is configured to: arrange the multiple images according to the order of the positions of the multiple shooting devices to obtain the look-around image sequence.
In some embodiments of the third aspect, the determining module is configured to: arrange the multiple images according to the order of the positions of the multiple shooting devices; and insert, through a frame interpolation operation, an intermediate frame between every two adjacent images of the multiple images to obtain the look-around image sequence.
In some embodiments of the third aspect, the i-th frame of the composite video is obtained by splicing the i-th frames respectively captured by the multiple shooting devices at the same moment.
In a fourth aspect, an apparatus for video presentation is provided. The apparatus includes: a receiving module configured to receive videos respectively captured by multiple shooting devices in a distributed shooting system; a determining module configured to obtain a composite video based on the videos respectively captured by the multiple shooting devices, where the i-th frame of the composite video is obtained based on the i-th frames of the videos respectively captured by the multiple shooting devices at the same moment, and i is any positive integer; and a sending module configured to send the composite video to a client device.
In some embodiments of the fourth aspect, the determining module is configured to determine each frame of the composite video through the following process: splicing the i-th frames of the videos respectively captured by the multiple shooting devices at the same moment to obtain the i-th frame of the composite video.
In some embodiments of the fourth aspect, the apparatus further includes a presentation module configured to present a video captured by a specific shooting device in the distributed shooting system.
In a fifth aspect, an electronic device is provided. The electronic device includes a transceiver, a processor, and a memory, the memory storing instructions executable by the processor. When the instructions are executed by the processor, the electronic device is caused to: receive, via the transceiver, a composite video from a central device, where the i-th frame of the composite video is obtained based on the i-th frames of videos respectively captured at the same moment by multiple shooting devices in a distributed shooting system, and i is any positive integer; determine, based on the composite video, a video to be presented, the video to be presented being associated with at least one of the multiple shooting devices; and present the video to be presented.
In some embodiments of the fifth aspect, the processor executes the instructions so that the electronic device determines each frame of the video to be presented through the following process: determining, from the i-th frame of the composite video, the i-th frame of the video captured by a target shooting device, the target shooting device being the shooting device located at a target position among the multiple shooting devices; and determining the i-th frame of the video captured by the target shooting device as the i-th frame of the video to be presented.
In some embodiments of the fifth aspect, the processor executes the instructions so that the electronic device: receives, via the transceiver, a user input instruction, the user input instruction indicating the target shooting device; and determines each frame of the video to be presented through the following process: determining, from the i-th frame of the composite video, the i-th frame of the video captured by the target shooting device; and determining the i-th frame of the video captured by the target shooting device as the i-th frame of the video to be presented.
In some embodiments of the fifth aspect, the processor executes the instructions so that the electronic device: receives a user's look-around viewing operation for the current frame of the video to be presented; and in response to the look-around viewing operation, presents a look-around image sequence corresponding to the current frame of the video to be presented.
In some embodiments of the fifth aspect, the processor executes the instructions so that the electronic device: in response to the look-around viewing operation, determines, from the composite video, the frame corresponding to the current frame of the video to be presented; splits the determined frame of the composite video into multiple images respectively corresponding to the multiple shooting devices; obtains the look-around image sequence based on the multiple images; and presents the look-around image sequence.
In some embodiments of the fifth aspect, the number of the multiple images is equal to the number of the multiple shooting devices.
In some embodiments of the fifth aspect, the processor executes the instructions so that the electronic device: arranges the multiple images according to the order of the positions of the multiple shooting devices to obtain the look-around image sequence.
In some embodiments of the fifth aspect, the processor executes the instructions so that the electronic device: arranges the multiple images according to the order of the positions of the multiple shooting devices; and inserts, through a frame interpolation operation, an intermediate frame between every two adjacent images of the multiple images to obtain the look-around image sequence.
In some embodiments of the fifth aspect, the i-th frame of the composite video is obtained by splicing the i-th frames of the videos respectively captured by the multiple shooting devices at the same moment.
In some embodiments of the fifth aspect, the electronic device includes a display screen for presenting the video to be presented or the look-around image sequence.
In a sixth aspect, an electronic device is provided. The electronic device includes a transceiver, a processor, and a memory, the memory storing instructions executable by the processor. When the instructions are executed by the processor, the electronic device is caused to: receive, via the transceiver, videos respectively captured by multiple shooting devices in a distributed shooting system; obtain a composite video based on the videos respectively captured by the multiple shooting devices, where the i-th frame of the composite video is obtained based on the i-th frames of the videos respectively captured by the multiple shooting devices at the same moment, and i is any positive integer; and send, via the transceiver, the composite video to a client device.
In some embodiments of the sixth aspect, the processor executes the instructions so that the electronic device determines each frame of the composite video through the following process: splicing the i-th frames of the videos respectively captured by the multiple shooting devices at the same moment to obtain the i-th frame of the composite video.
In some embodiments of the sixth aspect, the processor executes the instructions so that the electronic device presents a video captured by a specific shooting device in the distributed shooting system.
In some embodiments of the sixth aspect, the electronic device includes a shooting device.
In a seventh aspect, a computer-readable storage medium is provided, on which a computer program is stored. When the computer program is executed by a processor, the operations of the method according to the first aspect or the second aspect or any implementation thereof are implemented.
In an eighth aspect, a chip or a chip system is provided. The chip or chip system includes a processing circuit configured to perform the operations of the method according to the first aspect or the second aspect or any implementation thereof.
In a ninth aspect, a computer program or computer program product is provided. The computer program or computer program product is tangibly stored on a computer-readable medium and includes computer-executable instructions which, when run on a computer, cause the computer to perform the operations of the method according to the first aspect or the second aspect or any implementation thereof.
Brief Description of the Drawings
The above and other features, advantages and aspects of the various embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. In the drawings, the same or similar reference numerals denote the same or similar elements, in which:
Fig. 1 shows a schematic diagram of an example environment in which embodiments of the present disclosure may be implemented;
Fig. 2 shows a schematic diagram of an example scenario in which embodiments of the present disclosure may be implemented;
Fig. 3 shows a schematic interaction diagram of a video presentation process according to some embodiments of the present disclosure;
Fig. 4 shows a schematic flowchart of a process of determining a composite video according to some embodiments of the present disclosure;
Fig. 5 shows a schematic diagram of a manner of determining a composite video according to some embodiments of the present disclosure;
Fig. 6 shows a schematic flowchart of a process of presenting a look-around image sequence according to some embodiments of the present disclosure;
Fig. 7 shows a schematic diagram of frame interpolation according to some embodiments of the present disclosure;
Fig. 8 shows a schematic flowchart of a video presentation process according to some embodiments of the present disclosure;
Fig. 9 shows a schematic flowchart of a video presentation process according to some embodiments of the present disclosure;
Fig. 10 shows a schematic block diagram of an apparatus for video presentation according to some embodiments of the present disclosure;
Fig. 11 shows another schematic block diagram of an apparatus for video presentation according to some embodiments of the present disclosure; and
Fig. 12 shows a schematic block diagram of an example device that can be used to implement embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the protection scope of the present disclosure.
In the description of the embodiments of the present disclosure, the term "including" and similar expressions should be interpreted as open-ended inclusion, that is, "including but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be read as "at least one embodiment". The terms "first", "second", and so on may refer to different or the same objects. Other explicit and implicit definitions may also be included below.
A distributed shooting system may include a shooting array composed of at least two shooting devices. A shooting device can capture images or videos and may also be called an image acquisition device or the like; the distributed shooting system may also be called a distributed camera system, a distributed shooting array, a distributed image acquisition system, or the like. It should be understood that an "image" in the embodiments of the present disclosure may be an image captured by a shooting device or may be a frame of a video captured by a shooting device. In addition, a video may also be referred to as an image stream, a frame stream, a video stream, a media stream, and so on, which is not limited in the present disclosure.
At least two shooting devices in the distributed shooting system can shoot at the same time to obtain more visual information, and can then cooperate with each other to realize same-screen display of multiple channels of images, real-time splicing of multiple channels of images, and the like. The distributed shooting system can be implemented in various scenarios. For example, the distributed shooting system can be implemented as a surround-view shooting array: the multiple shooting devices in the system can be arranged around a target object at certain angles and spacings, and each shooting device is responsible for shooting the target object within a certain field of view, so that when the images captured by the shooting devices are played in sequence, it is as if a human eye, taking the target object as the center, observes the target object from different angles along an arc in one direction.
The distributed shooting system has also been applied to webcasting. In a webcast scenario, the multiple shooting devices of the distributed shooting system can shoot separately, and the anchor can choose the video to be presented so that viewers can view it through client devices. In this process, the content that a viewer views is determined by the anchor; that is to say, the client device is only a passive receiver of the video, so all viewers see exactly the same live content on their different clients. Moreover, if the anchor does not switch between shooting devices, the videos captured by the other shooting devices in the distributed shooting system are wasted, and the videos captured by the multiple shooting devices cannot be fully utilized. This seriously affects the user's experience of watching the live broadcast.
In view of this, the present disclosure provides a video presentation solution in which a client device can receive a composite video obtained from videos captured by multiple shooting devices and then perform local presentation based on the composite video. This can satisfy different users' presentation needs and improves the user experience.
Fig. 1 shows a schematic diagram of an example environment 100 in which embodiments of the present disclosure may be implemented. The example environment 100 shows a central device 110, a distributed shooting system 120, a client device 130-1, a client device 130-2, ..., and a client device 130-N. The distributed shooting system 120 includes multiple shooting devices, such as the shooting device 122-1, the shooting device 122-2, and the shooting device 122-3 in Fig. 1.
For convenience of description, in the embodiments of the present disclosure the shooting device 122-1, the shooting device 122-2, and the shooting device 122-3 are collectively referred to as the shooting devices 122, and the client device 130-1, the client device 130-2, ..., and the client device 130-N are collectively referred to as the client devices 130. It can be understood that although Fig. 1 shows the distributed shooting system 120 as including three shooting devices, in practical applications the number of shooting devices included in the distributed shooting system 120 can be set according to the scenario and the like. A shooting device 122 may be an independent device or may be a peripheral of another electronic device; for example, the shooting device 122 may be implemented as an electronic device with an image acquisition function. The shooting device 122 may include a camera, a video camera, a capture device, a mobile phone, a tablet computer, a wearable device, and so on, which is not limited in the present disclosure.
As shown in Fig. 1, the central device 110 can interact with the client devices 130. In a scenario such as webcasting, the central device 110 may be the electronic device that interacts with the anchor, and a client device 130 may be an electronic device that interacts with a user. That is to say, there is a communication connection between the central device 110 and the client devices 130.
In some embodiments, the central device 110 may interact with the client devices 130 via a server (for example, a streaming media server). In some embodiments, the central device 110 and the client devices 130 may be implemented as electronic devices such as smartphones, tablet computers, and wearable devices. Embodiments of the present disclosure do not limit the number (N) of client devices; for example, in a webcast scenario, the number of client devices may be on the order of hundreds, tens of thousands, or even more.
As shown in Fig. 1, the central device 110 can interact with the distributed shooting system 120; for example, the images or videos captured by the shooting device 122-1, the shooting device 122-2, and the shooting device 122-3 can be transmitted to the central device 110.
There may be a communication connection between the central device 110 and the shooting devices 122, and embodiments of the present disclosure do not limit the connection manner; for example, the connection may be wired or wireless. Wired manners may include, but are not limited to, optical fiber connections, Universal Serial Bus (USB) connections, and the like. Wireless manners may include, but are not limited to, mobile communication technologies (including but not limited to 2G, 3G, 4G, 5G, 6G, etc.), Wi-Fi, Bluetooth, Point to Point (P2P), and the like.
Taking a Wi-Fi connection as an example, the central device 110 and the shooting devices 122 may be in the same local area network environment. The central device 110 can discover, through its distributed shooting system control module (or connection discovery module or another module), the shooting devices 122 located in the same local area network environment, and establish Wi-Fi connections with the shooting devices 122; for example, the central device 110 and the shooting devices 122 may be connected to the same router. It should be noted that the communication connection manners between the central device 110 and different shooting devices 122 may be the same or different. For example, the connection manner between the central device 110 and the shooting device 122-1 may be different from the connection manner between the central device 110 and the shooting device 122-2.
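One way such same-LAN discovery could be realized (the disclosure does not specify a discovery mechanism) is a UDP broadcast handshake. The port number and probe message below are purely hypothetical placeholders; only standard-library socket calls are used.

```python
import socket

DISCOVERY_PORT = 50000                      # hypothetical port
PROBE = b"DISCOVER_SHOOTING_DEVICE"         # hypothetical probe message


def discover_shooting_devices(timeout: float = 2.0) -> list[str]:
    """Broadcast a probe on the local network and collect responder addresses."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    sock.sendto(PROBE, ("255.255.255.255", DISCOVERY_PORT))
    addresses: list[str] = []
    try:
        while True:
            _, (ip, _port) = sock.recvfrom(1024)
            addresses.append(ip)
    except socket.timeout:
        pass
    finally:
        sock.close()
    return addresses
```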
Although Fig. 1 shows the central device 110 as a device independent of the distributed shooting system 120, in some embodiments the central device 110 may be implemented as part of the distributed shooting system 120; for example, the central device 110 may be the electronic device corresponding to the shooting device 122-2.
Embodiments of the present disclosure do not limit the arrangement of the shooting devices in the distributed shooting system 120.
In some embodiments, the shooting device 122-1, the shooting device 122-2, and the shooting device 122-3 may be arranged side by side, so that when the target object is shot, the shooting directions toward the target object are parallel or substantially the same.
In some embodiments, the shooting device 122-1, the shooting device 122-2, and the shooting device 122-3 may be arranged around the target object, so that when the target object is shot, the shooting directions toward the target object form certain angles with each other. With reference to Fig. 2, it is assumed that the distributed shooting system 120 includes seven shooting devices, namely the shooting devices 122-1 to 122-7, and that the electronic device corresponding to the shooting device 122-4 is the central device 110.
In the scene 200 shown in Fig. 2, the seven shooting devices may be installed on a fixed bracket 201, and each of the seven shooting devices can capture images of a target object 202. In a scenario such as webcasting, the target object 202 may be an item to be displayed by the anchor.
In Fig. 2, the fixed bracket 201 is implemented as a ring-shaped bracket, and multiple fixing buckles are arranged on the fixed bracket 201 for fixing the multiple shooting devices 122. After the multiple shooting devices 122 are installed on the fixed bracket 201, the position and angle of each shooting device 122 relative to the center of the fixed bracket 201 (such as the center of the circle on which the ring lies) are fixed. It can be understood that although the multiple shooting devices 122 in Fig. 2 shoot toward the center of the fixed bracket 201, that is to say, the target object 202 is located near the center of the fixed bracket 201, the embodiments of the present disclosure are not limited thereto; for example, the multiple shooting devices 122 may also shoot toward the outside of the fixed bracket 201, which enlarges the field of view so that a panoramic live broadcast can be performed.
In some embodiments, each of the seven shooting devices in Fig. 2 may be implemented as a camera on a smart terminal; for example, when the smart terminal is a mobile phone, seven mobile phones may be installed at the corresponding positions on the fixed bracket 201. In one embodiment, the seven shooting devices are respectively installed on the fixed bracket 201 so that the angles at which the seven shooting devices shoot the target object 202 are fixed, that is, each shooting device can neither be moved nor rotated. In some embodiments, the angle between the centerlines of every two adjacent shooting devices may be fixed; for example, the angle may be set to 20° or another value.
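As a purely illustrative calculation, with seven devices and a fixed 20° angle between adjacent centerlines, the array spans (7 - 1) × 20° = 120° of arc around the target object. The small sketch below lays out such angular positions, taking the middle device (122-4) as the 0° reference, which is an assumption made here for illustration rather than something stated by the disclosure.

```python
def device_angles(num_devices: int = 7, step_deg: float = 20.0) -> list[float]:
    """Angular positions of the devices, centered on the middle device at 0 degrees."""
    center = (num_devices - 1) / 2
    return [(k - center) * step_deg for k in range(num_devices)]


# device_angles() -> [-60.0, -40.0, -20.0, 0.0, 20.0, 40.0, 60.0]
```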
本公开的实施例中，分布式拍摄系统120中相邻两个拍摄设备的拍摄区域可以具有部分重叠。举例而言，拍摄设备122-1拍摄目标物体202得到第一图像，拍摄设备122-2拍摄目标物体202得到第二图像，且第一图像中第一区域与第二图像中第二区域针对的是目标物体202的同一拍摄区域。在一个示例中，第一区域占第一图像的1/4或更大，第二区域占第二图像的1/4或更大。In the embodiments of the present disclosure, the shooting areas of two adjacent shooting devices in the distributed shooting system 120 may partially overlap. For example, the shooting device 122-1 shoots the target object 202 to obtain a first image, and the shooting device 122-2 shoots the target object 202 to obtain a second image, and the first area in the first image and the second area in the second image correspond to the same shooting area of the target object 202. In one example, the first area occupies 1/4 or more of the first image, and the second area occupies 1/4 or more of the second image.
本公开的实施例可以应用于网络直播的场景。为了进行网络直播，主播可以预先准备并固定各个拍摄设备122以形成分布式拍摄系统。例如以如图2所示的方式，将多个拍摄设备122安装在固定支架201上。主播还可以选定中心设备110，例如将拍摄设备122-4对应的电子设备设定为中心设备110。随后，主播可以通过中心设备110创建直播间，例如通过中心设备110连接直播平台的服务器以创建直播间。中心设备110可以向直播平台的服务器请求获取推流地址，如统一资源定位符(Uniform Resource Locator,URL)。推流可以是中心设备110向直播平台的服务器推送音视频流的过程，推流地址是与推流过程相对应的地址，该推流地址的格式等取决于所使用的协议等。类似地，客户端设备130可以通过与推流地址对应的拉流地址从直播平台的服务器获取相应的音视频流，其中拉流可以是客户端设备130将直播平台的服务器上的音视频流拉到本地的过程，拉流地址是与拉流过程相对应的地址，该拉流地址的格式等取决于所使用的协议等。可理解的是，与中心设备110所连接的客户端设备130的数目可能是随着顾客进入或退出直播间的操作而变动的。The embodiments of the present disclosure can be applied to the scenario of live webcasting. For webcasting, the host can prepare and fix the various shooting devices 122 in advance to form a distributed shooting system, for example, install multiple shooting devices 122 on the fixed bracket 201 in the manner shown in FIG. 2. The host can also select the central device 110, for example, set the electronic device corresponding to the shooting device 122-4 as the central device 110. Subsequently, the host can create a live broadcast room through the central device 110, for example, connect to a server of the live broadcast platform through the central device 110 to create the live broadcast room. The central device 110 may request a push-stream address, such as a Uniform Resource Locator (URL), from the server of the live broadcast platform. Stream pushing may be a process in which the central device 110 pushes an audio/video stream to the server of the live broadcast platform, the push-stream address is an address corresponding to the stream pushing process, and the format of the push-stream address depends on the protocol used. Similarly, the client device 130 can obtain the corresponding audio/video stream from the server of the live broadcast platform through a pull-stream address corresponding to the push-stream address, where stream pulling may be a process in which the client device 130 pulls the audio/video stream on the server of the live broadcast platform to the local device, the pull-stream address is an address corresponding to the stream pulling process, and the format of the pull-stream address depends on the protocol used. It can be understood that the number of client devices 130 connected to the central device 110 may change as customers enter or exit the live broadcast room.
可理解的是,可以通过用户进入直播间的操作,使得客户端设备130与中心设备110建立通信连接。在一些实施例中,中心设备110可以将直播间信息发送到客户端设备130。在一些实施例中,客户端设备130可以向直播平台服务器发送信息请求,进而获取来自中心设备110的直播间信息。It can be understood that the client device 130 can establish a communication connection with the central device 110 through the user's operation of entering the live broadcast room. In some embodiments, the central device 110 can send the live room information to the client device 130 . In some embodiments, the client device 130 may send an information request to the live broadcast platform server, and then obtain the live room information from the central device 110 .
示例性地,直播间信息可以包括分布式拍摄系统的系统信息。在一些实施例中,分布式拍摄系统的系统信息可以包括分布式拍摄系统所包括的拍摄设备的数目。例如在如图2所示的场景中,该数目为7。在一些实施例中,分布式拍摄系统的系统信息可以包括各个拍摄设备所拍摄的图像的尺寸,例如宽和高。举例来讲,如果各个拍摄设备拍摄的图像尺寸相等,那么可以包括w和h分别表示单个拍摄设备拍摄的图像的宽和高。在一些实施例中,分布式拍摄系统的系统信息可以包括与中心设备110相关联的拍摄设备的标识。例如在如图2所示的场景中,中心设备110为拍摄设备122-4对应的电子设备,那么与中心设备110相关联的拍摄设备的标识可以为4。可理解,分布式拍摄系统的系统信息还可以包括其他信息等,诸如拍摄设备的分辨率等,这里不再一一罗列。Exemplarily, the live room information may include system information of the distributed shooting system. In some embodiments, the system information of the distributed shooting system may include the number of shooting devices included in the distributed shooting system. For example, in the scenario shown in FIG. 2, the number is seven. In some embodiments, the system information of the distributed shooting system may include the size of images captured by each shooting device, such as width and height. For example, if the images captured by each capturing device have the same size, then w and h may be included to represent the width and height of the image captured by a single capturing device, respectively. In some embodiments, the system information of the distributed camera system may include the identification of the camera device associated with the central device 110 . For example, in the scenario shown in FIG. 2 , the central device 110 is an electronic device corresponding to the photographing device 122 - 4 , and the identifier of the photographing device associated with the central device 110 may be 4 . It can be understood that the system information of the distributed shooting system may also include other information, such as the resolution of the shooting device, etc., which will not be listed here.
示例性地,直播间信息还可以包括对于客户端设备130而言的拉流地址,进而客户端设备130可以通过该拉流地址获取视频。可理解,直播间信息还可以包括其他信息等,诸如直播间地址、直播播放时间等,这里不再一一罗列。Exemplarily, the live broadcast room information may also include a streaming address for the client device 130, and then the client device 130 may obtain the video through the streaming address. It can be understood that the information of the live broadcast room may also include other information, such as the address of the live broadcast room, the broadcast time of the live broadcast, etc., which will not be listed here.
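Purely as an illustration (the disclosure does not define a concrete message format), the live-room information and system information described above could be carried in a simple structure such as the following sketch; all field names here are assumptions introduced for readability.

```python
from dataclasses import dataclass


@dataclass
class LiveRoomInfo:
    """Illustrative container for the live-room information described above."""
    device_count: int       # number of capture devices in the distributed system, e.g. 7 in FIG. 2
    frame_width: int        # w: width of a single device's captured frame
    frame_height: int       # h: height of a single device's captured frame
    central_device_id: int  # identifier of the device associated with the central device, e.g. 4
    pull_url: str           # pull-stream address used by the client device 130
```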
下面将结合图3至图7对本公开的实施例进行较为详细的阐述。The embodiments of the present disclosure will be described in detail below with reference to FIG. 3 to FIG. 7 .
图3示出了根据本公开的一些实施例的视频呈现过程300的示意交互图。图3所示的过程300涉及中心设备110和客户端设备130。FIG. 3 shows a schematic interaction diagram of a video rendering process 300 according to some embodiments of the present disclosure. Process 300 shown in FIG. 3 involves central device 110 and client device 130 .
在过程300中,中心设备110基于由分布式拍摄系统120的多个拍摄设备122分别拍摄的多个视频,确定310复合视频。In the process 300 , the central device 110 determines 310 a composite video based on the multiple videos respectively captured by the multiple capture devices 122 of the distributed capture system 120 .
示例性地,中心设备110确定复合视频的过程可以参照图4所示,图4示出了中心设备110确定复合视频的过程400的示意流程图。Exemplarily, the process of determining the composite video by the central device 110 may refer to FIG. 4 , which shows a schematic flowchart of a process 400 of determining the composite video by the central device 110 .
在过程400中，拍摄设备122进行视频拍摄410。以3个拍摄设备为例，可以假设拍摄设备122-1拍摄得到视频1，假设拍摄设备122-2拍摄得到视频2，假设拍摄设备122-3拍摄得到视频3。In process 400, the shooting devices 122 perform video capture 410. Taking three shooting devices as an example, it can be assumed that the shooting device 122-1 captures video 1, the shooting device 122-2 captures video 2, and the shooting device 122-3 captures video 3.
可选地或者附加地,中心设备110与拍摄设备122可以进行时间同步402。本公开的实施例对时间同步的具体方式不作限定。在一些实施例中,通过时间同步,可以确定中心设备110的本地时钟与拍摄设备122的本地时钟之间的时间同步信息。在一些实施例中,经过时间同步之后,不同的拍摄设备122可以在同样的时间进行图像拍摄。例如,拍摄设备122-1所拍摄的第i帧和拍摄设备122-2所拍摄的第i帧是同时获取的。Optionally or additionally, the central device 110 and the shooting device 122 may perform time synchronization 402 . Embodiments of the present disclosure do not limit the specific manner of time synchronization. In some embodiments, through time synchronization, time synchronization information between the local clock of the central device 110 and the local clock of the photographing device 122 can be determined. In some embodiments, after time synchronization, different capturing devices 122 can capture images at the same time. For example, the i-th frame captured by the capturing device 122-1 and the i-th frame captured by the capturing device 122-2 are acquired simultaneously.
在过程400中,拍摄设备122将拍摄的视频发送420到中心设备110。以3个拍摄设备为例,中心设备110可以获取视频1、视频2和视频3。In the process 400 , the shooting device 122 sends 420 the captured video to the central device 110 . Taking three shooting devices as an example, the central device 110 may acquire video 1, video 2 and video 3.
可选地或者附加地,中心设备110可以对来自拍摄设备122的视频进行预处理422。在一些实施例中,预处理可以是针对部分视频或全部视频。在一些实施例中,预处理可以包括 但不限于:美颜、加水印、打马赛克等。作为一例,加水印可以包括在视频的部分或全部帧上添加如下信息中的全部或部分:主播名称、目标物体的信息、拍摄设备122的标识等。Optionally or additionally, the central device 110 may perform preprocessing 422 on the video from the shooting device 122 . In some embodiments, pre-processing may be for some or all of the video. In some embodiments, preprocessing may include, but is not limited to: beautification, watermarking, mosaicing, etc. As an example, watermarking may include adding all or part of the following information to some or all frames of the video: anchor name, information of the target object, identification of the shooting device 122, and the like.
在过程400中,中心设备110获得430复合视频。以3个拍摄设备为例,中心设备110可以将视频1、视频2和视频3进行合成,以得到复合视频。In process 400, central device 110 obtains 430 composite video. Taking three shooting devices as an example, the central device 110 can synthesize video 1, video 2 and video 3 to obtain a composite video.
在本公开的实施例中,复合视频的第i帧可以是基于拍摄设备122所拍摄的第i帧而获得的。在一些实施例中,可以将各个拍摄设备122所拍摄的第i帧进行合成,以得到复合视频的第i帧。i为任意正整数,这样通过得到复合视频的每一帧,进而可以得到复合视频。In the embodiment of the present disclosure, the i-th frame of the composite video may be obtained based on the i-th frame captured by the capturing device 122 . In some embodiments, the i-th frame captured by each shooting device 122 may be combined to obtain the i-th frame of the composite video. i is any positive integer, so that by obtaining each frame of the composite video, the composite video can be obtained.
在一些实施例中,可以按照拍摄设备122的顺序,将各个拍摄设备122所拍摄的第i帧进行拼接,以得到复合视频的第i帧。举例来讲,假设各个拍摄设备122所拍摄的第i帧具有相同的尺寸,例如宽为w,高为h。那么复合视频的第i帧的宽可以等于各个拍摄设备122所拍摄的第i帧的宽之和,复合视频的第i帧的高等于h。在一些实施例中,拍摄设备122的顺序可以是:以目标物体202为基准,顺时针方向排布的拍摄设备的顺序。结合图2,以目标物体202为中心点,拍摄设备122的顺时针的顺序为:拍摄设备122-1、拍摄设备122-2、拍摄设备122-3、拍摄设备122-4、拍摄设备122-5、拍摄设备122-6和拍摄设备122-7。In some embodiments, according to the order of the shooting devices 122, the i-th frame captured by each shooting device 122 may be spliced to obtain the i-th frame of the composite video. For example, assume that the i-th frame captured by each capturing device 122 has the same size, for example, the width is w and the height is h. Then the width of the i-th frame of the composite video can be equal to the sum of the widths of the i-th frames captured by each shooting device 122, and the height of the i-th frame of the composite video is equal to h. In some embodiments, the order of the photographing devices 122 may be: taking the target object 202 as a reference, the order of the photographing devices arranged clockwise. Referring to FIG. 2 , with the target object 202 as the center point, the clockwise order of the photographing devices 122 is: photographing device 122-1, photographing device 122-2, photographing device 122-3, photographing device 122-4, photographing device 122- 5. The shooting device 122-6 and the shooting device 122-7.
为了简化描述,以3个拍摄设备为例描述获得复合视频的过程。具体地,将拍摄设备122-1拍摄得到的视频1中的各帧表示为f11、f12、…、f1n,将拍摄设备122-2拍摄得到的视频2中的各帧表示为f21、f22、…、f2n,将拍摄设备122-3拍摄得到的视频3中的各帧表示为f31、f32、…、f3n。那么如图5所示,复合视频的第1帧可以是将f11、f21和f31顺序拼接而成,…,复合视频的第n帧可以是将f1n、f2n和f3n顺序拼接而成。In order to simplify the description, three shooting devices are taken as an example to describe the process of obtaining composite video. Specifically, each frame in the video 1 captured by the shooting device 122-1 is represented as f11, f12, ..., f1n, and each frame in the video 2 captured by the shooting device 122-2 is represented as f21, f22, ... , f2n, denote each frame in the video 3 captured by the shooting device 122-3 as f31, f32, . . . , f3n. Then, as shown in FIG. 5, the first frame of the composite video can be formed by sequentially splicing f11, f21 and f31, ..., and the nth frame of the composite video can be formed by sequentially splicing f1n, f2n and f3n.
换句话说，中心设备110可以从多个视频中，按照拍摄设备122的顺序从对应的视频中分别取第一帧，在将所有视频的第一帧取完之后，将多个第一帧按照拍摄设备122的顺序进行拼接，作为复合视频的第一帧。然后中心设备110可以从多个视频中，按照拍摄设备122的顺序从对应的视频中分别取第二帧，在将所有视频的第二帧取完之后，将多个第二帧按照拍摄设备122的顺序进行拼接，作为复合视频的第二帧。如此循环，便可以得到复合视频。In other words, the central device 110 can take the first frame from each of the multiple videos in the order of the shooting devices 122, and after the first frames of all the videos have been taken, splice the multiple first frames in the order of the shooting devices 122 as the first frame of the composite video. Then the central device 110 can take the second frame from each of the multiple videos in the order of the shooting devices 122, and after the second frames of all the videos have been taken, splice the multiple second frames in the order of the shooting devices 122 as the second frame of the composite video. By repeating this cycle, the composite video can be obtained.
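The per-frame splicing described above can be illustrated with the following minimal sketch, which is not part of the original disclosure. It assumes that every capture device delivers same-sized frames as H×W×3 NumPy arrays, already ordered by device position; the function names are hypothetical.

```python
import numpy as np


def compose_frame(per_device_frames):
    """Splice the frames captured at the same moment side by side.

    per_device_frames: list of H x W x 3 arrays ordered by the clockwise
    position of the capture devices (f1i, f2i, ..., fmi).
    Returns one H x (m*W) x 3 composite frame.
    """
    return np.hstack(per_device_frames)


def compose_video(per_device_videos):
    """per_device_videos: list of m videos, each a list of n frames.

    Returns the composite video as a list of n spliced frames; zip(*...)
    groups the i-th frame of every device together.
    """
    return [compose_frame(list(frames_at_i)) for frames_at_i in zip(*per_device_videos)]
```

For three devices as in the example above, compose_video([[f11, ..., f1n], [f21, ..., f2n], [f31, ..., f3n]]) would produce the n composite frames illustrated in FIG. 5.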
可理解的是,上面的实施例对得到复合视频的方式仅是示意,本公开的实施例对此不限定。举例而言,可以按照拍摄设备122的逆序排序来合成,或者,可以按照其他预定的规则进行排序来合成。举例而言,可以沿宽度方向或者沿高度方向进行拼接,或者也可以按照其他的方式进行拼接。It can be understood that, the above embodiments are only illustrative of the way to obtain the composite video, which is not limited by the embodiments of the present disclosure. For example, they may be synthesized according to the reverse order of the photographing devices 122, or may be synthesized according to other predetermined rules. For example, splicing can be performed along the width direction or along the height direction, or can also be spliced in other ways.
在过程300中,中心设备110将复合视频发送320到客户端设备130。In process 300 , central device 110 sends 320 composite video to client device 130 .
在一些实施例中,客户端设备130可以通过拉流地址,经由流媒体服务器获取该复合视频。In some embodiments, the client device 130 can obtain the composite video via the streaming server by pulling the streaming address.
在一些实施例中,中心设备110可以将复合视频进行编码压缩、封装等之后再发送到客户端设备130。示例性地,可以采用H.264等视频压缩技术进行编码压缩。示例性地,可以将视频封装为FLV或TS等流媒体格式。这样,能够减小对网络带宽等的需求,提高传输速率,保证实时性。In some embodiments, the central device 110 may send the composite video to the client device 130 after coding, compressing, encapsulating, and so on. Exemplarily, video compression technologies such as H.264 may be used for encoding and compression. Exemplarily, the video may be encapsulated into a streaming media format such as FLV or TS. In this way, the demand for network bandwidth and the like can be reduced, the transmission rate can be increased, and real-time performance can be guaranteed.
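One possible realization of the encoding, encapsulation and pushing step, not prescribed by the disclosure, is to hand the composite video to FFmpeg for H.264 encoding and FLV encapsulation and push it to the ingest (push-stream) address obtained earlier. The input path and URL below are hypothetical placeholders.

```python
import subprocess


def push_stream(input_path: str, push_url: str) -> None:
    """Encode a local audio/video source with H.264, wrap it in FLV and
    push it to the given push-stream address (e.g. an RTMP URL).
    """
    cmd = [
        "ffmpeg",
        "-re",              # read the input at its native frame rate
        "-i", input_path,   # e.g. output of the composition pipeline
        "-c:v", "libx264",  # H.264 video encoding
        "-c:a", "aac",
        "-f", "flv",        # FLV container, commonly used for RTMP push
        push_url,           # e.g. "rtmp://live.example.com/app/stream-key" (placeholder)
    ]
    subprocess.run(cmd, check=True)
```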
相应地,客户端设备130可以通过解封装、解码解压缩等操作确定复合视频。在一些实施例中,客户端设备130可以通过从直播平台服务器拉流以获取经封装的FLV或TS格式的视频数据,随后可以通过解析等得到编码压缩后的视频数据。进一步地,客户端设备130可以通过解码操作以还原出复合视频。Correspondingly, the client device 130 may determine the composite video through operations such as decapsulation, decoding and decompression. In some embodiments, the client device 130 can obtain the encapsulated video data in FLV or TS format by pulling the stream from the live broadcast platform server, and then obtain encoded and compressed video data through parsing and the like. Further, the client device 130 can perform a decoding operation to restore the composite video.
在过程300中,客户端设备130确定330待呈现视频,其中待呈现视频与至少一个拍摄设备122相关联。进一步地,客户端设备130呈现340该待呈现视频。In process 300 , client device 130 determines 330 a video to present, wherein the video to present is associated with at least one capture device 122 . Further, the client device 130 presents 340 the video to be presented.
本公开的实施例中,呈现视频可以是指将视频的帧进行逐帧显示,或者可以理解为播放视频。In the embodiments of the present disclosure, presenting a video may refer to displaying video frames frame by frame, or may be understood as playing a video.
在一些实施例中,待呈现视频可以为复合视频。由于复合视频的一帧包括由分布式拍摄系统中各个拍摄设备拍摄的图像,因此,可以在客户端设备130处同时呈现出由多个拍摄设备拍摄的关于目标物体202的图像。这样,客户端设备130处的用户可以同时看到目标物体202各个角度的图像,进而能够便于用户进行后续选择,例如查看针对哪个拍摄设备的视频。In some embodiments, the video to be presented may be composite video. Since one frame of the composite video includes images captured by various capture devices in the distributed capture system, the client device 130 can simultaneously present images about the target object 202 captured by multiple capture devices. In this way, the user at the client device 130 can see the images of the target object 202 from various angles at the same time, which can facilitate the user to make subsequent selections, for example, which shooting device to view the video for.
在一些实施例中,待呈现视频可以为分布式拍摄系统中的特定拍摄设备拍摄的视频。具体地,可以从复合视频的每一帧中分离出由特定拍摄设备拍摄的部分,从而确定出待呈现视频。本公开的实施例中,特定拍摄设备可以是以下任一:位于目标位置处的拍摄设备,客户端设备130的用户所指定的拍摄设备等。In some embodiments, the video to be presented may be a video shot by a specific shooting device in the distributed shooting system. Specifically, the part shot by a specific shooting device may be separated from each frame of the composite video, so as to determine the video to be presented. In the embodiment of the present disclosure, the specific shooting device may be any of the following: a shooting device located at the target location, a shooting device designated by the user of the client device 130 , and the like.
在一些实施例中,目标位置可以是中心设备110,待呈现视频可以是中心设备110对应的拍摄设备拍摄的视频。示例性地,中心设备110对应的拍摄设备可以是中心设备110包括的拍摄设备。In some embodiments, the target location may be the central device 110 , and the video to be presented may be a video shot by a shooting device corresponding to the central device 110 . Exemplarily, the photographing device corresponding to the central device 110 may be a photographing device included in the central device 110 .
为了简洁，可以将中心设备110对应的拍摄设备称为中心拍摄设备。在一些实施例中，客户端设备130可以基于分布式拍摄系统中多个拍摄设备的数目(假设为M)以及中心拍摄设备的标识(假设为p)，从复合视频中分离出与中心拍摄设备对应的待呈现视频。For brevity, the shooting device corresponding to the central device 110 may be referred to as the central shooting device. In some embodiments, the client device 130 can separate the video to be presented corresponding to the central shooting device from the composite video based on the number of the multiple shooting devices in the distributed shooting system (assumed to be M) and the identifier of the central shooting device (assumed to be p).
举例来讲，如果复合视频的确定方式是类似于如图5所示的拼接方式。那么，针对复合视频的任一帧(假设为第i帧)，可以从复合视频的第i帧中截取宽度位于[(p-1)×w,p×w]的部分，作为待呈现视频的第i帧。再举例来讲，可以将复合视频的第i帧拆分为M张图像，即恢复出M个拍摄设备拍摄的图像，然后再从M张图像中确定出由中心拍摄设备所拍摄的那一张图像。For example, suppose the composite video is determined in a splicing manner similar to that shown in FIG. 5. Then, for any frame of the composite video (assumed to be the i-th frame), the part whose width lies in [(p-1)×w, p×w] can be cropped from the i-th frame of the composite video as the i-th frame of the video to be presented. As another example, the i-th frame of the composite video can be split into M images, that is, the images captured by the M shooting devices can be restored, and then the image captured by the central shooting device can be determined from the M images.
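A minimal sketch of the width-based cropping described above, assuming the composite frame was spliced horizontally from equal-width frames as in FIG. 5; the helper name and the H×(M·w)×3 array layout are assumptions.

```python
def extract_device_frame(composite_frame, p: int, w: int):
    """Crop the portion captured by device p from a composite frame.

    composite_frame: H x (M*w) x 3 array spliced horizontally
    p: 1-based identifier/position of the target capture device
    w: width of a single device's frame
    """
    return composite_frame[:, (p - 1) * w : p * w]
```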
由于中心设备110一般为主播操作的电子设备,通过此方式,能够使得客户端设备130处的呈现视频与主播所交互的中心设备110上的内容一致。尤其是在主播针对于目标物体202进行语音介绍时,能够使得用户及时查看主播所介绍的具体细节。Since the central device 110 is generally an electronic device operated by the host, in this way, the presentation video at the client device 130 can be consistent with the content on the central device 110 interacted by the host. Especially when the host makes a speech introduction on the target object 202, the user can check the specific details introduced by the host in time.
在一些实施例中,目标位置可以是多个拍摄设备的中间位置,待呈现视频可以是分布式拍摄系统中位于中间位置处的拍摄设备拍摄的视频。假设分布式拍摄系统中多个拍摄设备的数目为M。那么如果M为奇数,位于中间位置处的拍摄设备的编号为(M+1)/2。如果M为偶数,位于中间位置处的拍摄设备的编号可以为M/2或者M/2+1。In some embodiments, the target location may be an intermediate location of multiple shooting devices, and the video to be presented may be a video shot by a shooting device at an intermediate location in a distributed shooting system. Assume that the number of multiple shooting devices in the distributed shooting system is M. Then if M is an odd number, the number of the shooting device at the middle position is (M+1)/2. If M is an even number, the number of the photographing device at the middle position may be M/2 or M/2+1.
通过这种方式,能够由客户端设备130处的用户查看到目标物体202的正面的图像,使得用户能够查看到目标物体202的更多细节。In this way, the image of the front face of the target object 202 can be viewed by the user at the client device 130 , so that the user can view more details of the target object 202 .
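The middle-position rule described above can be expressed as the following small sketch; the choice between M/2 and M/2+1 for an even M is left open, consistent with the text.

```python
def middle_device_number(M: int, prefer_lower: bool = True) -> int:
    """Return the number of the capture device at the middle position.

    For odd M this is (M + 1) / 2; for even M either M / 2 or M / 2 + 1
    may be used (selected here via prefer_lower).
    """
    if M % 2 == 1:
        return (M + 1) // 2
    return M // 2 if prefer_lower else M // 2 + 1
```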
在一些实施例中,待呈现视频可以是用户指定的拍摄设备拍摄的视频。具体的,客户端设备130可以接收用户的输入指令,该输入指令可以指示多个拍摄设备中的哪个拍摄设备。In some embodiments, the video to be presented may be a video shot by a user-specified shooting device. Specifically, the client device 130 may receive an input instruction from the user, and the input instruction may indicate which shooting device among the multiple shooting devices.
举例而言,用户可以输入拍摄设备编号,如“2”,从而客户端设备130能够获取该输入指令。举例而言,在客户端设备130正在播放某个拍摄设备(假设编号为n1)拍摄的视频的过程中,用户可以通过左滑或者右滑来确定拍摄设备编号。例如,左滑指示拍摄设备编号减一,即指定拍摄设备编号为n1-1。例如,右滑指示拍摄设备编号加一,即指定拍摄设备编号为n1+1。For example, the user may input a shooting device number, such as "2", so that the client device 130 can obtain the input instruction. For example, when the client device 130 is playing a video shot by a certain shooting device (assume that the number is n1), the user can swipe left or right to determine the number of the shooting device. For example, sliding to the left indicates that the shooting device number is reduced by one, that is, the designated shooting device number is n1-1. For example, sliding right indicates that the shooting device number is increased by one, that is, the designated shooting device number is n1+1.
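A sketch of how the swipe-based selection above might map to a shooting-device number follows; clamping the result to the valid range [1, M] is an assumption not stated in the text (wrapping around would be an equally plausible design choice).

```python
def device_after_swipe(n1: int, direction: str, M: int) -> int:
    """Map a swipe gesture to a new capture-device number.

    n1: number of the currently presented device (1..M)
    direction: "left" decrements the number, "right" increments it,
    following the example described above.
    """
    n2 = n1 - 1 if direction == "left" else n1 + 1
    return max(1, min(M, n2))  # clamp to the valid range (assumption)
```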
如此，本公开的实施例中，客户端设备130处的用户可以通过输入指令来确定要呈现的是哪个拍摄设备的视频，实现了在客户端设备130上呈现的视频的切换，提高了用户的自主性，更能满足客户的需求。这样，客户端设备130上呈现的视频不需要与主播的电子设备上的视频一致，不再是被动的视频接收方，相反，能够由用户自主选择感兴趣的视频，同时不会影响主播的电子设备和其他的客户端设备130。In this way, in the embodiments of the present disclosure, the user at the client device 130 can determine, by inputting an instruction, which shooting device's video is to be presented, which realizes switching of the video presented on the client device 130, improves the user's autonomy, and better meets the needs of customers. In this way, the video presented on the client device 130 does not need to be consistent with the video on the host's electronic device, and the client device 130 is no longer a passive video receiver; instead, the user can independently select the video of interest without affecting the host's electronic device or other client devices 130.
另外,本公开的实施例对中心设备110上所呈现的视频不作限定,例如可以是中心拍摄设备拍摄的视频。In addition, the embodiment of the present disclosure does not limit the video presented on the central device 110, for example, it may be a video captured by the central shooting device.
可选地或附加地，客户端设备130可以接收350用户的环视查看操作。进一步地，客户端设备130可以呈现360环视图像序列。Optionally or additionally, the client device 130 may receive 350 a look-around viewing operation from the user. Further, the client device 130 may present 360 a look-around image sequence.
在一些实施例中,用户可以点击客户端设备130的界面上的特定区域以执行环视查看操作,例如点击特定区域上的“环视”按钮。在一些实施例中,用户可以在客户端设备130的界面上操作特定手势以执行环视查看操作,例如特定手势为画圆圈或半弧形。In some embodiments, the user can click a specific area on the interface of the client device 130 to perform a look-around operation, for example, click a "look around" button on the specific area. In some embodiments, the user can operate a specific gesture on the interface of the client device 130 to perform a look-around operation, for example, the specific gesture is drawing a circle or a semi-arc.
图6示出了根据本公开的一些实施例的呈现环视图像序列的过程600的示意流程图。FIG. 6 shows a schematic flowchart of a process 600 of presenting a sequence of surround-view images according to some embodiments of the present disclosure.
在框610,客户端设备130响应于环视查看操作,确定与待呈现视频的当前帧对应的复合视频的当前帧。At block 610, the client device 130 determines a current frame of the composite video corresponding to the current frame of the video to be presented in response to the look-around viewing operation.
具体地，假设待呈现视频的当前帧为第t帧，那么可以获取复合视频的第t帧，可理解，该复合视频的第t帧包括由分布式拍摄系统中的每个拍摄设备所拍摄的图像。Specifically, assuming that the current frame of the video to be presented is the t-th frame, the t-th frame of the composite video can be obtained. It can be understood that the t-th frame of the composite video includes the images captured by each shooting device in the distributed shooting system.
在框620,客户端设备130将复合视频的当前帧拆分为多张图像。At block 620, the client device 130 splits the current frame of the composite video into multiple images.
具体地,可以基于多个拍摄设备的数目进行拆分,也就是说多张图像的数目等于多个拍摄设备的数目。在一些实施例中,多张图像是分别由多个拍摄设备所拍摄的图像。Specifically, splitting may be performed based on the number of multiple shooting devices, that is to say, the number of multiple images is equal to the number of multiple shooting devices. In some embodiments, the multiple images are images captured by multiple capturing devices respectively.
在框630,客户端设备130基于多张图像获得环视图像序列。At block 630, the client device 130 obtains a sequence of surround view images based on the plurality of images.
在一些实施例中,可以将多张图像按照拍摄设备的位置进行排序以获得环视图像序列。也就是说,可以按照多个拍摄设备的位置顺序,将多张图像顺序排列以获得环视图像序列。举例而言,如图2所示的场景,拍摄设备122-i拍摄的图像位于该环视图像序列的第i个位置,i为1至7中任一值。In some embodiments, the multiple images may be sorted according to the position of the shooting device to obtain a sequence of surround-view images. That is to say, multiple images may be arranged sequentially according to the sequence of positions of multiple shooting devices to obtain a sequence of surround-view images. For example, in the scene shown in FIG. 2 , the image captured by the shooting device 122 - i is located at the ith position of the look-around image sequence, and i is any value from 1 to 7.
在一些实施例中，可以按照多个拍摄设备的位置顺序将多张图像顺序排列，并在每两张相邻的图像之间插入至少一帧，以形成环视图像序列。可理解，在该实施例中，环视图像序列中的图像数目大于多个拍摄设备的数目。In some embodiments, the multiple images may be arranged sequentially according to the position sequence of the multiple shooting devices, and at least one frame may be inserted between every two adjacent images to form a look-around image sequence. It can be understood that, in this embodiment, the number of images in the look-around image sequence is greater than the number of the multiple shooting devices.
具体的,可以通过插帧操作在相邻的两张图像之间插入至少一帧。插入的至少一帧可以被称为虚拟帧或中间帧,且本公开的实施例对插帧的方式不作限定。插帧也可以被称为补帧或动画补帧,可以通过局部插值等算法来获得虚拟帧。如此通过插帧处理,在相邻两张图像之间插入虚拟帧,能够确保相邻两张图像之间的图像变化的连贯性。Specifically, at least one frame may be inserted between two adjacent images through a frame insertion operation. The inserted at least one frame may be referred to as a virtual frame or an intermediate frame, and embodiments of the present disclosure do not limit the manner of frame insertion. Frame interpolation can also be called supplementary frame or animation supplementary frame, and virtual frames can be obtained through algorithms such as local interpolation. In this way, through the frame insertion process, a virtual frame is inserted between two adjacent images, which can ensure the continuity of image changes between the two adjacent images.
举例来说,假设当前帧为第t帧,多个拍摄设备的数目为m,且多张图像是由多个拍摄设备分别拍摄的图像,表示为f1t、f2t、f3t、…、fmt。如图7所示,可以在每两张图像之间插入4个虚拟帧,从而得到环视图像序列700。这样,通过插帧之后得到的环视图像序列包括的图像数目为:m+4×(m-1)。For example, assuming that the current frame is the tth frame, the number of multiple shooting devices is m, and the multiple images are images captured by the multiple shooting devices, denoted as f1t, f2t, f3t, . . . , fmt. As shown in FIG. 7 , four virtual frames may be inserted between every two images, so as to obtain a look-around image sequence 700 . In this way, the number of images included in the surround-view image sequence obtained after frame interpolation is: m+4×(m-1).
如此,能够扩充环视图像序列中的图像数目,使得在之后呈现环视图像序列时更加流畅。In this way, the number of images in the look-around image sequence can be expanded, so that the look-around image sequence can be presented more smoothly later.
应注意,本公开的实施例中对插帧处理时所插入的虚拟帧的数目不作限定。在一些实施例中,每相邻两帧之间插入的虚拟帧的数目可以是预设值,例如图7中该预设值为4,可理 解该预设值也可以是其他数值。该预设值可以根据相邻两个拍摄设备之间的角度、拍摄设备的数目等进行预先设置。在一些实施例中,不同相邻两帧之间插入的虚拟帧的数目可以相等或不相等,例如在f1t与f2t之间插入的虚拟帧具有第一数目,而在f2t与f3t之间插入的虚拟帧具有第二数目,且第一数目可以等于或不等于第二数目。It should be noted that in the embodiments of the present disclosure, there is no limitation on the number of virtual frames to be inserted during the frame insertion process. In some embodiments, the number of virtual frames inserted between every two adjacent frames may be a preset value, for example, the preset value is 4 in FIG. 7 , and it can be understood that the preset value may also be other values. The preset value can be preset according to the angle between two adjacent shooting devices, the number of shooting devices, and the like. In some embodiments, the number of virtual frames inserted between different adjacent two frames may be equal or unequal, for example, the virtual frames inserted between f1t and f2t have a first number, and the virtual frames inserted between f2t and f3t The virtual frames have a second number, and the first number may or may not be equal to the second number.
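The splitting and frame-insertion steps of blocks 620 and 630 above can be sketched as follows. The sketch assumes m equal-width images spliced horizontally and uses simple linear cross-fading as a stand-in for whatever interpolation algorithm an implementation actually uses, since the disclosure does not mandate one.

```python
import numpy as np


def surround_sequence(composite_frame, m: int, k: int = 4):
    """Build a look-around image sequence from the composite frame
    corresponding to the currently presented frame.

    composite_frame: H x (m*w) x 3 array spliced from m device frames
    m: number of capture devices
    k: number of virtual frames inserted between adjacent images
    Returns a list of m + k*(m-1) images.
    """
    # Split the composite frame back into the m per-device images.
    images = np.split(composite_frame, m, axis=1)
    sequence = [images[0]]
    for prev, nxt in zip(images, images[1:]):
        # Linear cross-fade as a simple placeholder interpolation.
        for j in range(1, k + 1):
            alpha = j / (k + 1)
            virtual = ((1 - alpha) * prev + alpha * nxt).astype(prev.dtype)
            sequence.append(virtual)
        sequence.append(nxt)
    return sequence
```

With k = 4, the resulting sequence has m + 4×(m-1) images, matching the count given in the example above.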
在框640,客户端设备130顺次呈现环视图像序列中的每张图像。At block 640, the client device 130 sequentially presents each image in the sequence of look-around images.
在一些实施例中,客户端设备130还可以基于用户的左右滑动操作,来呈现环视图像序列中的对应图像。如此,用户可以根据自己的需求来查看目标物体202的环视效果。In some embodiments, the client device 130 may also present corresponding images in the look-around image sequence based on the user's left and right sliding operations. In this way, the user can view the look-around effect of the target object 202 according to his own needs.
由此可见,本公开的实施例提供了基于分布式拍摄系统的实时环视直播方案,客户端设备能够从中心设备接收复合视频,进而客户端设备可以根据实际需要或用户指令等呈现待呈现视频或者环视效果。以此方式,客户端设备处的用户能够自主地确定呈现的内容,客户端设备不再是单一的被动的内容接收方,这样能够极大地提升用户的交互体验。It can be seen that the embodiments of the present disclosure provide a real-time surround view live broadcast solution based on a distributed shooting system. The client device can receive composite video from the central device, and then the client device can present the video to be presented or the video to be presented according to actual needs or user instructions. Surround effect. In this way, the user at the client device can independently determine the presented content, and the client device is no longer a single passive content receiver, which can greatly improve the user's interactive experience.
图8示出了根据本公开的一些实施例的视频呈现过程800的示意流程图。过程800可以由如图1所示的客户端设备130执行。FIG. 8 shows a schematic flowchart of a video rendering process 800 according to some embodiments of the present disclosure. Process 800 may be performed by client device 130 as shown in FIG. 1 .
在框810，客户端设备130从中心设备接收复合视频，该复合视频的第i帧是基于分布式拍摄系统中多个拍摄设备在同一时刻各自拍摄的视频的第i帧而获得的，i为任意正整数。在框820，客户端设备130基于复合视频确定待呈现视频，该待呈现视频与多个拍摄设备中的至少一个拍摄设备相关联。在框830，客户端设备130呈现该待呈现视频。At block 810, the client device 130 receives a composite video from the central device, where the i-th frame of the composite video is obtained based on the i-th frames of the videos respectively captured by multiple shooting devices in the distributed shooting system at the same moment, and i is any positive integer. At block 820, the client device 130 determines a video to be presented based on the composite video, the video to be presented being associated with at least one shooting device among the multiple shooting devices. At block 830, the client device 130 presents the video to be presented.
在一些实施例中,复合视频的第i帧是通过将多个拍摄设备在同一时刻各自拍摄的第i帧拼接而获得的。举例而言,在同一时刻,多个拍摄设备分别拍摄得到对应的多帧,那么可以是由中心设备将对应的多帧进行拼接,并将拼接后的帧作为复合视频中的与该时刻对应的帧。In some embodiments, the i-th frame of the composite video is obtained by splicing the i-th frames captured by multiple shooting devices at the same time. For example, at the same moment, multiple shooting devices capture corresponding multi-frames, then the central device may splice the corresponding multi-frames, and use the spliced frames as the composite video corresponding to the moment. frame.
在一些实施例中，客户端设备130基于复合视频确定待呈现视频可以包括通过下述过程确定所述待呈现视频的每一帧：客户端设备130从复合视频的第i帧中确定由目标拍摄设备拍摄的视频的第i帧，目标拍摄设备为多个拍摄设备中位于目标位置处的拍摄设备；以及将目标拍摄设备拍摄的视频的第i帧确定为待呈现视频的第i帧。可选地，目标位置可以是多个拍摄设备的中间位置。In some embodiments, the client device 130 determining the video to be presented based on the composite video may include determining each frame of the video to be presented through the following process: the client device 130 determines, from the i-th frame of the composite video, the i-th frame of the video captured by a target shooting device, the target shooting device being the shooting device located at a target position among the multiple shooting devices; and determines the i-th frame of the video captured by the target shooting device as the i-th frame of the video to be presented. Optionally, the target position may be the middle position of the multiple shooting devices.
在一些实施例中，客户端设备130基于复合视频确定待呈现视频包括：客户端设备130接收用户输入指令，用户输入指令指示目标拍摄设备；以及通过下述过程确定所述待呈现视频的每一帧：从复合视频的第i帧中确定由目标拍摄设备拍摄的视频的第i帧；以及将目标拍摄设备拍摄的视频的第i帧确定为待呈现视频的第i帧。这样，通过将i依次取值1、2、…，可以逐帧地得到待呈现视频的各帧。可选地，用户的输入指令可以是基于用户在界面上的滑动操作的，滑动操作可以是左滑或者右滑以分别指示通过位置左移或右移来确定目标拍摄设备。In some embodiments, the client device 130 determining the video to be presented based on the composite video includes: the client device 130 receiving a user input instruction, the user input instruction indicating a target shooting device; and determining each frame of the video to be presented through the following process: determining, from the i-th frame of the composite video, the i-th frame of the video captured by the target shooting device; and determining the i-th frame of the video captured by the target shooting device as the i-th frame of the video to be presented. In this way, by taking i as 1, 2, ... in turn, each frame of the video to be presented can be obtained frame by frame. Optionally, the user's input instruction may be based on a sliding operation of the user on the interface, and the sliding operation may be a left slide or a right slide to indicate that the target shooting device is determined by shifting the position left or right, respectively.
举例而言,对于复合视频中的任一帧,如第i帧,可以从该复合视频的第i帧中确定由目标拍摄设备所拍摄的视频的第i帧作为待呈现视频的第i帧。一般地,待呈现视频的第i帧的图像尺寸小于复合视频的第i帧的图像尺寸。For example, for any frame in the composite video, such as the i-th frame, the i-th frame of the video captured by the target shooting device may be determined from the i-th frame of the composite video as the i-th frame of the video to be presented. Generally, the image size of the i-th frame of the video to be presented is smaller than the image size of the i-th frame of the composite video.
可选地或附加地,如图8所示,在框840,客户端设备130接收用户针对待呈现视频的当前帧的环视查看操作。在框850,客户端设备130响应于环视查看操作,呈现与待呈现视频的当前帧对应的环视图像序列。Optionally or additionally, as shown in FIG. 8 , at block 840 , the client device 130 receives a user's look-around operation on the current frame of the video to be presented. At block 850, client device 130 presents a sequence of look-around images corresponding to a current frame of video to be presented in response to a look-around viewing operation.
在一些实施例中,客户端设备130响应于环视查看操作呈现环视图像序列可以包括:客 户端设备130响应于环视查看操作,从复合视频中确定与待呈现视频的当前帧对应的帧;将所确定的复合视频中与待呈现视频的当前帧对应的帧拆分为与多个拍摄设备分别对应的多个图像;基于多个图像获得环视图像序列;以及呈现该环视图像序列。示例性地,多个图像的数目等于多个拍摄设备的数目。In some embodiments, the client device 130 presenting the look-around image sequence in response to the look-around viewing operation may include: the client device 130 determining from the composite video a frame corresponding to the current frame of the video to be presented in response to the look-around viewing operation; The determined composite video frame corresponding to the current frame of the video to be presented is divided into multiple images respectively corresponding to multiple shooting devices; obtaining a surround view image sequence based on the multiple images; and presenting the surround view image sequence. Exemplarily, the number of multiple images is equal to the number of multiple shooting devices.
在一些实施例中,客户端设备130基于多个图像获得环视图像序列可以包括:客户端设备130按照多个拍摄设备的位置顺序,排列多个图像以获得环视图像序列。In some embodiments, the client device 130 obtaining the sequence of surround-view images based on the multiple images may include: the client device 130 arranging the multiple images according to the order of positions of the multiple shooting devices to obtain the sequence of surround-view images.
在一些实施例中,客户端设备130基于多个图像获得环视图像序列包括:客户端设备130按照多个拍摄设备的位置顺序,排列多个图像;通过插帧操作在多个图像的每两个相邻图像之间插入中间帧,以获得环视图像序列。In some embodiments, the client device 130 obtaining the surround-view image sequence based on the multiple images includes: the client device 130 arranges the multiple images according to the order of the positions of the multiple shooting devices; Intermediate frames are inserted between adjacent images to obtain a sequence of look-around images.
图9示出了根据本公开的一些实施例的视频呈现过程900的示意流程图。过程900可以由如图1所示的中心设备110执行。FIG. 9 shows a schematic flowchart of a video rendering process 900 according to some embodiments of the present disclosure. The process 900 may be executed by the central device 110 as shown in FIG. 1 .
在框910,中心设备110接收分布式拍摄系统中多个拍摄设备各自所拍摄的视频。在框920,中心设备110基于多个拍摄设备各自所拍摄的视频,获得复合视频,复合视频的第i帧是基于多个拍摄设备在同一时刻各自所拍摄的视频的第i帧而获得的,i为任意正整数。在框930,中心设备110将复合视频发送到客户端设备。In block 910, the central device 110 receives the videos captured by each of the multiple capture devices in the distributed capture system. In block 920, the central device 110 obtains a composite video based on the video captured by each of the multiple shooting devices, and the i-th frame of the composite video is obtained based on the i-th frame of the video captured by the multiple shooting devices at the same time, i is any positive integer. At block 930, the central device 110 sends the composite video to the client devices.
在一些实施例中，中心设备110获得复合视频可以包括通过下述过程确定所述复合视频的每一帧：中心设备110将多个拍摄设备在同一时刻各自拍摄的视频的第i帧拼接，以获得复合视频的第i帧。举例而言，在同一时刻，多个拍摄设备分别拍摄得到对应的多帧，那么中心设备110可以将对应的多帧进行拼接，并将拼接后的帧作为复合视频中的与该时刻对应的帧。In some embodiments, obtaining the composite video by the central device 110 may include determining each frame of the composite video through the following process: the central device 110 splices the i-th frames of the videos respectively captured by the multiple shooting devices at the same moment to obtain the i-th frame of the composite video. For example, at the same moment, the multiple shooting devices respectively capture corresponding frames; the central device 110 can then splice the corresponding frames and use the spliced frame as the frame corresponding to that moment in the composite video.
在一些实施例中,还可以包括:在中心设备110处呈现分布式拍摄系统中的特定拍摄设备拍摄的视频。举例而言,中心设备110获取分布式拍摄系统中多个拍摄设备各自所拍摄的视频之后,呈现目标拍摄设备所拍摄的视频。举例而言,目标拍摄设备可以是分布式拍摄系统中与用户进行交互的设备,或者目标拍摄设备可以是分布式拍摄系统中位于中间位置处的设备。In some embodiments, it may further include: presenting at the central device 110 a video captured by a specific capture device in the distributed capture system. For example, the central device 110 presents the video captured by the target capture device after acquiring the videos captured by the multiple capture devices in the distributed capture system. For example, the target photographing device may be a device interacting with the user in the distributed photographing system, or the target photographing device may be a device at an intermediate position in the distributed photographing system.
图10示出了根据本公开的一些实施例的用于视频呈现的装置1000的示意框图。装置1000可以被实现为或者被包括在图1的客户端设备130中。Fig. 10 shows a schematic block diagram of an apparatus 1000 for video presentation according to some embodiments of the present disclosure. The apparatus 1000 may be implemented as or included in the client device 130 of FIG. 1 .
装置1000可以包括多个模块,以用于执行如图8中所讨论的过程800中的对应步骤。如图10所示,装置1000包括接收模块1010、确定模块1020和呈现模块1030。接收模块1010被配置为从中心设备接收复合视频,复合视频的第i帧是基于分布式拍摄系统中多个拍摄设备在同一时刻各自拍摄的视频的第i帧而获得的,i为任意正整数。确定模块1020被配置为基于复合视频确定待呈现视频,待呈现视频与多个拍摄设备中的至少一个拍摄设备相关联。呈现模块1030被配置为呈现该待呈现视频。 Apparatus 1000 may include a plurality of modules for performing corresponding steps in process 800 as discussed in FIG. 8 . As shown in FIG. 10 , the device 1000 includes a receiving module 1010 , a determining module 1020 and a presenting module 1030 . The receiving module 1010 is configured to receive the composite video from the central device, the i-th frame of the composite video is obtained based on the i-th frame of the video captured by multiple shooting devices in the distributed shooting system at the same time, and i is any positive integer . The determining module 1020 is configured to determine a video to be presented based on the composite video, and the video to be presented is associated with at least one capturing device among the plurality of capturing devices. The presentation module 1030 is configured to present the video to be presented.
在一些实施例中,复合视频的第i帧是通过将多个拍摄设备在同一时刻各自拍摄的第i帧拼接而获得的。In some embodiments, the i-th frame of the composite video is obtained by splicing the i-th frames captured by multiple shooting devices at the same time.
在一些实施例中，确定模块1020可以被配置为通过下述过程确定所述待呈现视频的每一帧：从复合视频的第i帧中确定由目标拍摄设备拍摄的视频的第i帧，目标拍摄设备为多个拍摄设备中位于目标位置处的拍摄设备；以及将目标拍摄设备拍摄的视频的第i帧确定为待呈现视频的第i帧。In some embodiments, the determining module 1020 may be configured to determine each frame of the video to be presented through the following process: determining, from the i-th frame of the composite video, the i-th frame of the video captured by a target shooting device, the target shooting device being the shooting device located at a target position among the multiple shooting devices; and determining the i-th frame of the video captured by the target shooting device as the i-th frame of the video to be presented.
在一些实施例中，接收模块1010还可以被配置为接收用户输入指令，用户输入指令指示目标拍摄设备。确定模块1020可以被配置为通过下述过程确定所述待呈现视频的每一帧：从复合视频的第i帧中确定由目标拍摄设备拍摄的视频的第i帧；以及将目标拍摄设备拍摄的视频的第i帧确定为待呈现视频的第i帧。In some embodiments, the receiving module 1010 may also be configured to receive a user input instruction, where the user input instruction indicates a target shooting device. The determining module 1020 may be configured to determine each frame of the video to be presented through the following process: determining, from the i-th frame of the composite video, the i-th frame of the video captured by the target shooting device; and determining the i-th frame of the video captured by the target shooting device as the i-th frame of the video to be presented.
在一些实施例中,接收模块1010还可以被配置为接收用户针对待呈现视频的当前帧的环视查看操作。呈现模块1030还可以被配置为响应于环视查看操作,呈现与待呈现视频的当前帧对应的环视图像序列。In some embodiments, the receiving module 1010 may also be configured to receive a user's look-around operation on the current frame of the video to be presented. The presenting module 1030 may also be configured to present a surround view image sequence corresponding to the current frame of the video to be presented in response to the surround view operation.
在一些实施例中，确定模块1020可以被配置为：响应于环视查看操作，从复合视频中确定与待呈现视频的当前帧对应的帧；将所确定的复合视频的帧拆分为与多个拍摄设备分别对应的多个图像；以及基于多个图像获得环视图像序列。In some embodiments, the determining module 1020 may be configured to: in response to the look-around viewing operation, determine, from the composite video, the frame corresponding to the current frame of the video to be presented; split the determined frame of the composite video into multiple images respectively corresponding to the multiple shooting devices; and obtain a look-around image sequence based on the multiple images.
在一些实施例中,多个图像的数目等于多个拍摄设备的数目。In some embodiments, the number of the plurality of images is equal to the number of the plurality of photographing devices.
在一些实施例中,确定模块1020可以被配置为:按照多个拍摄设备的位置顺序,排列多个图像以获得环视图像序列。In some embodiments, the determining module 1020 may be configured to: arrange the multiple images according to the sequence of positions of the multiple shooting devices to obtain a sequence of surround-view images.
在一些实施例中,确定模块1020可以被配置为:按照多个拍摄设备的位置顺序,排列多个图像;以及通过插帧操作在多个图像的每两个相邻图像之间插入中间帧,以获得环视图像序列。环视图像序列中的图像数目大于多个拍摄设备的数目。In some embodiments, the determination module 1020 may be configured to: arrange the multiple images according to the position order of the multiple shooting devices; and insert an intermediate frame between every two adjacent images of the multiple images through a frame insertion operation, to obtain a look-around image sequence. The number of images in the look-around image sequence is greater than the number of multiple shooting devices.
示例性地,图10中的装置1000可以被实现为客户端设备130,或者可以被实现为客户端设备130中的芯片或芯片系统,本公开的实施例对此不限定。图10中的装置1000能够用于实现上述结合图3至图9中客户端设备130所述的各个过程,为了简洁,这里不再赘述。Exemplarily, the apparatus 1000 in FIG. 10 may be implemented as the client device 130, or may be implemented as a chip or chip system in the client device 130, which is not limited by the embodiment of the present disclosure. The apparatus 1000 in FIG. 10 can be used to implement the processes described above in conjunction with the client device 130 in FIG. 3 to FIG. 9 , and details are not repeated here for brevity.
图11示出了根据本公开的一些实施例的用于视频呈现的装置1100的另一示意框图。装置1100可以被实现为或者被包括在图1的中心设备110中。Fig. 11 shows another schematic block diagram of an apparatus 1100 for video presentation according to some embodiments of the present disclosure. The apparatus 1100 may be implemented as or included in the central device 110 in FIG. 1 .
装置1100可以包括多个模块,以用于执行如图9中所讨论的过程900中的对应步骤。如图11所示,装置1100包括接收模块1110、确定模块1120和发送模块1130。接收模块1110被配置为接收分布式拍摄系统中多个拍摄设备各自所拍摄的视频。确定模块1120被配置为基于多个拍摄设备各自所拍摄的视频,获得复合视频,复合视频的第i帧是基于多个拍摄设备在同一时刻各自所拍摄的视频的第i帧而获得的,i为任意正整数。发送模块1130被配置为将复合视频发送到客户端设备。 Apparatus 1100 may include a plurality of modules for performing corresponding steps in process 900 as discussed in FIG. 9 . As shown in FIG. 11 , the device 1100 includes a receiving module 1110 , a determining module 1120 and a sending module 1130 . The receiving module 1110 is configured to receive videos captured by multiple capturing devices in the distributed capturing system. The determining module 1120 is configured to obtain a composite video based on the video captured by each of the multiple shooting devices, the i-th frame of the composite video is obtained based on the i-th frame of the video captured by the multiple shooting devices at the same time, i is any positive integer. The sending module 1130 is configured to send the composite video to the client device.
在一些实施例中，确定模块1120可以被配置为通过下述过程确定所述复合视频的每一帧：将多个拍摄设备在同一时刻各自拍摄的视频的第i帧拼接，以获得复合视频的第i帧。In some embodiments, the determining module 1120 may be configured to determine each frame of the composite video through the following process: splicing the i-th frames of the videos respectively captured by the multiple shooting devices at the same moment to obtain the i-th frame of the composite video.
在一些实施例中,装置1100还可以包括呈现模块,被配置为呈现分布式拍摄系统中的特定拍摄设备拍摄的视频。In some embodiments, the apparatus 1100 may further include a presentation module configured to present videos captured by specific capture devices in the distributed capture system.
示例性地,图11中的装置1100可以被实现为中心设备110,或者可以被实现为中心设备110中的芯片或芯片系统,本公开的实施例对此不限定。图11中的装置1100能够用于实现上述结合图3至图9中中心设备110所述的各个过程,为了简洁,这里不再赘述。Exemplarily, the apparatus 1100 in FIG. 11 may be implemented as the central device 110 , or may be implemented as a chip or chip system in the central device 110 , which is not limited by the embodiments of the present disclosure. The apparatus 1100 in FIG. 11 can be used to implement the processes described above in conjunction with the central device 110 in FIG. 3 to FIG. 9 , and for the sake of brevity, details are not repeated here.
图12示出了可以用来实施本公开的实施例的示例设备1200的示意性框图。设备1200可以被实现为或者被包括在图1的客户端设备130中，或者设备1200可以被实现为或者被包括在图1的中心设备110中。FIG. 12 shows a schematic block diagram of an example device 1200 that may be used to implement embodiments of the present disclosure. The device 1200 may be implemented as or included in the client device 130 of FIG. 1, or the device 1200 may be implemented as or included in the central device 110 of FIG. 1.
如图所示，设备1200包括中央处理单元(Central Processing Unit，CPU)1201、只读存储器(Read-Only Memory，ROM)1202以及随机存取存储器(Random Access Memory，RAM)1203。CPU 1201可以根据存储在ROM 1202和/或RAM 1203中的计算机程序指令或者从存储单元1208加载到ROM 1202和/或RAM 1203中的计算机程序指令，来执行各种适当的动作和处理。在ROM 1202和/或RAM 1203中，还可存储设备1200操作所需的各种程序和数据。CPU 1201和ROM 1202和/或RAM 1203通过总线1204彼此相连。输入/输出(I/O)接口1205也连接至总线1204。As shown in the figure, the device 1200 includes a central processing unit (Central Processing Unit, CPU) 1201, a read-only memory (Read-Only Memory, ROM) 1202, and a random access memory (Random Access Memory, RAM) 1203. The CPU 1201 can perform various appropriate actions and processes according to computer program instructions stored in the ROM 1202 and/or RAM 1203 or computer program instructions loaded from the storage unit 1208 into the ROM 1202 and/or RAM 1203. Various programs and data required for the operation of the device 1200 can also be stored in the ROM 1202 and/or RAM 1203. The CPU 1201 and the ROM 1202 and/or RAM 1203 are connected to each other via a bus 1204. An input/output (I/O) interface 1205 is also connected to the bus 1204.
设备1200中的多个部件连接至I/O接口1205,包括:输入单元1206,例如键盘、鼠标等;输出单元1207,例如各种类型的显示器、扬声器等;存储单元1208,例如磁盘、光盘等;以及通信单元1209,例如网卡、调制解调器、无线通信收发机等。通信单元1209允许设备1200通过诸如因特网的计算机网络和/或各种电信网络与其他设备交换信息/数据。Multiple components in the device 1200 are connected to the I/O interface 1205, including: an input unit 1206, such as a keyboard, a mouse, etc.; an output unit 1207, such as various types of displays, speakers, etc.; a storage unit 1208, such as a magnetic disk, an optical disk, etc. ; and a communication unit 1209, such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 1209 allows the device 1200 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
CPU 1201可以是各种具有处理和计算能力的通用和/或专用处理组件。可以被实现为的一些示例包括但不限于图形处理单元(Graphics Processing Unit,GPU)、各种专用的人工智能(Artificial Intelligence,AI)计算芯片、各种运行机器学习模型算法的计算单元、数字信号处理器(Digital Signal Processor,DSP)、以及任何适当的处理器、控制器、微控制器等,相应地可以被称为计算单元。CPU 1201执行上文所描述的各个方法和处理,例如过程800或900。例如,在一些实施例中,过程800或900可被实现为计算机软件程序,其被有形地包含于计算机可读介质,例如存储单元1208。在一些实施例中,计算机程序的部分或者全部可以经由ROM 1202和/或RAM 1203和/或通信单元1209而被载入和/或安装到设备1200上。当计算机程序加载到ROM 1202和/或RAM 1203并由CPU 1201执行时,可以执行上文描述的过程800或900的一个或多个步骤。备选地,在其他实施例中,CPU 1201可以通过其他任何适当的方式(例如,借助于固件)而被配置为执行过程800或900。 CPU 1201 may be various general and/or special purpose processing components having processing and computing capabilities. Some examples that can be implemented as include, but are not limited to, Graphics Processing Unit (Graphics Processing Unit, GPU), various dedicated artificial intelligence (Artificial Intelligence, AI) computing chips, various computing units that run machine learning model algorithms, digital signal A processor (Digital Signal Processor, DSP), and any suitable processor, controller, microcontroller, etc., may accordingly be referred to as a computing unit. The CPU 1201 executes the various methods and processes described above, such as the process 800 or 900. For example, in some embodiments, process 800 or 900 may be implemented as a computer software program tangibly embodied on a computer-readable medium, such as storage unit 1208 . In some embodiments, part or all of the computer program may be loaded and/or installed on the device 1200 via the ROM 1202 and/or RAM 1203 and/or the communication unit 1209. When a computer program is loaded into ROM 1202 and/or RAM 1203 and executed by CPU 1201, one or more steps of process 800 or 900 described above may be performed. Alternatively, in other embodiments, the CPU 1201 may be configured to execute the process 800 or 900 in any other suitable manner (eg, by means of firmware).
示例性地,图12中的设备1200可以被实现为电子设备(如客户端设备130或中心设备110),或者可以被实现为电子设备中的芯片或芯片系统,本公开的实施例对此不限定。Exemplarily, the device 1200 in FIG. 12 may be implemented as an electronic device (such as the client device 130 or the central device 110), or may be implemented as a chip or a chip system in an electronic device, and embodiments of the present disclosure do not limited.
本公开的实施例还提供了一种芯片,该芯片可以包括输入接口、输出接口和处理电路。在本公开的实施例中,可以由输入接口和输出接口完成上述信令或数据的交互,由处理电路完成信令或数据信息的生成以及处理。Embodiments of the present disclosure also provide a chip, which may include an input interface, an output interface, and a processing circuit. In the embodiments of the present disclosure, the above signaling or data interaction may be completed by the input interface and the output interface, and the generation and processing of the signaling or data information may be completed by the processing circuit.
本公开的实施例还提供了一种芯片系统,包括处理器,用于支持客户端设备130或中心设备110以实现上述任一实施例中所涉及的功能。在一种可能的设计中,芯片系统还可以包括存储器,用于存储必要的程序指令和数据,当处理器运行该程序指令时,使得安装该芯片系统的设备实现上述任一实施例中所涉及的方法。该芯片系统,可以由芯片构成,也可以包含芯片和其他分立器件。Embodiments of the present disclosure also provide a chip system, including a processor, configured to support the client device 130 or the central device 110 to implement the functions involved in any of the foregoing embodiments. In a possible design, the system-on-a-chip may further include a memory for storing necessary program instructions and data, and when the processor runs the program instructions, the device installed with the system-on-a-chip can implement the program described in any of the above-mentioned embodiments. Methods. The system-on-a-chip may consist of chips, or may include chips and other discrete devices.
本公开的实施例还提供了一种处理器,用于与存储器耦合,存储器存储有指令,当处理器运行所述指令时,使得处理器执行上述任一实施例中涉及客户端设备130或中心设备110的方法和功能。Embodiments of the present disclosure also provide a processor, configured to be coupled with a memory, the memory stores instructions, and when the processor executes the instructions, the processor executes any of the above-mentioned embodiments involving the client device 130 or the center. Methods and Functions of Device 110 .
本公开的实施例还提供了一种包含指令的计算机程序产品,其在计算机上运行时,使得计算机执行上述各实施例中任一实施例中涉及客户端设备130或中心设备110的方法和功能。Embodiments of the present disclosure also provide a computer program product containing instructions, which, when run on a computer, cause the computer to execute the methods and functions related to the client device 130 or the central device 110 in any of the above-mentioned embodiments .
本公开的实施例还提供了一种计算机可读存储介质,其上存储有计算机指令,当处理器运行所述指令时,使得处理器执行上述任一实施例中涉及客户端设备130或中心设备110的方法和功能。Embodiments of the present disclosure also provide a computer-readable storage medium on which computer instructions are stored, and when the processor executes the instructions, the processor executes any of the above-mentioned embodiments involving the client device 130 or the central device. 110 methods and functions.
通常，本公开的各种实施例可以以硬件或专用电路、软件、逻辑或其任何组合来实现。一些方面可以用硬件实现，而其他方面可以用固件或软件实现，其可以由控制器，微处理器或其他计算设备执行。虽然本公开的实施例的各个方面被示出并描述为框图，流程图或使用一些其他图示表示，但是应当理解，本文描述的框，装置、系统、技术或方法可以实现为，如非限制性示例，硬件、软件、固件、专用电路或逻辑、通用硬件或控制器或其他计算设备，或其某种组合。In general, the various embodiments of the present disclosure may be implemented in hardware or special-purpose circuits, software, logic, or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software, which may be executed by a controller, microprocessor, or other computing device. While various aspects of the embodiments of the present disclosure are shown and described as block diagrams, flowcharts, or using some other pictorial representation, it should be understood that the blocks, apparatuses, systems, techniques, or methods described herein can be implemented as, as non-limiting examples, hardware, software, firmware, special-purpose circuits or logic, general-purpose hardware or controllers or other computing devices, or some combination thereof.
本公开还提供有形地存储在非暂时性计算机可读存储介质上的至少一个计算机程序产品。该计算机程序产品包括计算机可执行指令,例如包括在程序模块中的指令,其在目标的真实或虚拟处理器上的设备中执行,以执行如上参考附图的过程/方法。通常,程序模块包括执行特定任务或实现特定抽象数据类型的例程、程序、库、对象、类、组件、数据结构等。在各种实施例中,可以根据需要在程序模块之间组合或分割程序模块的功能。用于程序模块的机器可执行指令可以在本地或分布式设备内执行。在分布式设备中,程序模块可以位于本地和远程存储介质中。The present disclosure also provides at least one computer program product tangibly stored on a non-transitory computer-readable storage medium. The computer program product comprises computer-executable instructions, eg included in program modules, which are executed in a device on a real or virtual processor of a target to perform the process/method as above with reference to the accompanying drawings. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. In various embodiments, the functionality of the program modules may be combined or divided as desired among the program modules. Machine-executable instructions for program modules may be executed within local or distributed devices. In a distributed device, program modules may be located in both local and remote storage media.
用于实现本公开的方法的计算机程序代码可以用一种或多种编程语言编写。这些计算机程序代码可以提供给通用计算机、专用计算机或其他可编程的数据处理装置的处理器，使得程序代码在被计算机或其他可编程的数据处理装置执行的时候，引起在流程图和/或框图中规定的功能/操作被实施。程序代码可以完全在计算机上、部分在计算机上、作为独立的软件包、部分在计算机上且部分在远程计算机上或完全在远程计算机或服务器上执行。Computer program code for implementing the methods of the present disclosure may be written in one or more programming languages. These computer program codes can be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, so that when the program codes are executed by the computer or the other programmable data processing apparatus, the functions/operations specified in the flowcharts and/or block diagrams are implemented. The program code may execute entirely on the computer, partly on the computer, as a stand-alone software package, partly on the computer and partly on a remote computer, or entirely on the remote computer or server.
在本公开的上下文中,计算机程序代码或者相关数据可以由任意适当载体承载,以使得设备、装置或者处理器能够执行上文描述的各种处理和操作。载体的示例包括信号、计算机可读介质、等等。信号的示例可以包括电、光、无线电、声音或其它形式的传播信号,诸如载波、红外信号等。In the context of the present disclosure, computer program code or related data may be carried by any suitable carrier to enable a device, apparatus or processor to perform the various processes and operations described above. Examples of carriers include signals, computer readable media, and the like. Examples of signals may include electrical, optical, radio, sound, or other forms of propagated signals, such as carrier waves, infrared signals, and the like.
A computer-readable medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination thereof. More detailed examples of the computer-readable storage medium include an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical storage device, a magnetic storage device, or any suitable combination thereof.
Further, although the operations of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Rather, the steps depicted in the flowcharts may be performed in a different order. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps for execution. It should also be noted that the features and functions of two or more apparatuses according to the present disclosure may be embodied in one apparatus. Conversely, the features and functions of one apparatus described above may be further divided so as to be embodied by multiple apparatuses.
Various implementations of the present disclosure have been described above. The foregoing description is exemplary rather than exhaustive, and is not limited to the disclosed implementations. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described implementations. The terminology used herein was chosen to best explain the principles of the implementations, the practical application, or improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the implementations disclosed herein.

Claims (13)

  1. A video presentation method, comprising:
    a client device receiving a composite video from a central device, wherein an i-th frame of the composite video is obtained based on i-th frames of videos respectively captured at the same moment by a plurality of shooting devices in a distributed shooting system, i being any positive integer;
    the client device determining, based on the composite video, a video to be presented, the video to be presented being associated with at least one of the plurality of shooting devices; and
    the client device presenting the video to be presented.
  2. The method according to claim 1, wherein the client device determining the video to be presented based on the composite video comprises determining each frame of the video to be presented through the following process:
    the client device determining, from the i-th frame of the composite video, the i-th frame of a video captured by a target shooting device, the target shooting device being a shooting device located at a target position among the plurality of shooting devices; and
    the client device determining the i-th frame of the video captured by the target shooting device as the i-th frame of the video to be presented.
  3. The method according to claim 1, wherein the client device determining the video to be presented based on the composite video comprises:
    the client device receiving a user input instruction, the user input instruction indicating a target shooting device; and
    the client device determining each frame of the video to be presented through the following process: determining, from the i-th frame of the composite video, the i-th frame of a video captured by the target shooting device; and determining the i-th frame of the video captured by the target shooting device as the i-th frame of the video to be presented.
  4. The method according to any one of claims 1 to 3, further comprising:
    the client device receiving a user's look-around viewing operation on a current frame of the video to be presented; and
    the client device, in response to the look-around viewing operation, presenting a look-around image sequence corresponding to the current frame of the video to be presented.
  5. The method according to claim 4, wherein the client device presenting the look-around image sequence based on the look-around viewing operation comprises:
    the client device, in response to the look-around viewing operation, determining, from the composite video, a frame corresponding to the current frame of the video to be presented;
    the client device splitting the determined frame corresponding to the current frame of the video to be presented into a plurality of images respectively corresponding to the plurality of shooting devices;
    the client device obtaining the look-around image sequence based on the plurality of images; and
    the client device presenting the look-around image sequence.
  6. The method according to claim 5, wherein the client device obtaining the look-around image sequence based on the plurality of images comprises:
    the client device arranging the plurality of images according to a positional order of the plurality of shooting devices to obtain the look-around image sequence; or
    the client device arranging the plurality of images according to the positional order of the plurality of shooting devices; and
    the client device inserting an intermediate frame between every two adjacent images of the plurality of images to obtain the look-around image sequence.
  7. The method according to any one of claims 1 to 6, wherein the i-th frame of the composite video is obtained by stitching the i-th frames respectively captured by the plurality of shooting devices at the same moment.
  8. A video presentation method, comprising:
    a central device receiving videos respectively captured by a plurality of shooting devices in a distributed shooting system;
    the central device obtaining a composite video based on the videos respectively captured by the plurality of shooting devices, wherein an i-th frame of the composite video is obtained based on i-th frames of the videos respectively captured at the same moment by the plurality of shooting devices, i being any positive integer; and
    the central device sending the composite video to a client device.
  9. The method according to claim 8, wherein the central device obtaining the composite video comprises determining each frame of the composite video through the following process:
    the central device stitching the i-th frames of the videos respectively captured by the plurality of shooting devices at the same moment to obtain the i-th frame of the composite video.
  10. The method according to claim 8 or 9, further comprising:
    the central device presenting a video captured by a specific shooting device in the distributed shooting system.
  11. An electronic device, comprising a processor and a memory, the memory storing computer instructions which, when executed by the processor, cause the electronic device to perform the method according to any one of claims 1 to 7 or the method according to any one of claims 8 to 10.
  12. A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the method according to any one of claims 1 to 7 or the method according to any one of claims 8 to 10.
  13. A computer program product comprising computer-executable instructions which, when executed, implement the method according to any one of claims 1 to 7 or the method according to any one of claims 8 to 10.
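
The claimed operations can be illustrated with short, hedged sketches. Claims 7 and 9 describe the central device forming each composite frame by stitching together the i-th frames captured by the shooting devices at the same moment. The Python sketch below assumes same-sized frames placed side by side in a fixed device order; that layout, the use of NumPy, and the function name are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def stitch_composite_frame(frames):
    """Stitch the i-th frames from all shooting devices (same-sized
    H x W x 3 arrays, in a fixed device order) into one composite
    frame by concatenating them side by side."""
    if len({f.shape for f in frames}) != 1:
        raise ValueError("all per-device frames must share the same shape")
    return np.concatenate(frames, axis=1)  # result: H x (N * W) x 3
```

The composite video would then be built by applying stitch_composite_frame to the time-aligned per-device frames at every frame index before the central device sends the result to the client device.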
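
Claims 2 and 3 have the client device recover, from the i-th composite frame, the i-th frame captured by a target shooting device. Under the side-by-side layout assumed in the previous sketch, this reduces to cropping the horizontal slice at the target device's index; the helper below is a hypothetical illustration of that step, not a decoding procedure specified by the claims.

```python
def extract_device_frame(composite_frame, device_index, num_devices):
    """Crop, out of a horizontally stitched composite frame (a NumPy
    array), the sub-frame belonging to the shooting device at
    device_index."""
    width = composite_frame.shape[1] // num_devices
    start = device_index * width
    return composite_frame[:, start:start + width]
```

The video to be presented is then the sequence extract_device_frame(composite[i], target_index, num_devices) over all frame indices i, where target_index follows from the target position (claim 2) or from a user input instruction (claim 3).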
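
Claims 5 and 6 describe building the look-around image sequence: the composite frame corresponding to the current frame is split into one image per shooting device, the images are ordered by the cameras' physical positions, and intermediate frames may be inserted between adjacent images. The claims do not say how intermediate frames are generated, so the linear cross-fade below is only one assumed option; the per-device images are expected to have been split out already, for example with extract_device_frame above.

```python
def look_around_sequence(images, inter_frames=0):
    """Build a look-around image sequence from per-device images (NumPy
    arrays) already sorted by the cameras' physical positions.

    inter_frames: number of blended frames inserted between each pair
                  of adjacent camera images (0 disables interpolation,
                  matching the first alternative of claim 6)."""
    if inter_frames == 0 or len(images) < 2:
        return list(images)
    sequence = []
    for a, b in zip(images, images[1:]):
        sequence.append(a)
        for k in range(1, inter_frames + 1):
            t = k / (inter_frames + 1)
            # naive cross-fade between two adjacent viewpoints
            sequence.append(((1 - t) * a + t * b).astype(a.dtype))
    sequence.append(images[-1])
    return sequence
```

Presenting this sequence in order gives the effect of sweeping the viewpoint around the scene for the paused moment.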
PCT/CN2022/084913 2021-05-31 2022-04-01 Video presentation method, electronic device, computer storage medium and program product WO2022252797A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202110600072.X 2021-05-31
CN202110600072 2021-05-31
CN202110837120.7A CN115484486A (en) 2021-05-31 2021-07-23 Video presentation method, electronic device, computer storage medium, and program product
CN202110837120.7 2021-07-23

Publications (1)

Publication Number Publication Date
WO2022252797A1

Family

ID=84322580

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/084913 WO2022252797A1 (en) 2021-05-31 2022-04-01 Video presentation method, electronic device, computer storage medium and program product

Country Status (1)

Country Link
WO (1) WO2022252797A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101365116A (en) * 2008-09-28 2009-02-11 上海外轮理货有限公司 Real-time monitoring system for container loading, unloading and checking
CN101521745A (en) * 2009-04-14 2009-09-02 王广生 Multi-lens optical center superposing type omnibearing shooting device and panoramic shooting and retransmitting method
US20150195625A1 (en) * 2012-10-10 2015-07-09 Fujitsu Limited Information processing apparatus, information processing system, recording medium, and method for transmission and reception of moving image data
CN108965798A (en) * 2018-06-27 2018-12-07 山东大学 Distributed short distance panorama monitoring terminal, system and the layout method of seashore birds
CN108961421A (en) * 2018-06-27 2018-12-07 深圳中兴网信科技有限公司 Control method, control system and the computer readable storage medium of Virtual Space
CN110267008A (en) * 2019-06-28 2019-09-20 Oppo广东移动通信有限公司 Image processing method, device, server and storage medium

Similar Documents

Publication Publication Date Title
CN112738010B (en) Data interaction method and system, interaction terminal and readable storage medium
CN110798697B (en) Video display method, device and system and electronic equipment
CN111970524B (en) Control method, device, system, equipment and medium for interactive live broadcast and microphone connection
WO2017113734A1 (en) Video multipoint same-screen play method and system
CN112738534B (en) Data processing method and system, server and storage medium
US20060221188A1 (en) Method and apparatus for composing images during video communications
WO2023169297A1 (en) Animation special effect generation method and apparatus, device, and medium
CN112738495B (en) Virtual viewpoint image generation method, system, electronic device and storage medium
CN108243318B (en) Method and device for realizing live broadcast of multiple image acquisition devices through single interface
WO2019156819A1 (en) Method and apparatus for processing and distributing live virtual reality content
US11223662B2 (en) Method, system, and non-transitory computer readable record medium for enhancing video quality of video call
CN110730340B (en) Virtual audience display method, system and storage medium based on lens transformation
EP4050889A1 (en) Conference device with multi-videostream capability
CN107580228B (en) Monitoring video processing method, device and equipment
GB2584282A (en) Image acquisition system and method
CN114374853A (en) Content display method and device, computer equipment and storage medium
WO2024027611A1 (en) Video live streaming method and apparatus, electronic device and storage medium
WO2022252797A1 (en) Video presentation method, electronic device, computer storage medium and program product
CN109862385B (en) Live broadcast method and device, computer readable storage medium and terminal equipment
CN115484486A (en) Video presentation method, electronic device, computer storage medium, and program product
CN113938617A (en) Multi-channel video display method and equipment, network camera and storage medium
CN112565799A (en) Video data processing method and device
CN112738646A (en) Data processing method, device, system, readable storage medium and server
CN112738009A (en) Data synchronization method, device, synchronization system, medium and server
US20230134623A1 (en) Method and system for controlling interactive live streaming co-hosting, device, and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22814844

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22814844

Country of ref document: EP

Kind code of ref document: A1