WO2022206168A1 - Video production method and system - Google Patents

Video production method and system

Info

Publication number
WO2022206168A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
cameras
code streams
time
images
Prior art date
Application number
PCT/CN2022/074917
Other languages
English (en)
Chinese (zh)
Inventor
屈小刚
刘新宝
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2022206168A1

Classifications

    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
    • H04N 5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • H04N 5/2624 Studio circuits for obtaining an image which is composed of whole input images, e.g. splitscreen
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/21805 Source of audio or video content enabling multiple viewpoints, e.g. using a plurality of cameras
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/23418 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • H04N 21/239 Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N 21/2393 Interfacing the upstream path of the transmission network involving handling client requests
    • H04N 21/27 Server based end-user applications
    • H04N 23/60 Control of cameras or camera modules

Definitions

  • the present application relates to the technical field of image processing, and in particular, to a video production method and system.
  • the present application provides a video production method and system, which are used to improve the flexibility of video production and improve user experience.
  • the present application provides a video production method, the method comprising: receiving a video production request message, where the video production request message includes an identifier of a first video production mode, a first time, and the identifiers of the M cameras corresponding to the first video production mode, M being an integer greater than 1; acquiring at least M images based on the identifiers of the M cameras and the first time, where the at least M images are obtained by the M cameras shooting the same scene from different angles at different times; and creating a first video file, where the first video file includes the at least M images.
  • Through the above technical solution, a plurality of images can be obtained according to the identifiers of the cameras and the start time included in the video production request message, and a video can then be made from images at different moments. Compared with the time-still photos obtained in the prior art, this improves the flexibility of video production and thereby the user experience.
  • the acquiring at least M images based on the identifiers of the M cameras and the first time includes:
  • determining M video code streams based on the identifiers of the M cameras, the M video code streams being obtained by the M cameras respectively shooting the same scene from different angles, where the target timestamp of each of the M video code streams is related to the first time; and obtaining the M images based on the target timestamp of each of the M video code streams, where the target timestamps of the M video code streams are all different.
  • Through the above technical solution, the target timestamp of each of the M video code streams can be determined, directly or indirectly, from the first time, and the corresponding image can then be obtained based on that target timestamp.
  • the target timestamp of each of the M video code streams is related to the preset sequence of the M cameras. It should be understood that the preset sequence of the M cameras is related to the installation positions or shooting angles of the cameras. In this way, the target time stamp can be determined through a preset sequence, so that the obtained images are relatively continuous, the produced video effect is better, and the user experience can be improved.
  • the target timestamp of each of the M video code streams is related to the sequence of the identifications of the M cameras. It should be understood that the order of identification of the M cameras is not specifically limited in the present application. Through the above technical solution, the target timestamp of the video code stream can be determined according to the sequence of the identification of the cameras, so that the image can be obtained according to the user's requirement.
  • the acquiring the M images based on the target timestamp of each of the M video code streams includes:
  • decoding a frame of video stream data corresponding to the target timestamp of each video code stream, thereby obtaining the image corresponding to that target timestamp.
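  • As a concrete illustration of this decoding step, the sketch below seeks to a target timestamp in one code stream and decodes the single frame found there. It uses OpenCV as a stand-in decoder; the stream path and the millisecond-based timestamp argument are illustrative assumptions, not the patent's implementation.

```python
import cv2

def extract_frame(stream_path: str, target_timestamp_ms: float):
    """Decode the video frame nearest target_timestamp_ms from one code stream."""
    cap = cv2.VideoCapture(stream_path)
    try:
        # Seek the decoder to the target timestamp, then decode one frame.
        cap.set(cv2.CAP_PROP_POS_MSEC, target_timestamp_ms)
        ok, image = cap.read()
        return image if ok else None
    finally:
        cap.release()
```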
  • the video production request message further includes an identifier of a second video production mode and the identifiers of the L cameras corresponding to the second video production mode, where L is an integer greater than 1; the method further includes: acquiring L images based on the identifiers of the L cameras, the L images being obtained by the L cameras shooting the same scene from different angles at the same moment; and making a second video file, where the second video file includes the at least M images and the L images.
  • Through the above technical solution, when the video production request message includes multiple video production modes, the images corresponding to each video production mode can be obtained separately, the images obtained from the multiple video production modes can be combined into one group of images, and a video can then be made based on that group of images.
  • the determining the M video code streams based on the identifiers of the M cameras includes: determining the M video code streams from N video code streams based on the identifiers of the M cameras, where the N code streams are obtained by N cameras shooting the same scene from different angles, and the N cameras include the M cameras.
  • It should be understood that N is the total number of cameras, and the N video code streams are the video code streams of all the cameras. Through the above technical solution, the video code streams corresponding to the video production mode can be determined from the N video code streams.
  • the video production request message further includes a production scene parameter, and the N code streams are one of N live code streams, N time-shift code streams, or N on-demand code streams. If the production scene parameter indicates that the video production is real-time production, the M video code streams are determined from the N live code streams, the N live code streams being acquired in real time from the N cameras. If the production scene parameter indicates that the video production is non-real-time production, the M video code streams are determined from the N time-shift code streams or the N on-demand code streams, the N on-demand or time-shift code streams being obtained by saving the N live code streams.
  • Through the above technical solution, the N video streams can be recorded, and live, time-shift, and on-demand capabilities can be provided, so that videos can be produced according to different timeliness requirements. For example, if a video must be produced in real time on the spot, the live streams can be used; if the video must be produced from content captured a few minutes earlier, the time-shift streams can be used; and if the video is to be produced after shooting has finished, the on-demand streams can be used. Moreover, because the code streams are recorded, the video can be re-produced if the first result is unsatisfactory.
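  • The timeliness selection described above can be sketched as a simple dispatch. The parameter values and the three stream collections are hypothetical names used for illustration, not the patent's interface.

```python
def select_source_streams(scene_param: str, live, time_shift, on_demand):
    """Pick the N source code streams according to the production scene parameter."""
    if scene_param == "real_time":
        # Real-time production reads directly from the live streams.
        return live
    # Non-real-time production reads from the recorded copies of the live
    # streams: time-shift for recent content, on-demand after shooting ends.
    return time_shift if time_shift is not None else on_demand
```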
  • the present application provides a video production system.
  • a receiving module, configured to receive a video production request message, where the video production request message includes an identifier of a first video production mode, a first time, and the identifiers of the M cameras corresponding to the first video production mode, M being an integer greater than 1;
  • an acquisition module, configured to acquire at least M images based on the identifiers of the M cameras and the first time, where the at least M images are obtained by the M cameras shooting the same scene from different angles at different times; and a production module, configured to produce a first video file, where the first video file includes the at least M images.
  • the system further includes a determination module, configured to determine the M video code streams based on the identifiers of the M cameras, the M video code streams being obtained by the M cameras respectively shooting the same scene from different angles, where the target timestamp of each of the M video code streams is related to the first time; the acquisition module is configured to obtain the M images based on the target timestamp of each of the M video code streams, where the target timestamps of the M video code streams are all different.
  • the target timestamp of each of the M video code streams is related to the preset sequence of the M cameras.
  • the target timestamp of each of the M video code streams is related to the sequence of the identifications of the M cameras.
  • the acquisition module is specifically configured to obtain the M images based on the target timestamp of each of the M video code streams as follows: decoding a frame of video stream data corresponding to the target timestamp of a first video code stream to obtain a first image, where the first video code stream is any one of the M video code streams and the first image is any one of the M images.
  • the video production request message further includes an identifier of a second video production mode and the identifiers of the L cameras corresponding to the second video production mode, where L is an integer greater than 1;
  • the acquisition module is further configured to acquire L images based on the identifiers of the L cameras, and the L images are obtained by shooting the same scene from different angles at the same moment by the L cameras;
  • the production module is further configured to produce a second video file, where the second video file includes the at least M images and the L images.
  • the determination module is specifically configured to determine the M video code streams based on the identifiers of the M cameras in the following manner: determining the M video code streams from N video code streams based on the identifiers of the M cameras, where the N code streams are obtained by N cameras shooting the same scene from different angles, and the N cameras include the M cameras.
  • the video production request message further includes a production scene parameter, and the N code streams are one of N live code streams, N time-shift code streams, or N on-demand code streams. The determination module is specifically configured to determine the M video code streams based on the identifiers of the M cameras as follows: if the production scene parameter indicates that the video production is real-time production, the M video code streams are determined from the N live code streams, the N live code streams being acquired in real time from the N cameras; if the production scene parameter indicates that the video production is non-real-time production, the M video code streams are determined from the N time-shift code streams or the N on-demand code streams, the N on-demand or time-shift code streams being obtained by saving the N live code streams.
  • the present application provides a video production apparatus, the apparatus having the function of implementing the video production method of the first aspect or any possible implementation manner of the first aspect.
  • the functions can be implemented by hardware, or can be implemented by hardware executing corresponding software.
  • the apparatus includes a communication interface for receiving and sending data, a processor, and a memory, the processor being configured to support the apparatus in performing the corresponding functions of the first aspect or any possible implementation of the first aspect.
  • the memory is coupled to the processor and holds program instructions necessary for the apparatus.
  • a computer-readable storage medium is provided, where instructions are stored in the computer-readable storage medium; when the instructions run on a computer, they cause the computer to execute the methods of the first aspect and the various embodiments above.
  • a computer program product comprising instructions, which, when executed on a computer, cause the computer to perform the methods of the first aspect and the various embodiments above.
  • a chip is provided, and logic in the chip is used to execute the methods in the first aspect and each of the above embodiments.
  • FIG. 1A is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 1B is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 2 is a system architecture diagram provided by an embodiment of the present application.
  • FIG. 3 is a flowchart of a method for camera calibration provided by an embodiment of the present application.
  • FIG. 4 is a flowchart of a method for synchronizing a code stream provided by an embodiment of the present application
  • FIG. 5 is a flowchart of a method for recording a code stream provided by an embodiment of the present application.
  • FIG. 6 is a flowchart of a video production method provided by an embodiment of the present application.
  • FIGS. 7A-7G are schematic diagrams of video production effects provided by an embodiment of the present application.
  • FIG. 8 is a flowchart of another video production method provided by an embodiment of the present application.
  • FIG. 9 is a structural block diagram of a video production system provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a video production apparatus according to an embodiment of the present application.
  • Bullet time: a photography technique used in movies, TV commercials, or computer games to simulate variable-speed special effects, such as enhanced slow motion and time-still effects. The effect is traditionally achieved by taking a series of still photos simultaneously with an array of cameras and combining them into a video.
  • the application scenarios of the embodiments of the present application are introduced.
  • Referring to FIG. 1A, it is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • the application scenario may include a target, N (where N is a positive integer) cameras, and a server.
  • the target is the object to be photographed by the camera.
  • the camera is an image acquisition device, which can provide remote batch parameter configuration, single manual parameter configuration, photographing, encoding and streaming, hard synchronization and other functions.
  • the N cameras can be arranged around the target, and can simultaneously photograph the target to obtain photos or videos.
  • the server may include multiple functional modules, such as a calibration module, a camera stream access module, a recording module, and a video production module.
  • the calibration module is used to obtain internal parameters and external parameters of the camera according to the photos taken by the camera, so as to eliminate the focus jitter of the camera.
  • the camera stream access module is used to receive the code stream of the camera, synchronize the code stream, and convert the format of the code stream to the same format.
  • the recording module is used to save the code stream of the camera from the camera stream access module, and provides the capabilities of live broadcast, time shift, recording and broadcasting, and on-demand.
  • the video production module is used to extract images from the captured live stream, time-shift stream or on-demand stream, and then process the images to finally synthesize a wonderful video.
  • the number of servers may be one or multiple.
  • the above functional modules may be located on the same or different servers.
  • the embodiment of the present application may include four servers, for example, a calibration server 11, a camera stream access server 12, a recording server 13, and a wonderful moment server 14 (also called a video production server). It should be understood that, for the functions performed by each server, reference may be made to the introduction of the above-mentioned functional modules, which will not be repeated here.
  • Referring to FIG. 2, a system architecture diagram provided by an embodiment of the present application is shown.
  • A control plane 21, an operation and maintenance plane 22, and a media plane 23 may be included.
  • The control plane 21 is used to provide the operation interface for wonderful moments, for example for configuring the parameters required for making a video.
  • the operation and maintenance plane 22 is used to provide system operation and maintenance capabilities, including alarms, performance indicators, logs, configuration, and upgrades.
  • the media plane 23 is used to provide functions such as camera calibration, camera streaming access, recording, and video production.
  • a video production method provided by an embodiment of the present application is described in detail below.
  • the method may specifically include the following processes: 1. camera calibration; 2. code stream synchronization; 3. code stream recording; 4. video production.
  • the above four processes are introduced in sequence below.
  • In the following, N is a positive integer greater than or equal to 1, and the servers in FIG. 1B are taken as an example to describe the method.
  • the camera can be calibrated to obtain the internal and external parameters of the camera, and then the image captured by the camera can be corrected according to the internal and external parameters of the camera in the process of video production to eliminate focus shake.
  • A flowchart of a method for camera calibration provided by an embodiment of the present application is shown in FIG. 3.
  • the method may include the following steps:
  • the calibration server receives a calibration request message.
  • the calibration request message may include the world coordinates of the center point of the shooting site, the world coordinates of the calibration object, the resolution of the cameras, the number of cameras, and the like. It should be understood that the world coordinates of the calibration object are the world coordinates of the target object to be photographed by the camera.
  • Specifically, the user can log in to the video production platform on a terminal device with an account and password, and then create a calibration task on the operation interface of the video production platform; the terminal device can respond to the user's operation and send a calibration request message to the calibration server.
  • the calibration server can receive the calibration request message.
  • the terminal device may be a mobile phone, a tablet computer, a wearable device (for example, a watch, a wristband, smart glasses, or a smart helmet), a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), etc., which is not limited in the embodiments of the present application.
  • the terminal device and the calibration server can be connected through a communication network.
  • the communication network may be a local area network or a wide area network switched by a relay device.
  • Exemplarily, the communication network may be a short-range communication network such as a wireless fidelity (Wi-Fi) hotspot network, a Bluetooth (BT) network, or a near field communication (NFC) network.
  • In other embodiments, the communication network may be a third-generation mobile communication technology (3G) network, a fourth-generation mobile communication technology (4G) network, a fifth-generation mobile communication technology (5G) network, a future evolved public land mobile network (PLMN), the Internet, or the like.
  • the calibration server sends a photographing instruction to at least one camera.
  • the N cameras may be triggered to take pictures. That is, the calibration server can trigger all cameras to take pictures.
  • In another embodiment, the calibration server may trigger the master camera (one camera) to take a picture, and the master camera then triggers the slave cameras to take pictures.
  • The distinction between master and slave cameras can be set by the user. In general, one camera is set as the master and the other cameras are set as slaves. Exemplarily, the user can set a camera as master or slave by toggling the master-slave setting switch/button on the camera.
  • the calibration server obtains N photos from the N cameras.
  • Each camera can receive the photographing instruction and perform the photographing function accordingly, so that each camera obtains one photo.
  • the calibration server may obtain N photos from the N cameras.
  • the calibration server obtains the internal and external parameters of the N cameras according to the N photos.
  • the internal and external parameters of each camera can be obtained, so as to correct the image.
  • the calibration server may upload the camera's internal parameters and external parameters to the wonderful moment server, so that the image can be corrected during subsequent video production.
  • In addition, the user can also view the calibration results, to avoid missing calibration results (for example, due to camera failure) that would affect the subsequent video production process.
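  • The patent does not specify a calibration algorithm; as one plausible sketch, a standard chessboard-based calibration with OpenCV recovers a camera's internal parameters (intrinsic matrix, distortion coefficients) and external parameters (rotation/translation vectors). The board size and photo paths below are assumptions for illustration.

```python
import cv2
import numpy as np

def calibrate_camera(photo_paths, board_size=(9, 6)):
    """Estimate one camera's internal and external parameters from its photos."""
    # Ideal 3D corner positions of the chessboard (on the z = 0 plane).
    board = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    board[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
    obj_pts, img_pts, gray = [], [], None
    for path in photo_paths:
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_pts.append(board)
            img_pts.append(corners)
    # Returns intrinsics (camera matrix, distortion coefficients) and
    # extrinsics (per-view rotation and translation vectors).
    _, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)
    return mtx, dist, rvecs, tvecs
```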
  • the code stream synchronization here includes format synchronization and time synchronization. That is, the code streams of the cameras can be adjusted to the same format, and the video frames at the same time in the code streams of different cameras can be set to the same timestamp, which facilitates the processing of the code streams.
  • The format of a code stream may include, for example, DASH, HLS, RTSP, or RTMP.
  • FIG. 4 is a flowchart of a method for synchronizing code streams provided by an embodiment of the present application.
  • the method may include the following steps:
  • the camera stream access server receives a camera stream access request message.
  • the user may create a camera stream access task on the operation interface of the terminal device, and then the terminal device may respond to the user's operation and send a camera stream access request message to the camera stream access server.
  • the camera stream access request message may include camera identifiers of N cameras, such as 1, 2, 3, and so on.
  • the camera stream access server sends a shooting instruction to the N cameras to instruct the N cameras to start shooting.
  • After the camera stream access server receives the camera stream access request message, it can send shooting instructions to the N cameras. Correspondingly, after receiving the shooting instruction, each camera can start shooting.
  • In one embodiment, each camera can code its stream when shooting, so that the code streams of different cameras can be distinguished.
  • The code of a code stream may be the same as, or related to, the identifier of the camera. For example, if the ID of a camera is 1, the code of that camera's stream may also be 1; or, if the ID of the camera is 1, the code of its stream may be 1-1, etc., which is not limited in this application.
  • the camera stream access server obtains N code streams from N cameras.
  • After receiving the shooting instruction, each camera can perform the shooting function and output a code stream.
  • the camera stream access server can obtain the corresponding code stream from each camera.
  • the camera stream access server converts the N code streams into the same format, and synchronizes the time of the N code streams.
  • Because different cameras may output code streams in different formats, the formats need to be converted to a single common format to facilitate subsequent video production.
  • Exemplarily, the code streams can be uniformly converted to the DASH format; of course, they can also be uniformly converted to the RTSP format, etc., which is not limited in this application.
  • the camera stream access server may further perform time synchronization on the video frames at the same time in the N code streams, that is, set the video frames at the same time to the same timestamp.
  • the camera stream access server may cache the acquired code stream.
  • Through the above steps, the N code streams of the N cameras can be converted into the same format and time-synchronized.
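  • A minimal sketch of the time-synchronization idea: rebase each camera's frame timestamps onto a shared clock, so that frames captured at the same instant carry the same timestamp across all N streams. The frame records and the capture_time field are hypothetical, not the patent's data model.

```python
def synchronize_timestamps(streams: dict, reference_epoch: float):
    """streams maps camera id -> list of frames; each frame records its
    wall-clock capture_time in seconds."""
    for frames in streams.values():
        for frame in frames:
            # Frames captured at the same wall-clock instant end up with
            # the same millisecond timestamp in every code stream.
            frame["ts_ms"] = round((frame["capture_time"] - reference_epoch) * 1000)
```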
  • FIG. 5 is a flowchart of a code stream recording method provided by an embodiment of the present application. Referring to FIG. 5, the method may include the following steps:
  • S501 The recording server receives a recording request message.
  • the user may create a recording task on the operation interface of the terminal device, and the terminal device may send a recording request message to the recording server in response to the user's operation.
  • the recording request message may include camera identifiers of N cameras.
  • the recording server obtains and saves the video code stream from the camera stream access server.
  • As mentioned above, the camera stream access server obtains the code streams from the cameras and caches them, so the recording server can obtain the video code streams from the camera stream access server and save them. In this way, if the produced video is later found unsatisfactory, the saved video streams can be used to re-produce it, which improves the user experience.
  • In addition, the recording server can provide live, time-shift, and on-demand services; the user can enable at least one of them, so that when making a video, the user can select the corresponding video stream according to different timeliness requirements.
  • FIG. 6 is a flowchart of a video production method provided by an embodiment of the present application. Referring to FIG. 6, the method may include the following steps:
  • S601 The wonderful moment server receives a video production request message.
  • the video production request message may include the identification of the video production mode, the start time corresponding to the video production mode, and the identification of the camera corresponding to the video production mode.
  • Specifically, the user can configure the parameters of the wonderful moment on the operation interface of the terminal device and then trigger the video production (for example, by clicking the "produce" button on the operation interface); the terminal device can respond to the user's operation and send a video production request message to the wonderful moment server.
  • the video production request message may include parameters of wonderful moments.
  • The parameters of the wonderful moment may include: the wonderful moment itself (i.e., the start time), the track point list (the arrangement order of the camera identifiers), the wonderful moment mode (also referred to as the video production mode), the camera identifiers corresponding to each mode, special effects (for example, zoom in/out, slow motion), the output resolution, and so on.
  • the track point list may include the identification of the cameras through which the video was produced.
  • the camera positions selected for video production are: 1-10, and the sequence of the camera positions during video production is: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, then the start and end camera positions (that is, which camera the video starts from and which camera ends) are 1 and 10, and the camera positions passing in the middle are: 2, 3, 4, 5, 6, 7, 8, 9.
  • It should be noted that the ordering of the camera position numbers is not limited to the above; for example, they may also be arranged in other orders, such as 1, 3, 5, 7, 9, 2, 4, 6, 8, 10, etc.
  • the application is not limited in this regard.
  • the video production mode may include three modes: dynamic flow mode, static bullet time mode, and fixed camera mode.
  • For ease of description, the dynamic flow mode may be denoted as the "first video production mode", the static bullet time mode as the "second video production mode", and the fixed camera position mode as the "third video production mode".
  • video production modes in the embodiments of the present application include but are not limited to the above three modes.
  • the dynamic flow mode can be understood as: selecting images from videos of multiple cameras within a certain time range, and then synthesizing the videos. That is, time is changing and space is also changing.
  • the static bullet time mode can be understood as: selecting images at the same time from the images of multiple cameras, and then synthesizing the images of different cameras at the same time into a video. That is, time does not change, space changes.
  • the fixed camera mode can be understood as: selecting a video of a certain time range from the video of one camera. That is, time is changing and space is not changing.
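  • The three modes and the per-mode request parameters can be summarized as follows; the field names below are an illustrative encoding for this description, not the patent's message format.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class Mode(Enum):
    DYNAMIC_FLOW = 1        # time changes, viewpoint changes
    STATIC_BULLET_TIME = 2  # time fixed, viewpoint changes
    FIXED_CAMERA = 3        # time changes, viewpoint fixed

@dataclass
class ModeRequest:
    mode: Mode
    start_time: float           # T1 or T2; start of range T3 for FIXED_CAMERA
    end_time: Optional[float]   # only meaningful for FIXED_CAMERA (T3)
    camera_ids: List[int]       # the M, L, or P cameras for this mode
```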
  • the video production request message may include an identifier of at least one video production mode, a start time corresponding to at least one video production mode, and a camera identifier corresponding to each video mode.
  • it can include the following situations:
  • Case 1: the video production request message includes the identifier of the first video production mode, the start time T1 corresponding to the first video production mode, and the identifiers of the M cameras corresponding to the first video production mode.
  • M is an integer greater than 1, and M is less than or equal to N.
  • FIG. 7A is a schematic diagram of the effect of a dynamic flow mode provided by an embodiment of the present application.
  • Case 2: the video production request message includes the identifier of the second video production mode, the start time T2 corresponding to the second video production mode, and the identifiers of the L cameras corresponding to the second video production mode.
  • L is an integer greater than 1, and L is less than or equal to N.
  • FIG. 7B is a schematic diagram of the effect of a static bullet time mode provided by an embodiment of the present application.
  • Case 3: the video production request message includes the identifier of the third video production mode, the start and end times T3 corresponding to the third video production mode, and the identifiers of the P cameras corresponding to the third video production mode.
  • P is equal to 1.
  • FIG. 7C is a schematic diagram of the effect of a fixed camera position mode provided by an embodiment of the present application.
  • It should be noted that T1 and T2 represent starting times, while T3 represents a time range; the magnitude relationship between M, L, and P is not specifically limited in the embodiments of the present application.
  • Case 4: the video production request message includes the identifier of the first video production mode, the start time T1 corresponding to the first video production mode or the start time T2 corresponding to the second video production mode, the identifiers of the M cameras corresponding to the first video production mode, the identifier of the second video production mode, and the identifiers of the L cameras corresponding to the second video production mode.
  • FIG. 7D it is a schematic diagram of the effect of mixing a dynamic flow mode and a static bullet time mode according to an embodiment of the present application.
  • Case 5: the video production request message includes the identifier of the first video production mode, the start time T1 corresponding to the first video production mode or the start and end time T3 corresponding to the third video production mode, the identifiers of the M cameras corresponding to the first video production mode, the identifier of the third video production mode, and the identifiers of the P cameras corresponding to the third video production mode.
  • FIG. 7E is a schematic diagram of the effect of mixing a dynamic flow mode and a fixed camera position mode provided by an embodiment of the present application.
  • Case 6: the video production request message includes the identifier of the second video production mode, the start time T2 corresponding to the second video production mode or the start and end time T3 corresponding to the third video production mode, the identifiers of the L cameras corresponding to the second video production mode, the identifier of the third video production mode, and the identifiers of the P cameras corresponding to the third video production mode.
  • FIG. 7F is a schematic diagram of the effect of mixing a static bullet time mode and a fixed camera position mode provided by an embodiment of the present application.
  • Case 7: the video production request message includes the identifier of the first video production mode, the start time T1 corresponding to the first video production mode or the start time T2 corresponding to the second video production mode or the start and end time T3 corresponding to the third video production mode, the identifiers of the M cameras corresponding to the first video production mode, the identifier of the second video production mode, the identifiers of the L cameras corresponding to the second video production mode, the identifier of the third video production mode, and the identifiers of the P cameras corresponding to the third video production mode. Exemplarily, FIG. 7G is a schematic diagram of the effect of mixing a dynamic flow mode, a static bullet time mode, and a fixed camera position mode provided by an embodiment of the present application.
  • It can be seen from the above that if the video production request message includes a single video production mode, namely the dynamic flow mode or the static bullet time mode, the message also includes the corresponding start time; if the included mode is the fixed camera position mode, the message also includes the start and end time (i.e., a time range).
  • If the video production request message includes two video production modes, providing one time is sufficient: for the dynamic flow mode and the static bullet time mode, provide the start time T1 of the dynamic flow mode or the start time T2 of the static bullet time mode; for the dynamic flow mode and the fixed camera position mode, provide the start time T1 of the dynamic flow mode or the start and end time range T3 of the fixed camera position mode; for the static bullet time mode and the fixed camera position mode, provide the start time T2 of the static bullet time mode or the start and end time range T3 of the fixed camera position mode.
  • If the video production request message includes all three video production modes, one of the start time T2 of the static bullet time mode, the start time T1 of the dynamic flow mode, or the start and end time T3 of the fixed camera position mode can be provided.
  • the start time of each video production mode may also be included in the video production request message.
  • FIGS. 7A-7G are only schematic illustrations, and in practical applications, the effects of the video production mode are not limited to the above examples.
  • S602 The wonderful moment server determines the corresponding video code streams according to the identifiers of the cameras corresponding to the video production mode.
  • Since the situations of the video production request message differ, the video code streams to be obtained also differ.
  • The acquired video code streams are introduced below based on the above seven cases.
  • In the first case, the wonderful moment server acquires M video code streams according to the identifiers of the M cameras corresponding to the first video production mode.
  • the M video code streams are obtained respectively based on the shooting of the same scene by M cameras from different angles.
  • Exemplarily, if the identifiers of the M cameras are 1-10, the wonderful moment server can acquire the code streams saved in the recording server starting from the code stream with camera identifier 1, until the code stream with camera identifier 10 is obtained.
  • In the second case, the wonderful moment server obtains L video code streams according to the identifiers of the L cameras corresponding to the second video production mode.
  • the L video streams are obtained based on L cameras shooting the same scene from different angles.
  • In the third case, the wonderful moment server obtains one video code stream according to the identifier of the one camera corresponding to the third video production mode.
  • one video code stream is obtained by one camera shooting the same scene from the same angle in a certain period of time.
  • In the fourth case, the wonderful moment server acquires (M+L) video code streams according to the identifiers of the M cameras corresponding to the first video production mode and the identifiers of the L cameras corresponding to the second video production mode.
  • In the fifth case, the wonderful moment server acquires (M+P) video code streams according to the identifiers of the M cameras corresponding to the first video production mode and the identifiers of the P cameras corresponding to the third video production mode.
  • In the sixth case, the wonderful moment server acquires (L+P) video code streams according to the identifiers of the L cameras corresponding to the second video production mode and the identifiers of the P cameras corresponding to the third video production mode.
  • In the seventh case, the wonderful moment server acquires (M+L+P) video code streams according to the identifiers of the M cameras corresponding to the first video production mode, the identifiers of the L cameras corresponding to the second video production mode, and the identifiers of the P cameras corresponding to the third video production mode.
  • It should be noted that when the video production request message includes at least two video production modes, the message may also include the sequence of the video production modes, so that the wonderful moment server can acquire the code streams in that sequence.
  • Exemplarily, suppose the video production modes include a dynamic flow mode and a static bullet time mode, the dynamic flow mode comes before the static bullet time mode, the identifiers of the cameras corresponding to the dynamic flow mode are 1-5, and the identifiers of the cameras corresponding to the static bullet time mode are 5-3. The wonderful moment server can then obtain, in turn, the code streams with camera identifiers 1-5 saved in the recording server, and after the dynamic flow mode ends, continue to obtain code streams from the recording server according to the camera identifiers corresponding to the static bullet time mode.
  • S603 The wonderful moment server acquires a corresponding image from the video stream determined in S602 based on the start time corresponding to the video production mode.
  • In the first case, at least one image is acquired from each of the M video code streams based on the start time T1 of the dynamic flow mode, to obtain at least M images.
  • the at least M images are from video frames with different time stamps in the M video code streams.
  • the target timestamp of each of the M video code streams may be determined directly or indirectly through the start time T1. Or in other words, the target timestamp of each video code stream in the M video code streams is related to the start time T1.
  • the target time stamp is the time stamp of the target video frame determined in each video code stream.
  • The M images can be obtained based on the target video frame in each video code stream. For example, assume the target video frame of the first of the M video code streams is the first video frame, the target video frame of the second is the second video frame, the target video frame of the third is the third video frame, and so on.
  • The timestamp of the first video frame can be determined from the start time T1 (for example, the video frame in the first video code stream whose timestamp is closest to T1 can be taken as the first video frame); the timestamp of the second video frame can be determined from the timestamp of the first video frame (for example, the timestamp of the frame immediately following the first video frame); the timestamp of the third video frame can be determined from the timestamp of the second video frame (for example, the timestamp of the frame immediately following the second video frame); and so on, until the timestamp of each target video frame is determined, as sketched after this bullet.
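  • Under the assumption of a fixed frame interval (for example 20 ms at 50 fps, matching the 9-minutes-0-seconds examples below) and of T1 falling on a frame boundary, the chain of target timestamps can be sketched as:

```python
def dynamic_flow_timestamps(t1_ms: float, num_cameras: int,
                            frame_interval_ms: float = 20.0):
    """Target timestamp per camera: the first camera uses the frame closest
    to T1, and each subsequent camera advances by one frame interval."""
    return [t1_ms + i * frame_interval_ms for i in range(num_cameras)]
```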
  • the target timestamp of each of the M video code streams is related to the preset order of the M cameras.
  • the preset order is related to the installation position or shooting angle of the camera.
  • the target timestamp of each of the M video streams is related to the order of identification of the M cameras.
  • Specifically, the camera identifier with the smallest number may be determined among the identifiers of the M cameras (denoted, for example, as the first identifier); the first video frame is then determined in the video code stream of the camera corresponding to the first identifier, and the first image is finally obtained based on the first video frame, where the timestamp of the first video frame is closest to T1.
  • Next, the camera identifier with the second-smallest number can be determined among the identifiers of the M cameras (denoted, for example, as the second identifier); the second video frame is then determined in the video code stream of the camera corresponding to the second identifier, and the second image is finally obtained based on the second video frame, where the timestamp of the second video frame is that of the frame immediately following the first video frame.
  • Exemplarily, assume the dynamic flow mode corresponds to 5 camera identifiers, such as 1-5. The smallest camera identifier, 1, is determined first; the first video frame, whose timestamp is closest to T1 (for example, 9 minutes 0 seconds), is then determined in the video code stream of the camera with identifier 1, and an image at 9 minutes 0 seconds is obtained from that video frame. Next, the second video frame, at 9 minutes 0 seconds 20 milliseconds, is determined and an image at that time is obtained. It should be understood that the video frame at 9 minutes 0 seconds 20 milliseconds is the frame immediately following the one at 9 minutes 0 seconds.
  • In some embodiments, the images may not be acquired in the numerical order of the camera identifiers; instead, they may be acquired in sequence directly according to the arrangement order of the camera identifiers, in which case the video production request message also includes that arrangement order. Exemplarily, if the arrangement order of the camera identifiers is 2, 1, 3, 4, 5 and the start time T1 of the dynamic flow mode is 9 minutes 0 seconds 0 milliseconds, the first video frame, whose timestamp is closest to T1 (for example, 9 minutes 0 seconds), is determined in the video code stream of the camera with identifier 2, and an image at 9 minutes 0 seconds is obtained from it; the second video frame, at 9 minutes 0 seconds 20 milliseconds, is then determined in the video code stream of the camera with identifier 1, and an image at that time is obtained from it; and so on, until all the images are obtained.
  • the video frame may be decoded to obtain a corresponding image.
  • In the second case, based on the start time T2 of the static bullet time mode, one image is acquired from each of the L video code streams, to obtain L images.
  • the L images are from video frames with the same timestamp in the L video streams.
  • Exemplarily, assume T2 corresponds to a timestamp of 9 minutes 0 seconds 20 milliseconds and L is 3, i.e., the static bullet time mode corresponds to three camera identifiers, say 1, 2, and 3. Images corresponding to 9 minutes 0 seconds 20 milliseconds are then obtained from the video code streams of cameras 1, 2, and 3, respectively.
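  • By contrast with the dynamic flow sketch above, the static bullet time selection is degenerate: every camera contributes the frame at the same instant T2 (a hypothetical helper for illustration).

```python
def bullet_time_timestamps(t2_ms: float, num_cameras: int):
    # Time is frozen; only the viewpoint changes, so all cameras share T2.
    return [t2_ms] * num_cameras
```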
  • In the third case, based on the start and end time T3 of the fixed camera position mode, at least P images are acquired; the at least P images are from video frames with different timestamps in the one video code stream.
  • In the fourth case, based on the start time T1 of the dynamic flow mode or the start time T2 of the static bullet time mode, at least one image is obtained from each of the M video code streams (at least M images), and one image is obtained from each of the L video code streams (L images), for a total of at least M+L images.
  • In the fifth case, based on the start time T1 of the dynamic flow mode or the start and end time T3 of the fixed camera position mode, at least one image is obtained from each of the M video code streams (at least M images), and at least P images are obtained from the P video code streams, for a total of at least M+P images.
  • In the sixth case, based on the start time T2 of the static bullet time mode or the start and end time T3 of the fixed camera position mode, one image is obtained from each of the L video code streams (L images), and at least P images are obtained from the P video code streams, for a total of at least L+P images.
  • In the seventh case, based on the start time T1 of the dynamic flow mode, the start time T2 of the static bullet time mode, or the start and end time T3 of the fixed camera position mode, at least one image is obtained from each of the M video code streams (at least M images), one image is obtained from each of the L video code streams (L images), and at least P images are obtained from the P video code streams, for a total of at least M+L+P images.
  • It should be noted that when the video production request message includes at least two video production modes, the message may also include the sequence of the video production modes, so that when obtaining images, the wonderful moment server can start acquiring images according to the start time of the earlier video production mode, and then determine the start time of the later video production mode based on the timestamp of the video frame corresponding to the last image acquired in the earlier mode.
  • Exemplarily, suppose the video production modes include a dynamic flow mode and a static bullet time mode, the dynamic flow mode comes before the static bullet time mode, the start time corresponding to the dynamic flow mode is T1, the camera identifiers corresponding to the dynamic flow mode are 1-5, and the camera identifiers corresponding to the static bullet time mode are 5-3. The wonderful moment server can then obtain images in sequence from the video code streams of cameras 1-5, and after the dynamic flow mode ends, continue to obtain images according to the camera identifiers corresponding to the static bullet time mode.
  • It should be understood that the end time of the earlier video production mode is not necessarily the start time of the later video production mode.
  • S604 The wonderful moment server processes the acquired image to obtain a video file.
  • When the video production request message includes one video production mode, the wonderful moment server obtains one set of images corresponding to that mode; when the message includes at least two video production modes, the server obtains a set of images for each mode, for example a first set of images and a second set of images, and then processes the images of the first set and the second set together to obtain a video file.
  • the wonderful moment server may perform focus anti-shake correction on the image according to the camera's internal and external parameters obtained during the camera calibration process.
  • Exemplarily, if the video production modes are dynamic flow mode - static bullet time mode - dynamic flow mode and the track point list is 1-100, 100-50, 50-100, the wonderful moment server can obtain 200 images and then perform focus stabilization correction on those 200 images.
  • Special effects processing may also be performed on the images according to the special effects included in the wonderful moment parameters; the processed images are then encoded and finally packaged into a media file, that is, a video file.
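  • A minimal sketch of this final packaging step using OpenCV's VideoWriter; the codec, frame rate, and file name are illustrative assumptions, and a real deployment would likely use a dedicated encoder.

```python
import cv2

def write_video(images, path: str = "highlight.mp4", fps: float = 25.0):
    """Encode the corrected, effect-processed images into one media file."""
    height, width = images[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    for img in images:
        writer.write(img)  # encode each frame in display order
    writer.release()
```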
  • the wonderful moment server can generate wonderful moments in any video production mode, and the user can select the wonderful moment mode according to their own needs, so that the videos of the wonderful moments are more abundant, and the user experience can be improved.
  • the code stream captured by the camera is saved, so that the user can make a video based on the saved code stream at any time.
  • the video production request message may further include production scene parameters, where the production scene parameters are used to indicate the time of video production, such as real-time production or non-real-time production.
  • Specifically, the wonderful moment server can select, according to the production scene parameter, one of the N live code streams, N time-shift code streams, or N on-demand code streams saved during the stream recording process from which to obtain the video code streams.
  • Exemplarily, the wonderful moment server may obtain the M video code streams from the N live code streams; or it may obtain the M video code streams from the N time-shift code streams, and so on.
  • Depending on the production scene parameter, the way of obtaining the video code streams differs. Specifically, there are the following two scenarios:
  • Scenario 1: If the production scene parameter indicates that the video production is real-time production, the video code streams are obtained from the N live code streams, where the N live code streams are acquired in real time from the N cameras, and the N cameras shoot the same scene from different angles.
  • Scenario 2: If the production scene parameter indicates that the video production is non-real-time production, the video code streams are obtained from the N time-shift code streams or the N on-demand code streams, where the N on-demand or time-shift code streams are obtained by saving the N live code streams.
  • if the production scene parameter indicates real-time video production, and the video production mode is the first video production mode, the wonderful moment server may acquire the M video streams from the N live code streams; if the production scene parameter indicates non-real-time video production, and the video production mode is the first video production mode, the wonderful moment server may acquire the M video streams from the N on-demand code streams or the N time-shift code streams.
  • if the video is to be produced in real time, the video streams can be obtained from the live code streams; if the video is to be produced from content shot before the current time while shooting is still in progress, the video streams can be obtained from the time-shift code streams; and if the video is to be produced after shooting has finished, the video streams can be obtained from the on-demand code streams.
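As a hedged illustration of this three-way choice, the sketch below maps a production scene parameter onto a stream source. The parameter values and the dictionary shapes are assumptions, not the actual message fields.

```python
def select_source(scene_param, live, time_shift, on_demand):
    """Pick the N code streams to draw video streams from, keyed by camera ID."""
    if scene_param == "real_time":   # produce while shooting is in progress
        return live
    if scene_param == "time_shift":  # earlier content, shooting still ongoing
        return time_shift
    return on_demand                 # production after shooting has finished

# e.g. non-real-time production from saved streams
source = select_source("time_shift", live={}, time_shift={}, on_demand={})
```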
  • the saved code streams can be used to produce the video, so that when the user is not satisfied with the last video produced, the video can be re-created based on the saved code streams, thereby improving the user experience.
  • FIG. 8 is a flowchart of another video production method provided by an embodiment of the present application. Referring to FIG. 8, the method may include the following steps:
  • Step 1 The user creates a calibration task and triggers a calibration request message.
  • Step 2 The calibration server triggers the camera to take pictures.
  • there may be a plurality of cameras.
  • Step 3 The calibration server obtains the photos from the cameras.
  • Step 4 The calibration server runs the calibration algorithm to obtain the internal and external parameters of the camera.
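The patent does not name the calibration algorithm. As one plausible sketch, OpenCV's chessboard calibration yields exactly the internal parameters (camera matrix, distortion coefficients) and external parameters (rotation and translation vectors) described in step 4; the chessboard pattern size is an assumption.

```python
import cv2
import numpy as np

def calibrate(photos, pattern=(9, 6)):
    """photos: list of BGR images of a chessboard, as fetched in step 3."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)
    obj_points, img_points, size = [], [], None
    for photo in photos:
        gray = cv2.cvtColor(photo, cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
    # intrinsics: camera matrix and distortion; extrinsics: per-view rvecs/tvecs
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, size, None, None)
    return mtx, dist, rvecs, tvecs
```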
  • Step 5 The calibration server uploads the camera's internal and external parameters to the wonderful moment server.
  • After step 5, the user can view the calibration result.
  • steps 1 to 5 are the process of camera calibration.
  • Step 6 The user creates a camera streaming access task and triggers a camera streaming access request message.
  • Step 7 The camera stream access server triggers the cameras to start shooting.
  • Step 8 The camera stream access server obtains the code streams from the cameras.
  • Step 9 The camera stream access server converts the code streams into a unified format and synchronizes the time of the code streams.
  • Step 10 The camera stream access server caches the code streams.
  • steps 6 to 10 are the process of code stream synchronization.
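A minimal sketch of the time synchronization in steps 9-10, assuming each converted stream carries a wall-clock start time and a list of packets with presentation timestamps; these field names are illustrative, not the actual stream format.

```python
def synchronize(streams):
    """streams: dict of camera_id -> {"wall_clock_start": float, "packets": [{"pts": float}]}."""
    base = min(s["wall_clock_start"] for s in streams.values())
    for s in streams.values():
        offset = s["wall_clock_start"] - base
        for pkt in s["packets"]:
            pkt["pts"] += offset  # equal pts now means the same real-world instant
    return streams  # ready to be cached in step 10
```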
  • Step 11 The user creates a recording task and triggers a recording request message.
  • Step 12 The recording server obtains the buffered code stream from the camera stream access server.
  • Step 13 The recording server saves the code streams, and enables the live broadcast, time-shift, and on-demand services.
  • After step 13, the user can view the recording status of the code streams so that any abnormality can be detected.
  • steps 11 to 13 are the process of recording the code streams.
  • Step 14 The user configures the parameters of the wonderful moment, creates a production task, and triggers a production request message.
  • Step 15 The wonderful moment server obtains the code stream from the recording server according to the identification of the camera.
  • Step 16 The wonderful moment server extracts video frames from the code stream according to the initial time included in the production request message, and decodes the video frames to obtain images.
  • Step 17 The wonderful moment server corrects the image according to the internal and external parameters of the camera uploaded by the calibration server.
  • Step 18 The wonderful moment server performs special effects processing on the image.
  • Step 19 The wonderful moment server encodes the processed image and encapsulates it into a file.
  • the wonderful moment server may display prompt information on the terminal device to remind the user that the video production is completed.
  • steps 14 to 19 are the process of video production.
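The sketch below strings steps 15-19 together with OpenCV stand-ins. The file-path mapping, the fourcc choice, the fixed frame rate, and the omission of the special-effects step are all assumptions of this illustration, not the servers' actual implementation.

```python
import cv2

def produce_highlight(plan, stream_paths, calib, out_path, fps=25.0):
    """plan: (camera_id, timestamp-in-seconds) pairs; calib: camera_id -> (mtx, dist)."""
    writer = None
    for cam_id, ts in plan:
        cap = cv2.VideoCapture(stream_paths[cam_id])   # step 15: code stream by camera ID
        cap.set(cv2.CAP_PROP_POS_MSEC, ts * 1000.0)    # seek to the target timestamp
        ok, frame = cap.read()                         # step 16: extract and decode a frame
        cap.release()
        if not ok:
            continue
        mtx, dist = calib[cam_id]
        frame = cv2.undistort(frame, mtx, dist)        # step 17: correct with the calibration
        # step 18 (special effects) is omitted in this sketch
        if writer is None:                             # step 19: encode and encapsulate
            h, w = frame.shape[:2]
            writer = cv2.VideoWriter(
                out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
        writer.write(frame)
    if writer is not None:
        writer.release()
```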
  • the present application further provides a video production system.
  • the video production system 900 may include: a receiving module 901, an obtaining module 902, and a production module 903.
  • the receiving module 901 is configured to receive a video production request message, where the video production request message includes an identification of a first video production mode, a first time and identifications of M cameras corresponding to the first video production mode;
  • the M is an integer greater than 1;
  • the acquiring module 902 is configured to acquire at least M images based on the identifiers of the M cameras received by the receiving module 901 and the first time, where the at least M images are obtained by the M cameras shooting the same scene from different angles at different times;
  • the production module 903 is configured to produce a first video file, where the first video file includes the at least M images.
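Purely as a structural sketch of these three modules; the class name, method bodies, and the per-camera time step are illustrative assumptions.

```python
class VideoProductionSystem:
    def receive(self, request):                        # receiving module 901
        self.first_time = request["first_time"]
        self.camera_ids = request["camera_ids"]        # M identifiers, M > 1

    def acquire(self, frame_interval=1.0 / 25):        # obtaining module 902
        # one image per camera, each at a different time and from a different angle
        return [self._image(cam, self.first_time + i * frame_interval)
                for i, cam in enumerate(self.camera_ids)]

    def produce(self, images):                         # production module 903
        return {"file": "first_video.mp4", "images": images}

    def _image(self, camera_id, timestamp):            # placeholder for the real frame fetch
        return (camera_id, timestamp)
```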
  • the system further includes: a determining module 904, configured to determine M video code streams based on the identifiers of the M cameras; the M video code streams are respectively obtained based on the M cameras shooting the same scene from different angles; the target timestamp of each video code stream in the M video code streams is related to the first time.
  • the obtaining module 902 is configured to obtain the M images based on the target timestamp of each video code stream in the M video code streams; the target timestamps of the M video code streams are different from one another.
  • the target timestamp of each of the M video code streams is related to the preset sequence of the M cameras.
  • the target timestamp of each of the M video code streams is related to the sequence of the identifications of the M cameras.
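Both relationships can be pictured with a small helper that assigns each camera's stream a target timestamp derived from the first time and the camera order (whether a preset sequence or the order of the identifiers); the frame interval is an assumed parameter.

```python
def target_timestamps(first_time, ordered_camera_ids, frame_interval=1.0 / 25):
    """Order may come from a preset sequence or from the sequence of identifiers."""
    return {cam: first_time + i * frame_interval
            for i, cam in enumerate(ordered_camera_ids)}

# each of the M streams gets a different target timestamp
ts = target_timestamps(10.0, [1, 2, 3, 4, 5])
```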
  • the obtaining module 902 is specifically configured to obtain the M images based on the target timestamp of each of the M video code streams in the following manner: extracting, from each of the M video code streams, the image corresponding to the target timestamp of that video code stream.
  • the video production request message further includes the identifier of the second video production mode and the identifiers of L cameras corresponding to the second video production mode; the L is an integer greater than 1.
  • the acquiring module 902 is further configured to acquire L images based on the identifiers of the L cameras, where the L images are obtained by shooting the same scene from different angles at the same moment by the L cameras;
  • the producing module 903 is further configured to produce a second video file, where the second video file includes the at least M images and the L images.
  • the determining module 904 is specifically configured to determine the M video code streams based on the identifiers of the M cameras as follows:
  • the M video code streams are determined from the N video code streams based on the identifiers of the M cameras; the N code streams are obtained based on the N cameras shooting the same scene from different angles; the N cameras include the M cameras.
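In code, this selection is no more than indexing the N streams by the M camera identifiers; the dictionary shape is an assumption of this sketch.

```python
def pick_streams(all_streams, m_camera_ids):
    """all_streams: camera_id -> code stream for each of the N cameras."""
    return {cam: all_streams[cam] for cam in m_camera_ids}
```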
  • the video production request message further includes production scene parameters;
  • the N code streams are one of N live code streams, N time-shift code streams, or N on-demand code streams.
  • the determining module 904 is specifically configured to determine the M video streams based on the identifiers of the M cameras as follows:
  • if the production scene parameters indicate that the video production is real-time production, the M video code streams are determined from the N live code streams; the N live code streams are obtained in real time from the N cameras.
  • if the production scene parameters indicate that the video production is non-real-time production, the M video code streams are determined from the N time-shift code streams or the N on-demand code streams; the N on-demand code streams or N time-shift code streams are obtained by saving the N live code streams.
  • the division of modules in the embodiments of the present application is schematic and is only a logical function division; in actual implementation, there may be other division methods.
  • the functional modules in the various embodiments of the present application may be integrated into one processing module, may exist physically alone, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, or in the form of software function modules.
  • FIG. 10 shows a video production apparatus 1000 provided by an embodiment of the present application.
  • the video production apparatus 1000 includes at least one processor 1002, configured to implement, or to support the video production apparatus 1000 in implementing, the video production method provided by the embodiments of the present application.
  • for example, the processor 1002 may acquire at least M images based on the identifiers of the M cameras and the first time, where the at least M images are obtained by the M cameras shooting the same scene from different angles at different times, and produce a first video file that includes the at least M images.
  • the video production apparatus 1000 may also include at least one memory 1001 for storing program instructions.
  • Memory 1001 and processor 1002 are coupled.
  • the coupling in the embodiments of the present application is an indirect coupling or communication connection between devices, units or modules, which may be in electrical, mechanical or other forms, and is used for information exchange between devices, units or modules.
  • the processor 1002 may cooperate with the memory 1001 .
  • The processor 1002 may execute the program instructions stored in the memory 1001 and/or process the data stored in the memory 1001. At least one of the at least one memory may be included in the processor.
  • the video production apparatus 1000 may also include a communication interface 1003 for communicating with other devices through a transmission medium.
  • the processor 1002 can use the communication interface 1003 to send and receive data.
  • the present application does not limit the specific connection medium between the above-mentioned communication interface 1003, the processor 1002, and the memory 1001.
  • the memory 1001, the processor 1002, and the communication interface 1003 are connected through a bus 1004, and the bus is represented by a thick line in FIG. 10.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of presentation, only one thick line is used in FIG. 10, but it does not mean that there is only one bus or one type of bus.
  • the processor 1002 may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logic block diagrams disclosed in the embodiments of the present application.
  • a general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the methods disclosed in conjunction with the embodiments of the present application may be directly executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • the memory 1001 may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or a volatile memory, such as a random-access memory (RAM).
  • The memory may also be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • the memory in this embodiment of the present application may also be a circuit or any other device capable of implementing a storage function, for storing program instructions.
  • the computer-executed instructions in the embodiment of the present application may also be referred to as application code, which is not specifically limited in the embodiment of the present application.
  • Embodiments of the present application further provide a computer-readable storage medium, including instructions, which, when executed on a computer, cause the computer to execute the method of the foregoing embodiment.
  • Embodiments of the present application also provide a computer program product, including instructions, which, when executed on a computer, cause the computer to execute the methods of the above embodiments.
  • the embodiment of the present application further provides a chip, and the logic in the chip is used to execute the method of the above embodiment.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Studio Devices (AREA)

Abstract

A video production method and system. The method comprises: receiving a video production request message, the video production request message comprising an identifier of a first video production mode, a first time, and identifiers of M cameras corresponding to the first video production mode, M being an integer greater than 1; acquiring at least M images on the basis of the identifiers of the M cameras and the first time, the at least M images being obtained by the M cameras photographing the same scene from different angles at different times; and producing a first video file, the first video file comprising the at least M images. By means of the solution of the present application, images at different times can be obtained, and a video can then be produced from the images at different times. Compared with obtaining photos at a fixed moment in the prior art, the flexibility of video production can be improved, and the user experience is enhanced.
PCT/CN2022/074917 2021-03-31 2022-01-29 Procédé et système de production de vidéo WO2022206168A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110351079.2 2021-03-31
CN202110351079.2A CN115150563A (zh) 2021-03-31 2021-03-31 一种视频制作方法及系统

Publications (1)

Publication Number Publication Date
WO2022206168A1 true WO2022206168A1 (fr) 2022-10-06

Family

ID=83403890

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/074917 WO2022206168A1 (fr) 2021-03-31 2022-01-29 Procédé et système de production de vidéo

Country Status (2)

Country Link
CN (1) CN115150563A (fr)
WO (1) WO2022206168A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050286759A1 (en) * 2004-06-28 2005-12-29 Microsoft Corporation Interactive viewpoint video system and process employing overlapping images of a scene captured from viewpoints forming a grid
EP2161925A2 (fr) * 2008-09-07 2010-03-10 Sportvu Ltd. Procédé et système pour fusionner des flux vidéo
CN105227837A (zh) * 2015-09-24 2016-01-06 努比亚技术有限公司 一种图像合成方法和装置
CN106131581A (zh) * 2016-07-12 2016-11-16 上海摩象网络科技有限公司 混合图像的全景视频制作技术
WO2019183235A1 (fr) * 2018-03-21 2019-09-26 Second Spectrum, Inc. Procédés et systèmes de reconnaissance de motif spatiotemporel pour développement de contenu vidéo
CN111475675A (zh) * 2020-04-07 2020-07-31 深圳市超高清科技有限公司 视频处理系统
CN111475676A (zh) * 2020-04-07 2020-07-31 深圳市超高清科技有限公司 视频数据处理方法、系统、装置、设备及可读存储介质

Also Published As

Publication number Publication date
CN115150563A (zh) 2022-10-04

Similar Documents

Publication Publication Date Title
US11381739B2 (en) Panoramic virtual reality framework providing a dynamic user experience
US11348202B2 (en) Generating virtual reality content based on corrections to stitching errors
US10003741B2 (en) System for processing data from an omnidirectional camera with multiple processors and/or multiple sensors connected to each processor
WO2017181777A1 (fr) Procédé, dispositif, système de diffusion en continu de vidéo en direct panoramique, et appareil de commande de source de vidéo
US7777692B2 (en) Multi-screen video reproducing system
WO2021147702A1 (fr) Procédé et appareil de traitement vidéo
US7145947B2 (en) Video data processing apparatus and method, data distributing apparatus and method, data receiving apparatus and method, storage medium, and computer program
US20150124048A1 (en) Switchable multiple video track platform
JP2020519094A (ja) ビデオ再生方法、デバイス、およびシステム
US20150208103A1 (en) System and Method for Enabling User Control of Live Video Stream(s)
JP2022536182A (ja) データストリームを同期させるシステム及び方法
US11089073B2 (en) Method and device for sharing multimedia content
CN112219403B (zh) 沉浸式媒体的渲染视角度量
TWI786572B (zh) 沉浸式媒體提供方法、獲取方法、裝置、設備及存儲介質
US20140281011A1 (en) System and method for replicating a media stream
US20160241794A1 (en) Helmet for shooting multi-angle image and shooting method
JP2019514313A (ja) レガシー及び没入型レンダリングデバイスのために没入型ビデオをフォーマットする方法、装置、及びストリーム
CN109756744B (zh) 数据处理方法、电子设备及计算机存储介质
CN107835435B (zh) 一种赛事宽视角直播设备和相关联的直播系统和方法
US20190028776A1 (en) Av server and av server system
WO2022206168A1 (fr) Procédé et système de production de vidéo
KR101542416B1 (ko) 멀티앵글영상서비스 제공 방법 및 시스템
US20180227504A1 (en) Switchable multiple video track platform
CN114666565B (zh) 多视角视频播放方法、装置及存储介质
CN111918092B (zh) 视频流处理方法、装置、服务器及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22778356

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22778356

Country of ref document: EP

Kind code of ref document: A1