WO2022143374A1 - Media playing method and electronic device - Google Patents


Info

Publication number
WO2022143374A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
media file
file
media
electronic device
Application number
PCT/CN2021/140717
Other languages
French (fr)
Chinese (zh)
Inventor
Wang Can
Ma Minggang
Original Assignee
Huawei Technologies Co., Ltd.
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2022143374A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/439: Processing of audio elementary streams
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/47: End-user applications
    • H04N21/475: End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N21/485: End-user interface for client configuration

Definitions

  • the present application relates to the technical field of video processing, and in particular, to a media playback method and electronic device.
  • the common video file formats are MP4, AVI, etc.
  • video files in these formats support playing only one video at a time and cannot play multiple videos simultaneously, so they cannot meet users' needs to watch from multiple viewing angles. For example, in a live game scenario, video files in existing formats cannot play the real-time game video and a highlight replay video at the same time. When a highlight replay video is playing, if the game is still in progress, the user is likely to miss some real-time game footage, and the viewing experience is poor.
  • the present application provides a media playback method and an electronic device, so that the electronic device can play multiple video channels at the same time, so as to satisfy the user's requirement for watching multiple channels of videos at the same time, and improve the user experience.
  • a first aspect provides a playback method according to an embodiment of the present application, applied to an electronic device, and specifically including: the electronic device detects a first user input on a first media file or a first control; in response, the electronic device plays the first media file; when the first media file is played to a first position, the electronic device plays a second media file and continues to play the first media file; wherein the first position is preset.
  • the electronic device can play multiple media files at the same time. Therefore, in the case where the media file is a video file, it is helpful to meet the user's requirement of being able to watch videos from multiple viewing angles at the same time, thereby improving the user experience.
  • the electronic device includes a display screen; the first media file and the second media file are played on the display screen; both the first media file and the second media file are video files. Thereby, it is convenient for users to watch.
  • the electronic device includes a speaker; the first media file and the second media file are played on the speaker; and both the first media file and the second media file are audio files.
  • the electronic device is connected to the headset; the first media file and the second media file are played on the headset; both the first media file and the second media file are audio files.
  • the electronic device includes a display screen and a speaker, and the first media file is played on the display screen; the first media is a video file; the second media file is played on the speaker; and the second media file is an audio file.
  • the electronic device includes a display screen, and the electronic device is connected to a headset, and the first media file is played on the display screen; the first media file is a video file; the second media file is played on the headset; The second media file is an audio file.
  • the electronic device includes a display screen, and the electronic device is connected to a headset, and the first media file is played on the headset; the first media file is an audio file; the second media file is played on the display screen; The second media file is a video file.
  • the electronic device includes a display screen and a speaker, and the first media file is played on the speaker; the first media file is an audio file; the second media file is played on the display screen; the second media file is a video file.
  • the electronic device may play the second media file and continue to play the first media file when the first media file is played to the first position in the following manner:
  • the display screen of the electronic device is divided into a first screen and a second screen, the second media file is played on the second screen, and the first media file is continued to be played on the first screen;
  • both the first media file and the second media file are video files. So as to facilitate implementation.
  • the electronic device may play the second media file and continue to play the first media file when the first media file is played to the first position based on the following methods:
  • both the first media file and the second media file are video files. This is not only convenient to implement, but also reduces the impact on viewing the first picture corresponding to the first media file.
  • the first position includes one of the following: a preset playback time point in the first media file, a preset playback frame in the first media file, and a preset playback ratio of the first media file.
  • the first position is preset by the user. Thereby, the flexibility of setting the first position is improved.
  • the first media file is associated with extension information of the first media file;
  • the extension information of the first media file includes a first media file identifier;
  • the first media file identifier is used to identify the first media file;
  • the first media file extension information is associated with the second media file extension information;
  • the second media file extension information includes the second media file identifier and the second media file playback position identifier;
  • the second media file identifier is used to identify the second media file;
  • the second media file playback position identifier is used to identify first playback position information of the first media file when the second media file starts to play;
  • the first playback position information includes the first position.
  • the first media file includes media extension information
  • the media extension information includes first media file extension information and second media file extension information.
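The association between master and slave extension information described above can be sketched as simple in-memory records. The class and field names below are illustrative assumptions for exposition; the application does not fix a concrete layout here.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SlaveExtensionInfo:
    media_file_id: str      # identifies the slave media file
    playback_position: int  # position in the master at which the slave starts

@dataclass
class MasterExtensionInfo:
    media_file_id: str      # identifies the master media file
    slaves: List[SlaveExtensionInfo] = field(default_factory=list)

# The master's extension information is associated with the slave's:
ext = MasterExtensionInfo(
    media_file_id="main",
    slaves=[SlaveExtensionInfo(media_file_id="replay", playback_position=10)],
)
```

A player that reaches position 10 in the master would then look up `ext.slaves` to find which slave file to start.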
  • after the second media file finishes playing, the electronic device continues to play the first media file; that is, the electronic device then plays only the first media file.
  • when the first media file is played to a second position, the electronic device plays a third media file and continues to play the first media file; wherein the second position is preset, and the second position is after the second media file has finished playing.
  • the second position includes one of the following: a preset playback time point in the first media file, a preset playback frame in the first media file, and a preset playback ratio of the first media file.
  • the first media file extension information is also associated with third media file extension information;
  • the third media file extension information includes a third media file identifier and a third media file playback position identifier;
  • the third media file identifier is used to identify the third media file;
  • the third media file playback position identifier is used to identify the second playback position information of the first media when the third media starts to play;
  • the second playback position information includes the second position.
  • the first media file further includes extension information of the third media file.
  • the extension information of the first media file is first multimedia multiway information mmmw, that is, the first mmmw;
  • the extension information of the second media file is the second mmmw;
  • the extension information of the third media file is the third mmmw;
  • the first mmmw and the stsd, stts, stsc, stsz, stss, and stco corresponding to the first mmmw are located in a first sample table box stbl; the second mmmw and the stsd, stts, stsc, stsz, stss, and stco corresponding to the second mmmw are located in a second stbl; and the third mmmw and the stsd, stts, stsc, stsz, stss, and stco corresponding to the third mmmw are located in a third stbl.
  • a second aspect is a method for acquiring a video file according to an embodiment of the present application, applied to an electronic device, where the electronic device includes a display screen; specifically, it includes:
  • the display screen displays a first interface of the first application;
  • the first interface includes a master video file setting control, a master video file preview box, a first slave video file setting control, a first slave video file preview box, a first association setting control, and a finish control;
  • in response to a first user input on the master video file setting control, the display screen displays a second interface of the first application; the second interface includes a first video file and a second video file;
  • the main video file preview box displays a preview static image or a preview dynamic image of the first video file
  • the electronic device displays the second interface in response to a third user input to the first slave video file settings control
  • the first slave video file preview frame displays a preview static picture or a preview dynamic picture of the second video file
  • in response to a fifth user input on the first association setting control, the display screen displays an association setting box; the association setting box is used to set that, when the master video file is played to the first position, the first slave video file starts playing, and playing the first slave video file does not pause or stop the playback of the master video file; the association setting box includes a first position input box, a first confirmation control, and a second confirmation control;
  • after receiving a sixth user input in the first position input box and receiving a seventh user input on the first confirmation control, the first position is set;
  • the third video file is acquired.
  • users can integrate multiple video files into one video file according to their own needs, so that the electronic device can play multiple videos simultaneously based on one video file.
  • the electronic device in response to the operation on the third video file, may play the first video file; when the first video file is played to the first position, the electronic device plays the second video file, and Continue to play the first video file.
  • a third aspect provides an electronic device according to an embodiment of the application; the electronic device includes modules/units for executing the method of the above first aspect or any possible design of the first aspect; these modules/units may be implemented by hardware, or by hardware executing corresponding software.
  • a fourth aspect provides an electronic device according to an embodiment of the present application; the electronic device includes modules/units for performing the method of the above second aspect or any possible design of the second aspect; these modules/units may be implemented by hardware, or by hardware executing corresponding software.
  • a fifth aspect is a media playback device according to an embodiment of the application
  • the media playback device includes a memory, a processor, and a computer program
  • the computer program is stored in the memory, and when the computer program is executed by the processor, the media playback apparatus is caused to execute the technical solutions of the first aspect of the embodiments of the present application and any possible design of the first aspect.
  • the media playback device is an electronic device, or the media playback device is a chip.
  • a sixth aspect provides a media playback device according to an embodiment of the application; the media playback device includes a memory, a processor, and a computer program; the computer program is stored in the memory, and when the computer program is executed by the processor, the media playback device is caused to execute the technical solutions of the second aspect of the embodiments of the present application and any possible design of the second aspect.
  • the media playback device is an electronic device, or the media playback device is a chip.
  • a seventh aspect is a computer-readable storage medium according to an embodiment of the present application.
  • the computer-readable storage medium includes a computer program.
  • when the computer program runs on an electronic device, the electronic device is caused to perform the technical solutions of the above first aspect and any possible design of the first aspect.
  • An eighth aspect is a computer-readable storage medium according to an embodiment of the application, the computer-readable storage medium includes a computer program, and when the computer program runs on an electronic device, the electronic device is made to perform the above-mentioned second aspect and any possible design technical solutions of the second aspect thereof.
  • a ninth aspect is a computer program product according to an embodiment of the present application, which, when running on a computer, enables the computer to execute the technical solutions of the first aspect and any possible designs of the first aspect.
  • a tenth aspect is a computer program product according to an embodiment of the present application, which, when running on a computer, causes the computer to execute the technical solutions of the second aspect and any possible designs of the second aspect.
  • FIG. 1 is a schematic structural diagram of a video file according to an embodiment of the application.
  • FIG. 2 is a schematic structural diagram of a kind of video extension information according to an embodiment of the application.
  • FIG. 3 is a schematic diagram of a network architecture of an on-demand scenario according to an embodiment of the present application.
  • FIG. 4A is a schematic structural diagram of a video processing device according to an embodiment of the application.
  • FIG. 4B is a schematic structural diagram of a video playback device according to an embodiment of the application.
  • FIG. 5 is a schematic diagram of an interface for video file integration according to an embodiment of the present application.
  • FIG. 6 is another interface schematic diagram for video file integration according to an embodiment of the present application.
  • FIG. 7 is another interface schematic diagram for video file integration according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a video file storage location interface according to an embodiment of the application.
  • FIG. 9A is a schematic diagram of another interface for video file integration according to an embodiment of the present application.
  • FIG. 9B is a schematic diagram of another interface for video file integration according to an embodiment of the present application.
  • FIG. 10 is a schematic interface diagram of a video playback according to an embodiment of the application.
  • FIG. 11 is a schematic interface diagram of another video playback according to an embodiment of the application.
  • FIG. 12 is a schematic interface diagram of another video playback according to an embodiment of the application.
  • FIG. 13 is a schematic diagram of another interface according to an embodiment of the application.
  • FIG. 14 is a schematic diagram of a network architecture of a live broadcast scenario according to an embodiment of the application.
  • FIG. 15A is a schematic diagram of a playback layout according to an embodiment of the application.
  • FIG. 15B is a schematic diagram of another playback layout according to an embodiment of the present application.
  • FIG. 16 is a schematic interface diagram of another video playback according to an embodiment of the application.
  • FIG. 17 is a schematic structural diagram of a video playback apparatus according to an embodiment of the present application.
  • references in this specification to "one embodiment" or "some embodiments" and the like mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases "in one embodiment", "in some embodiments", "in other embodiments", etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically emphasized otherwise.
  • the terms "including", "having", and their variants mean "including but not limited to" unless specifically emphasized otherwise.
  • the term “connected” includes both direct and indirect connections unless otherwise specified. "First” and “second” are only for descriptive purposes, and cannot be understood as indicating or implying relative importance or implying the number of indicated technical features.
  • words such as "exemplarily" or "for example" are used to represent examples, illustrations, or descriptions. Any embodiment or design described in the embodiments of the present application as "exemplary" or "for example" should not be construed as being preferred over, or more advantageous than, other embodiments or designs. Rather, the use of words such as "exemplarily" or "for example" is intended to present the related concepts in a specific manner.
  • the present application provides a media playback method, which enables an electronic device to simultaneously play multiple media, such as a first media and a second media, according to one media file.
  • when the media is video, satisfying the user's need to watch multiple channels of video at the same time helps to improve the user experience.
  • the media may be video, audio, images, animations, etc.
  • the media files may be files such as video files, audio files, image files, and the like.
  • the following describes the media playback method of the embodiment of the present application by taking the media file as a video file as an example.
  • the media file is an audio file or an image file, reference may be made to the related introduction of the video file.
  • the video file can support the simultaneous playback of multiple videos, so as to satisfy the user's requirement for watching multiple videos at the same time.
  • the video extension information is used to indicate the association relationship of at least two channels of video.
  • the video extension information of the video file is used to indicate the association between the video of the real-time game and the video of the highlight replay.
  • the video playback device can simultaneously play the video of the real-time game and the video of the highlight replay based on the video file.
  • the video extension information is used to indicate the association relationship between the first channel of video and the second channel of video.
  • the video extension information includes extension information of the first video channel and extension information of the second video channel.
  • the extended information of the first channel video includes a main video identifier; the main video identifier is used to identify the first channel video as the main video.
  • the extended information of the second channel video includes the slave video identifier and the associated identifier played from the slave video.
  • the slave video identifier is used to indicate that the second video is a slave video; the associated identifier of the slave video playback can also be called a slave video playback position identifier, which is used to indicate that the slave video starts to play when the master video is played to the first playback position.
  • the first playback position may be marked by a frame, a time, a ratio, etc.
  • when the first playback position is a frame, it may be set that the slave video starts playing when the master video is played to the 10th frame; when the first playback position is a time, it may be set that the slave video starts playing when the master video is played to 1 minute 20 seconds; when the first playback position is a ratio, it may be set that the slave video starts playing when the master video is played to 1/3 of its duration.
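The three ways of marking the first playback position can all be reduced to a frame index for the player. A minimal Python sketch, assuming the master's frame rate and total frame count are known (the defaults are illustrative, not values fixed by the text):

```python
def resolve_first_position(kind, value, fps=30, total_frames=3000):
    """Map a preset 'first position' (frame, time, or ratio) to a frame index."""
    if kind == "frame":   # e.g. start the slave video at the master's 10th frame
        return round(value)
    if kind == "time":    # e.g. 1 min 20 s -> 80 s * 30 fps = frame 2400
        return round(value * fps)
    if kind == "ratio":   # e.g. 1/3 of a 3000-frame master -> frame 1000
        return round(value * total_frames)
    raise ValueError(f"unknown position kind: {kind}")
```

With the examples above: `resolve_first_position("frame", 10)` gives 10, `("time", 80)` gives 2400, and `("ratio", 1/3)` gives 1000.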
  • the video extension information may be used to indicate the association relationship of multiple video channels.
  • multiple slave videos may be referred to as first slave video, second slave video...etc.
  • only part of the slave videos may have a direct relationship with the master video, and the remaining slave videos may have a direct relationship with the above-mentioned part of the slave videos.
  • the video extension information is used to indicate the association relationship between the first channel of video, the second channel of video, and the third channel of video.
  • the video extension information includes extension information of the first video channel, extension information of the second video channel and extension information of the third video channel.
  • the extended information of the first video includes the main video identifier
  • the extension information of the second channel video includes the slave video identifier and the associated identifier of the second slave video's playback;
  • the extension information of the third channel video includes the slave video identifier and the associated identifier of the third slave video's playback;
  • the associated identifier of the second slave video's playback is used to identify that, when the master video is played to the first playback position, the second slave video starts playing.
  • the associated identifier of the third slave video's playback is used to identify that, when the second slave video is played to the second playback position, the third slave video starts playing. It can be understood that the associated identifier of the third slave video's playback can also be used to identify that, when the master video is played to a third playback position, the third slave video starts playing. It should be noted that the first playback position and the third playback position may be the same or different; this is not limited.
  • the video extension information can also be used to indicate the association relationship between the multiple channels of slave videos and the master video respectively.
  • the video extension information is used to indicate the relationship between the first channel of video and the second channel of video, and the relationship between the first channel of video and the third channel of video.
  • the extension information of the second channel video includes the slave video identifier and the associated identifier of the slave video's playback; the associated identifier of the slave video's playback is used to identify the playback position of the master video when the slave video starts to play.
  • for example, the associated identifier of the slave video's playback may be a frame number (e.g., a sample index) of the master video. That is, the extension information of the second channel video includes a master video frame number; when the master video frame number identifies the Nth frame of the master video, where N is a positive integer greater than or equal to 1, the slave video starts playing when the master video is played to the Nth frame.
  • the Nth frame of the first channel video is played simultaneously with the 1st frame of the second channel video, the (N+1)th frame of the first channel video with the 2nd frame of the second channel video, the (N+2)th frame with the 3rd frame, and so on. When the second channel video finishes playing, it is no longer played; if the first channel video has not yet ended, the first channel video continues to play.
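The frame alignment just described amounts to a small lookup: given the master frame currently displayed, return the slave frame (if any) to present alongside it. Function and parameter names are illustrative.

```python
def slave_frame_for(master_frame, n_start, slave_len):
    """Return the 1-based slave frame shown with master_frame, or None.

    The slave's 1st frame accompanies the master's n_start-th frame, its 2nd
    frame the (n_start+1)-th, and so on until the slave's last frame.
    """
    k = master_frame - n_start + 1
    return k if 1 <= k <= slave_len else None
```

Once the result is `None` past the slave's end, only the master continues to play, matching the behavior described above.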
  • the associated identifier for playing the slave video may also use other information to identify the playback position of the master video when the slave video starts to be played.
  • for example, the starting playback time of a certain frame in the master video, the starting playback time of a certain frame in the slave video, the frame identifier of the slave video, etc.; this is not limited.
  • Video files are divided into two types based on whether the simultaneous playback of at least two channels of video is supported, which are the first type of video files and the second type of video files respectively.
  • the first type of video file can only support playing one video.
  • the second type of video file supports playing at least two channels of video at the same time.
  • the first type of video file does not include video extension information, and reference may be made to the regulations on video files in the existing protocol, which will not be repeated here.
  • the second type of video file includes video extension information.
  • the second type of video file in this embodiment of the present application may include video metadata, video data, and audio data.
  • the video metadata is description information of the video data and the audio data, and is used for indexing to the corresponding video data and audio data.
  • the video metadata includes video extension information.
  • the video extension information is used to describe the association relationship of multiple video channels.
  • the video metadata may further include the video type (main video, or secondary video) and the like.
  • Video data is used to describe the images in the video and can be a sequence of images. For a certain second type of video file, it may include video data of one or more channels of video.
  • Audio data is used to describe the sound in the video and can be understood as digitized sound data.
  • audio data of one or more channels of video may also be included.
  • a second-type video file includes video metadata, video data of N-channel video, and audio data of M-channel video.
  • N and M are positive integers, and the values of N and M may be the same or different.
  • the values of N and M are the same, and the audio data of the M-channel video corresponds to the video data of the N-channel video respectively.
  • the values of N and M are different, N is greater than 1, and M is equal to 1.
  • in this case, the audio data in the video file may correspond to the video data of a certain channel among the N channels of video, such as the video data of the master video, or the video data of a certain slave video.
  • the audio data of the main video may include audio data of one or more audio tracks.
  • for example, the audio data of the main video includes audio data of two audio tracks: the audio data of one audio track is used to describe the sound in one language (such as Chinese) in the video, and the audio data of the other audio track is used to describe the sound in another language (such as English).
  • when playing the main video, the user can choose to play the Chinese or English sound as needed.
  • an MP4 file includes ftyp (file type box), mdat (media data box), and moov (movie box).
  • ftyp includes video file format information, where the video file format information included in ftyp is used to identify the video file format as MP4.
  • mdat is used to store media data.
  • video data and audio data in the embodiments of the present application may be referred to as media data.
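The top-level boxes named above (ftyp, mdat, moov) share a common header: a 4-byte big-endian size (which includes the 8-byte header itself) followed by a 4-byte type code. A minimal walker, simplified to ignore the 64-bit "largesize" variant:

```python
import struct

def list_boxes(data: bytes):
    """Return (type, size) pairs for the top-level boxes in an MP4 byte blob."""
    boxes, offset = [], 0
    while offset + 8 <= len(data):
        size, btype = struct.unpack_from(">I4s", data, offset)
        if size < 8:
            break  # malformed box; stop walking
        boxes.append((btype.decode("ascii"), size))
        offset += size
    return boxes

# Tiny synthetic example: an 8-byte 'ftyp' box followed by a 16-byte 'moov' box.
blob = struct.pack(">I4s", 8, b"ftyp") + struct.pack(">I4s", 16, b"moov") + b"\x00" * 8
```

Here `list_boxes(blob)` yields `[("ftyp", 8), ("moov", 16)]`; a real file would continue recursively into moov's sub-boxes.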
  • video data and audio data may be divided based on chunk-samples, respectively, to facilitate searching.
  • mdat stores media data in chunks.
  • a chunk may include one or more samples, and a sample is the smallest storage unit of media data (video data or audio data).
  • moov contains video metadata.
  • moov is a container box that contains video metadata that can be interpreted by corresponding sub-boxes.
  • moov includes mvhd (movie header box) and at least one trak (track box).
  • mvhd is used to store information such as creation time, modification time, duration, and recommended playback rate of the MP4 file.
  • trak is used to store related description information of video data or audio data.
  • the trak is a container box, which may include tkhd (track header box) and mdia (media box), etc.
  • tkhd contains the overall information of the media data (audio data or video data) of a video, such as track id, video duration, etc.
  • track id is used to identify a video.
  • mdia is a container box, which can include mdhd (media header box), hdlr (handler reference box) and minf (media information box).
  • mdhd is used to define the time scale, etc.
  • hdlr contains information related to the video playback process, such as data type (audio, video, etc.).
  • minf is a container box, which can include stbl (sample table box).
  • stbl includes stsd (sample description box), stts (time to sample box), stsc (sample to chunk box), stsz (sample size box), stss (sync sample box), stco (chunk offset box), etc.
  • stsd contains the encoding type, width, height, duration, audio channel, sampling rate, and other information of the video.
  • stts contains the timing mapping information of the sample, such as sample index, time offset corresponding to the sample index, etc.
  • stsc contains the mapping relationship between sample and chunk.
  • stss may contain a list of sample indices for random access, where each sample index in the list is used to indicate a key frame.
  • a key frame can be understood as a video frame that carries the full amount of data and can be decoded into a video frame independently, without reference to a previous video frame.
  • a non-key frame can be understood as a video frame that carries incremental data; decoding it into a video frame requires reference to other frames. Only key frames support random access.
  • stco is used to define the position of each chunk in the media data, for example, stco contains the offset of each chunk in the media data.
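The box layout described above (ftyp, mdat, and moov at the top level, with sub-boxes nested inside container boxes) follows the ISO base media file format convention of a 4-byte big-endian size followed by a 4-byte type. The following is a minimal sketch of a single-level box scanner; 64-bit sizes and the `size == 0` "to end of file" case are deliberately not handled:

```python
import struct

def scan_boxes(data, offset=0, end=None):
    """Yield (box_type, payload_offset, payload_size) for each box
    found between offset and end at a single nesting level."""
    end = len(data) if end is None else end
    while offset + 8 <= end:
        size, box_type = struct.unpack_from(">I4s", data, offset)
        if size < 8:  # size 0 ("to end of file") and size 1 (64-bit) not handled
            break
        yield box_type.decode("ascii", "replace"), offset + 8, size - 8
        offset += size

# Example: a tiny file with an ftyp box (8-byte payload) and an empty mdat box.
ftyp = struct.pack(">I4s", 16, b"ftyp") + b"isomisom"
sample = ftyp + struct.pack(">I4s", 8, b"mdat")
print([t for t, _, _ in scan_boxes(sample)])  # ['ftyp', 'mdat']
```

To descend into a container box such as moov or trak, the same function can be called again on the payload range it yields.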
  • in the embodiment of the present application, the field mmmw (multimedia multiway) is added to the stbl in the trak used to describe the video data, thereby adding the corresponding video extension information to the MP4 file.
  • mmmw includes extended information of one video.
  • for different videos, the extension information included in the mmmw in the corresponding stbl may be different.
  • for the main video, the mmmw in the corresponding stbl may include the main video identifier.
  • the mmmw in the corresponding stbl may also include the track id of the audio data corresponding to the main video.
  • for a slave video, the mmmw in the corresponding stbl may include the slave video identifier and a playback association identifier of the slave video, where the playback association identifier is used to identify the playback position of the main video when the slave video starts playing.
  • the playback association identifier of the slave video may be a sample index.
  • the mmmw in the corresponding stbl may also include the track id of the audio data corresponding to the slave video.
  • mmmw is a sub-box of stbl and can itself be understood as a box. Its structure can be as shown in Figure 2, including the following fields: box size, box type, version, flags, role, audio track id, and sample index.
  • box size can occupy 4 bytes and is used to indicate the size of the box.
  • box type can occupy 4 bytes and is used to indicate the type of the box.
  • version can occupy 1 byte and is used to indicate the version of the box.
  • flags can occupy 3 bytes and is used to extend flag bits for some functions.
  • role can occupy 4 bytes and is used to identify the video type (main video or slave video).
  • audio track id can occupy 4 bytes and is used to identify the audio data corresponding to the video.
  • sample index can occupy 4 bytes and is used to identify the playback position of the main video when the slave video starts playing.
  • the structure of mmmw can be the same for the master video and the slave video.
  • sample index and audio track id can be special identifiers or empty.
  • the special identifier is used to indicate that the field is invalid.
  • the audio track id can be a special identifier or empty.
  • the structure of mmmw can be different for master and slave video.
  • the sample index and the audio track id in mmmw can be optional fields.
  • the audio track id can be an optional field.
  • in this case, mmmw may not include the sample index field or the audio track id field.
  • the audio track id may not be included in the mmmw.
  • FIG. 2 is only an example of a structure of mmmw, and does not constitute a limitation to the embodiments of the present application.
  • role can also occupy two bytes and so on.
  • the above takes mmmw as an example of the field used to carry the video extension information; the embodiment of this application does not limit the name of the field used to carry the video extension information.
  • the field of the extended information may also be referred to as a video extension field or the like.
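As a sketch, the Figure 2 layout above (box size, box type, version, flags, role, audio track id, sample index, i.e. 4+4+1+3+4+4+4 = 24 bytes) can be serialized as follows. The big-endian encoding, the numeric role values, and the all-ones "special identifier" marking an invalid field are illustrative assumptions, not values mandated by the description:

```python
import struct

INVALID = 0xFFFFFFFF          # assumed special identifier: "this field is invalid"
ROLE_MAIN, ROLE_SLAVE = 0, 1  # assumed numeric role values for main/slave video

def pack_mmmw(role, audio_track_id=INVALID, sample_index=INVALID,
              version=0, flags=0):
    """Build a 24-byte mmmw box: size(4) type(4) version(1) flags(3)
    role(4) audio_track_id(4) sample_index(4)."""
    body = bytes([version]) + flags.to_bytes(3, "big")
    body += struct.pack(">III", role, audio_track_id, sample_index)
    return struct.pack(">I4s", 8 + len(body), b"mmmw") + body

def unpack_mmmw(buf):
    size, box_type = struct.unpack_from(">I4s", buf, 0)
    assert box_type == b"mmmw" and size == 24
    role, audio_track_id, sample_index = struct.unpack_from(">III", buf, 12)
    return {"version": buf[8], "flags": int.from_bytes(buf[9:12], "big"),
            "role": role, "audio_track_id": audio_track_id,
            "sample_index": sample_index}

# A slave-video mmmw: audio track 3, starts when the main video reaches sample 120.
box = pack_mmmw(ROLE_SLAVE, audio_track_id=3, sample_index=120)
```

A main-video mmmw would carry ROLE_MAIN and leave sample_index as the special identifier, matching the note above that this field is invalid for the main video.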
  • the video playback device in the embodiment of the present application is an electronic device, such as a portable terminal.
  • Portable terminals such as mobile phones, tablet computers, notebook computers, etc.
  • the video playback device can play the corresponding video in response to a user operation in a video application installed on the portable terminal.
  • the video playback device may also be a non-portable terminal, such as a smart screen, a desktop computer, or a television.
  • when the video playing device is a TV, the corresponding video can be played in response to the user's operation of selecting a certain channel.
  • the video playback device may also support the integration of video files.
  • the mobile phone integrates at least two first-type video files into one second-type video file, and saves the second-type video file locally and/or uploads it to the video server, so that the mobile phone can play at least two channels of video simultaneously based on the second-type video file.
  • the above-mentioned at least two video files of the first type are pre-recorded video files, which can be stored on a video playback device, or can be stored on a network disk or a server (such as a video server).
  • the storage location of video files is not limited.
  • the mobile phone saves the second-type video file. For example, the mobile phone saves the second-type video file locally and/or uploads it to a server or a network disk.
  • the video processing device in the embodiment of the present application is an electronic device, such as a portable terminal.
  • the portable terminal may be a mobile phone, a tablet computer, a notebook computer, or the like.
  • the video processing device in this embodiment of the present application may also be a non-portable terminal, such as a smart screen, a desktop computer, or a television, etc., which is not limited thereto.
  • the video processing device and the video playback device may be the same device or different devices, which are not limited.
  • the video processing device may also be a video server or the like, and the video server in this embodiment of the present application may be a cloud server or a local server, which is not limited.
  • the video processing device has a video integration function, which supports integrating at least two first-type video files into one second-type video file.
  • the video playback method in the embodiment of the present application can be applied to a VOD scenario or a live broadcast scenario.
  • the following describes the video playback method according to the embodiment of the present application with reference to specific application scenarios.
  • FIG. 3 shows a network architecture of an on-demand scenario, including a video server and a video playback device.
  • the video server is configured to receive a video acquisition request from a video playback device, and send a corresponding video file to the video playback device.
  • the video playback device is used to receive an operation of a user to play a certain video, and in response to the above operation, send a video acquisition request to a video server, receive a video file from the video server, and play correspondingly according to the video file.
  • the video playback device, in response to the user's operation on a certain video option in a self-installed video application (such as Huawei Video), sends a video acquisition request to the video server, receives a video file from the video server in response to the video acquisition request, and plays the video based on the video file.
  • the video playback device can play the multi-channel video according to the video file.
  • the video playback device can adapt to the corresponding video playback layout according to the video extension information, and perform multi-channel video playback.
  • the video playback device plays the slave video in a small window and the main video in a large window according to the video extension information. Take the case where the video extension information is used to indicate the association between the first channel video and the second channel video as an example.
  • the video extension information includes the extension information of the first channel video and the extension information of the second channel video.
  • the extension information of the first channel video includes the main video identifier.
  • the extension information of the second video channel includes the slave video ID and the slave video playback information.
  • take the case where the main video frame identifier is used to identify the Nth frame of the main video: the video playback device displays a large window according to the video extension information and plays the first channel video in the large window; when the first channel video is played to the Nth frame, it continues to play the first channel video in the large window, displays a small window, and plays the second channel video in the small window. After the second channel video finishes playing, the small window can be hidden.
  • before the first channel video is played to the Nth frame, the small window may not be displayed.
  • the layout of the large window and the small window can be preset by the user or a system default. For example, the small window can float over the large window in a picture-in-picture layout; the position of the small window can be movable, or the position of the small window can be fixed. For another example, the small window and the large window may be tiled, that is, the first channel video and the second channel video are displayed in a split screen.
  • the size of the large window before the Nth frame of the first channel video is played and the size of the large window when the Nth frame is played may be the same (that is, unchanged), or may be different; for example, the size of the large window before the Nth frame is played may be larger than the size of the large window when the Nth frame is played.
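The large-window/small-window timing described above can be sketched as a simple frame-driven schedule: the small window appears when the main video reaches frame N and is hidden once the slave video has finished. The event names and the frame-indexed timeline are illustrative assumptions:

```python
def window_events(n, main_frames, slave_frames):
    """Return (frame, event) pairs for playing a slave video of
    slave_frames frames starting at frame n of the main video."""
    events = []
    for frame in range(main_frames):
        if frame == n:
            events.append((frame, "show_small_window"))  # start the slave video
        if frame == n + slave_frames:
            events.append((frame, "hide_small_window"))  # slave video finished
    return events

print(window_events(10, 100, 20))  # [(10, 'show_small_window'), (30, 'hide_small_window')]
```

If n + slave_frames exceeds the main video's length, no hide event is emitted and the small window simply stays until playback ends; a real player would also handle that edge case.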
  • the video playback device plays, according to the video extension information, the first channel video in the first window, the second channel video in the second window, and the third channel video in the third window.
  • the size of the first window is larger than the size of the second window and larger than the size of the third window; the size of the second window and the size of the third window can be the same or different, which is not limited. The second window and the third window can float over the first window, or the first window, the second window, and the third window can be tiled.
  • the network architecture shown in FIG. 3 may further include a video processing device, wherein the video file may be generated by the video processing device and uploaded to the video server.
  • the video processing device, in response to the user's operation based on a self-installed application with a video integration function (such as a camera application, a gallery application, or a video application), integrates at least two first-type video files into one second-type video file.
  • the video processing device and the video playback device may be the same device or different devices, which are not limited.
  • the video processing device and the video playback device are the same device
  • the video processing device integrates at least two first-type video files into one second-type video file
  • the second-type video file is stored locally, that is, in the device's own internal storage or in an external storage connected to the device. If the video processing device and the video playback device are the same device, the video playback device can also play the video based on the second-type video file in response to the user's operation of opening the local second-type video file.
  • the second-type video file can also be uploaded to a network disk or a video server for storage.
  • the video file may also be generated by a video server.
  • the video processing device sends a video file integration request to the video server in response to the user's operation based on a self-installed application with the video integration function, where the video file integration request includes at least two first-type video files and the association relationship between the videos corresponding to the at least two first-type video files.
  • the video server receives the video file integration request from the video processing device and, according to the association relationship between the videos corresponding to the at least two first-type video files in the request, integrates the at least two first-type video files into one second-type video file.
  • FIG. 4A shows a structure of a video processing device according to an embodiment of the present application, including an acquisition module 401A, an encoding module 402A, an association module 403A, and an encapsulation module 404A. Further, in some embodiments, the video processing device may further include a clipping module 405A.
  • the obtaining module 401A is used to obtain L first-type video files.
  • L is a positive integer greater than or equal to 2.
  • the L video files of the first type are pre-recorded video files, which can be stored on a video processing device, or on a network disk or server, which is not limited.
  • the obtaining module 401A is configured to receive a first operation of a user selecting a video file based on an application with a video integration function, and in response to the first operation, obtain a first type of video file corresponding to the main video.
  • the obtaining module 401A is further configured to receive a second operation of the user selecting a video file based on an application with a video integration function, and in response to the second operation, obtain a first type of video file corresponding to the slave video.
  • the value of L is related to the number of the first-type video files corresponding to the secondary video selected by the user.
  • the acquisition module 401A is configured to receive a video file integration request and obtain L first-type video files from the request, where the L first-type video files include one first-type video file corresponding to the main video and L-1 first-type video files corresponding to slave videos.
  • the encoding module 402A is configured to encode and compress the videos corresponding to the above-mentioned L video files of the first type respectively, so as to obtain L standard video streams.
  • the association module 403A is used to generate video extension information, where the video extension information is used to indicate the association relationship between the videos corresponding to the W first-type video files respectively.
  • the W first-type video files are video files among the above-mentioned L first-type video files, 2 ≤ W ≤ L, and W is a positive integer.
  • the association module 403A may generate the video extension information in response to the user completing the operation of establishing the association relationship between the videos corresponding to the above-mentioned W first-type video files respectively.
  • the association module 403A may also generate video extension information according to the association relationship of the videos corresponding to the L first-type video files in the video file integration request.
  • the encapsulation module 404A is configured to obtain a second-type video file according to the W standard video streams and video extension information.
  • the clipping module 405A is used to filter out the W first-type video files from the L first-type video files.
  • for example, the clipping module 405A is configured to filter out the W first-type video files in response to the user's operation of selecting, from the L first-type video files, W first-type video files for establishing a video association relationship.
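The output of association module 403A can be pictured as one extension-info record per channel, mirroring the mmmw fields described earlier (role, audio track id, and, for a slave video, the main-video sample index at which it starts). The record shape and field names here are illustrative assumptions:

```python
def build_extension_info(main_audio_track, slaves):
    """Sketch of association module 403A: produce per-channel extension info.
    `slaves` is a list of (audio_track_id, main_sample_index) pairs, where
    main_sample_index is the main-video position at which the slave starts."""
    info = [{"role": "main", "audio_track_id": main_audio_track}]
    for audio_track_id, main_sample_index in slaves:
        info.append({"role": "slave",
                     "audio_track_id": audio_track_id,
                     "sample_index": main_sample_index})
    return info

# Main video with audio track 1; two slave videos starting at samples 120 and 300.
ext = build_extension_info(1, [(2, 120), (3, 300)])
```

Encapsulation module 404A would then write one such record into the mmmw box of each channel's stbl when producing the second-type video file.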
  • FIG. 4B shows a structure of a video playback device according to an embodiment of the present application, including an acquisition module 401B, a decapsulation module 402B, a decoding module 403B, an association module 404B, and a playback module 405B.
  • the acquiring module 401B is used to acquire the second type of video files.
  • the obtaining module 401B is configured to obtain the second type of video file in response to receiving an operation that the user selects to play a certain video.
  • the decapsulation module 402B is used to decapsulate the second-type video file to obtain video metadata and media data of multi-channel video (such as video data of P channels of video and audio data of Q channels of video, where P and Q are positive integers, P ≥ 2, 1 ≤ Q ≤ P, and the Q channels of video are one or more of the P channels of video).
  • for a certain channel of video, the audio data may include audio data of one or more audio tracks.
  • for example, the audio data of a channel of video may include audio data of a Chinese audio track and audio data of an English audio track.
  • the decoding module 403B is used to decode the media data of the multi-channel video. Taking the media data of the multi-channel video being the video data of P channels of video and the audio data of Q channels of video as an example, the decoding module 403B is configured to decode the video data of the P channels of video and the audio data of the Q channels of video. For example, taking the audio data of a certain channel of video including the audio data of a Chinese audio track and the audio data of an English audio track as an example, the decoding module 403B can decode the audio data of the corresponding audio track according to the sound setting of the video.
  • for example, if the sound setting of the video is Chinese, the decoding module 403B decodes the audio data of the Chinese audio track, so that when the video is played, the sound is played in Chinese.
  • the sound setting of the video may be set by the user according to the user's own needs, or may be the default, which is not limited.
  • the association module 404B is configured to obtain the frame association information of the multi-channel video according to the video extension information in the video metadata.
  • the playing module 405B is configured to play the corresponding video according to the frame association information of the multi-channel video.
  • the playback module 405B is used to determine the number of video playback windows according to the frame association information of the multi-channel video, adapt the layout of the video playback windows according to the number of video playback windows, and then play the videos in the corresponding video playback windows.
  • the layout of the video playback windows corresponding to the number of video playback windows may be set by the user according to their own needs, or may be defaulted by the system, which is not limited.
  • the embodiment of the present application integrates at least two first-type video files into one second-type video file.
  • the first application having the video integration function in the embodiment of the present application may be implemented by adding a video integration function to a native application such as a gallery application, a video application, or a camera application.
  • the first application having the video integration function in the embodiment of the present application may also be a third-party application, which is not limited.
  • a third-party application may be understood as an application downloaded by the user from an application market or the network to the mobile phone according to his or her own needs.
  • the mobile phone displays an interface 500 , and the interface 500 includes an icon 501 .
  • Icon 501 is used to identify the first application.
  • the mobile phone displays the interface of the first application.
  • the interface of the first application may be interface 510 , including video preview box 01 , video preview box 02 , and video preview box 03 , option 502 , option 503 , and option 504 .
  • the video preview frame 01 is used to preview the main video
  • the video preview frame 02 is used to preview the slave video 1
  • the video preview frame 03 is used to preview the slave video 2.
  • option 502 is an option corresponding to the video preview frame 01 and is used to select the first-type video file corresponding to the main video.
  • option 503 is an option corresponding to the video preview frame 02 and is used to select the first-type video file corresponding to the slave video 1.
  • option 504 is an option corresponding to the video preview frame 03 and is used to select the first-type video file corresponding to the slave video 2.
  • the user can increase or reduce the number of video preview frames of slave videos on the interface of the first application according to his own needs. It should be noted that the number of video preview boxes on the interface of the first application does not exceed the maximum number of first-type video files supported by the first application for video integration, and the number of video preview boxes on the interface of the first application is at least 1.
  • the maximum number of the first type of video files used for video integration may be predefined by the user according to their own needs, or may be preset by the R&D personnel during program development, which is not limited.
  • the first application supports a maximum of four first-type video files to be integrated into one second-type video file.
  • the interface of the first application includes at most four video preview boxes, where one video preview box is used to preview the main video, and the other three video preview boxes are used to preview the slave videos respectively.
  • in response to the user clicking option 502, interface 600 is displayed.
  • Interface 600 includes options for at least two first category video files, such as option 601 , option 602 , option 603 , and option 604 .
  • the option 601, the option 602, the option 603 and the option 604 are respectively used to identify a first type of video file.
  • the first type of video files respectively identified by option 601, option 602, option 603 and option 604 may be stored on the mobile phone, or may be stored on a network disk or server, which is not limited.
  • the mobile phone returns to interface 510, and displays the video corresponding to the first type of video file identified by option 601 in the video preview area 01.
  • interface 510 includes option 701 for setting the playback position of the main video when slave video 1 starts playing.
  • the mobile phone displays a prompt box 710, where the prompt box 710 is used to prompt the user to set the playback position of the main video when slave video 1 starts playing.
  • prompt box 710 includes option 711 , option 712 , option 713 , and option 714 .
  • Option 711 is used to set the specific playback position information of the main video
  • option 712 is used to set the unit for identifying the playback position of the main video, such as frame, second, hour, minute, etc.
  • Option 713 is used to cancel the setting
  • option 714 is used to confirm the setting.
  • the mobile phone sets the playback position of the main video when slave video 1 starts playing to the Nth frame, and in response to the user's operation of clicking option 714, the mobile phone returns to interface 510.
  • the unit for identifying the playback position of the main video may also be other units such as milliseconds, which is not limited.
  • taking the case where the user sets, through options 711 and 712, the playback position of the main video when slave video 1 starts playing to be the Nth frame of the main video as an example, the mobile phone generates the association relationship between slave video 1 and the main video according to the settings of options 711 and 712.
  • the manner in which the user sets the association between slave video 2 and the main video is similar to the manner of setting the association between slave video 1 and the main video.
  • for example, the user can set the playback position of the main video when slave video 2 starts playing through the association setting option corresponding to the first-type video file of slave video 2.
  • the association setting option corresponding to a slave video (such as option 701) may be displayed after the user selects the video file corresponding to the slave video, or may be displayed on interface 510 even when the user has not selected the video file corresponding to the slave video, which is not limited.
  • the mobile phone in response to the user's operation of clicking option 502, displays a video file storage location interface, and the video file storage location interface includes at least one video file storage location option.
  • the video file storage location interface can be the interface 800 shown in FIG. 8, including option 801, option 802, and option 803; option 801 is used to indicate that the video file storage location is the gallery application, and option 802 is used to indicate that the video file storage location is a cloud disk.
  • the phone displays interface 600.
  • the mobile phone integrates the first-type video file corresponding to the main video, the first-type video file corresponding to slave video 1, and the first-type video file corresponding to slave video 2 into one second-type video file, and the second-type video file includes video extension information, which is used to indicate the association relationship among the main video, slave video 1, and slave video 2.
  • in response to the user's operation of clicking option 900, the mobile phone integrates the first-type video file corresponding to the main video and the first-type video file corresponding to slave video 1 into one second-type video file, and the second-type video file includes video extension information, where the video extension information is used to indicate the association relationship between the main video and slave video 1.
  • the video extension information is used to indicate the association relationship between the main video and slave video 1.
  • the mobile phone integrates the first-type video file corresponding to the main video and the first-type video file corresponding to slave video 1 into one second-type video file, and the second-type video file includes video extension information, where the video extension information is used to indicate the association between the main video and slave video 1.
  • the association relationship between the main video and slave video 1 can be used to indicate the playback position of the main video when slave video 1 starts playing.
  • the video extension information may include the extension information of the main video and the extension information of the slave video 1
  • the extension information of the main video includes the main video identifier
  • the extension information of slave video 1 includes a slave video identifier and a main video frame identifier.
  • the main video frame identifier is used to indicate the playback position of the main video when slave video 1 starts playing.
  • the option 900 shown in FIG. 9A or FIG. 9B may be displayed on interface 510 after the user selects, on interface 510, the first-type video file corresponding to the main video and the first-type video file corresponding to a slave video, or may be displayed on interface 510 when interface 510 is displayed in response to the user clicking icon 501, which is not limited.
  • after obtaining the second-type video file, the mobile phone uploads the second-type video file to a network disk or server, or saves it locally.
  • the above description takes integrating at least two first-type video files into one second-type video file on a mobile phone as an example. It should be noted that, in this embodiment of the present application, the step of integrating at least two first-type video files into one second-type video file can also be performed by a server (e.g., a video server).
  • for example, after the user has completed the settings on the interface of the first application shown in FIG. 9A, in response to the user's operation of clicking option 900, the mobile phone sends a video file integration request to the server, where the video file integration request includes the first-type video file corresponding to the main video, the first-type video file corresponding to slave video 1, the first-type video file corresponding to slave video 2, the playback position information of the main video when slave video 1 starts playing, and the playback position information of the main video when slave video 2 starts playing.
  • the server receives the video file integration request from the mobile phone and, according to the playback position information of the main video when slave video 1 starts playing and the playback position information of the main video when slave video 2 starts playing, integrates the first-type video file corresponding to the main video, the first-type video file corresponding to slave video 1, and the first-type video file corresponding to slave video 2 into one second-type video file.
  • the above is only an example of the manner in which the user selects the first-type video files corresponding to the main video and the slave videos and sets the playback position of the main video when a slave video starts playing, and does not constitute a limitation to the embodiments of the present application.
  • the embodiments of the present application do not limit the manner in which the user selects the first-type video files corresponding to the main video and the slave videos and sets the playback position of the main video when a slave video starts playing.
  • taking the video playback device being a mobile phone as an example, the following describes, in combination with a specific scenario, the manner in which the mobile phone plays the second-type video file.
  • the mobile phone displays an interface 1000 , and the interface 1000 includes an icon 1001 .
  • Icon 1001 is used to identify the gallery application.
  • the phone displays the interface of the gallery application.
  • the interface of the gallery application may be interface 1010, including option 1011, where option 1011 is used to identify a video file.
  • In response to the user's operation of clicking option 1011, the mobile phone plays the corresponding video according to the video file identified by option 1011.
  • the mobile phone determines whether the video file identified by option 1011 is a second type of video file.
  • The mobile phone can determine whether the video file identified by option 1011 is a second-type video file by checking whether it includes video extension information. If the video file identified by option 1011 does not include video extension information, it is a first-type video file; if it does include video extension information, it is a second-type video file. Taking the MP4 format as an example, the mobile phone can determine whether the video file identified by option 1011 includes an mmmw box: if it does, the file is a second-type video file; if it does not, the file is a first-type video file.
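The mmmw check described above amounts to scanning the MP4 file's boxes for a box of that type. A minimal sketch follows, assuming for illustration that `mmmw` is stored as a top-level box; the disclosure does not specify where in the box hierarchy it lives, so a real implementation might have to recurse into container boxes.

```python
import struct

def top_level_boxes(path):
    """Yield the four-character type of each top-level box in an MP4 file.
    Each box starts with a 4-byte big-endian size and a 4-byte type."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                return
            size, box_type = struct.unpack(">I4s", header)
            yield box_type.decode("ascii", errors="replace")
            if size == 1:
                # 64-bit extended size follows the 8-byte header
                size = struct.unpack(">Q", f.read(8))[0]
                f.seek(size - 16, 1)
            elif size == 0:
                # box extends to the end of the file
                return
            else:
                f.seek(size - 8, 1)

def is_second_type(path):
    """Second-type file iff an 'mmmw' box is present (per the scheme above)."""
    return "mmmw" in set(top_level_boxes(path))
```

The box-header layout used here follows the ISO base media file format; only the `mmmw` type itself is specific to this scheme.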
  • If the video file identified by option 1011 is a first-type video file, the mobile phone plays the video according to that video file.
  • For the specific playback manner, refer to the existing method by which a mobile phone plays video according to a first-type video file, which will not be repeated here.
  • If the video file identified by option 1011 is a second-type video file, the mobile phone plays at least two channels of video according to the second-type video file.
  • Take as an example that the video extension information in the second-type video file identified by option 1011 is used to indicate the association relationship between the first channel of video and the second channel of video.
  • For example, the video extension information in the second-type video file identified by option 1011 includes the extension information of the first channel of video and the extension information of the second channel of video.
  • The extension information of the first channel of video includes the main video identifier.
  • The extension information of the second channel of video includes the slave video identifier and the main video frame identifier, where the main video frame identifier is used to identify the Nth frame of the main video. In response to the user's operation of clicking option 1011, the mobile phone plays the first channel of video in window 1021; when the first channel of video reaches the Nth frame in window 1021, the second channel of video starts to be played in window 1022.
  • Assume that the second channel of video plays its last frame when the first channel of video reaches the Mth frame. When the first channel of video is played to the (M+1)th frame, playback of the second channel of video has ended, so window 1022 is hidden; if the first channel of video has not finished, it continues to be played in window 1021.
  • In some embodiments, the mobile phone generates video association information according to the video extension information in the second-type video file identified by option 1011, and then plays the corresponding video according to the video association information.
  • the video associated information may be as shown in Table 1.
  • the video associated information may also be as shown in Table 2.
  • sample index indicates the main video frame number
  • DTS indicates the decoding time of the main video frame
  • PTS indicates the display time of the main video frame
  • sample size indicates the size of the main video frame
  • offset indicates the position of the corresponding main video frame in the video file
  • The sub track id is used to identify the slave video. For example, as shown in Table 2, when the sample index is N, the sub track id is 2; that is, when the main video reaches the Nth frame, the slave video with sub track id 2 starts to play.
  • In specific implementation, the display start time of the slave video with sub track id 2 can be set to the display time of the Nth frame of the main video, and the display times of the slave video with sub track id 2 and of the main video can be mapped onto the same timeline, thereby associating sub track id 2 with the Nth frame of the main video.
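The video association information described for Tables 1 and 2 can be modeled as a per-frame sample table in which some rows additionally carry a sub track id. A hedged Python sketch follows; the field names mirror the description above, but the `SampleEntry` type and the lookup helper are this example's own constructions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SampleEntry:
    """One row of the video association information (in the style of Table 2)."""
    sample_index: int                    # main video frame number
    dts: int                             # decoding time of the main video frame
    pts: int                             # display time of the main video frame
    sample_size: int                     # size of the main video frame
    offset: int                          # position of the frame in the video file
    sub_track_id: Optional[int] = None   # slave video to start at this frame

def sub_tracks_to_start(table, frame_index):
    """Return the slave track ids that should start playing when the main
    video reaches the given frame."""
    return [e.sub_track_id for e in table
            if e.sample_index == frame_index and e.sub_track_id is not None]
```

During playback the player would consult this table at each main-video frame and open a new window for any slave track the lookup returns.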
  • In some embodiments, the mobile phone determines the layout of the video playback windows according to the video extension information. For example, the mobile phone determines the number of video channels to be played according to the video extension information, then determines the number of video playback windows according to the number of video channels to be played, and then determines the layout corresponding to that number of video playback windows.
  • Generally, the number of video playback windows is the same as the number of video channels to be played. For example, before the first channel of video is played to the Nth frame, only one channel of video needs to be played on the mobile phone, so the number of video playback windows is 1, and that window is used to play the first channel of video.
  • After the first channel of video is played to the Nth frame, two channels of video need to be played, so the number of video playback windows is 2.
  • The layout corresponding to 2 video playback windows can be preset by the user, or can be the default of the gallery application.
  • For example, the two video playback windows are window 1021 and window 1022, and their layout can be in a picture-in-picture manner.
  • Window 1021 is used to play the first channel of video.
  • Window 1022 is used to play the second channel of video.
  • Alternatively, when the number of video playback windows is 2, the layout of the windows can be tiled, as shown in FIG.
  • Window 1201 is used to play the first channel of video.
  • Window 1202 is used to play the second channel of video.
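The picture-in-picture and tiled layouts discussed above can be expressed as window rectangles chosen from the channel count. The following sketch uses fractional screen coordinates (x, y, width, height); all the specific numbers are illustrative defaults, not values from the disclosure.

```python
def window_layout(num_channels, mode="pip"):
    """Choose playback window rectangles for the given number of channels.
    'pip' floats the second window over the first; 'tiled' splits the
    screen. Values are fractions of the screen and purely illustrative."""
    if num_channels == 1:
        return [(0.0, 0.0, 1.0, 1.0)]          # single full-screen window
    if mode == "pip":
        return [(0.0, 0.0, 1.0, 1.0),          # main window, full screen
                (0.65, 0.05, 0.30, 0.30)]      # floating sub window
    # tiled: stack the two windows top and bottom
    return [(0.0, 0.0, 1.0, 0.5), (0.0, 0.5, 1.0, 0.5)]
```

A player could call this once when the channel count changes (e.g., at frame N) and reposition its windows accordingly.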
  • In some embodiments, the second-type video file identified by option 902 includes the audio data of both the first channel of video and the second channel of video.
  • In this case, the mobile phone can by default play the sound according to the audio data of the main video; or the mobile phone can by default play the sound according to the audio data of the slave video.
  • For example, if the mobile phone by default plays the sound according to the audio data of the first channel of video, then while the first channel of video is played in window 1021 and the second channel of video is played in window 1022, the sound of the first channel of video is played and the second channel of video is played muted.
  • For another example, if the mobile phone by default plays the sound according to the audio data of the slave video, then while the first channel of video is played in window 1021 and the second channel of video is played in window 1022, the sound of the second channel of video is played according to its audio data, and the first channel of video is played muted.
  • For example, the mobile phone plays the first channel of video in window 1021 and plays the second channel of video in window 1022.
  • The video playback interface may also include a sound option for the first channel of video and a sound option for the second channel of video.
  • In response to the user selecting the sound option for the first channel of video, the mobile phone plays the sound of the first channel of video.
  • In response to the user selecting the sound option for the second channel of video, the mobile phone plays the sound of the second channel of video. This facilitates user interaction with the device.
  • For example, while the mobile phone plays the first channel of video in window 1021 and the second channel of video in window 1022, the sound of the first channel of video is played according to its audio data.
  • Similarly, while the first channel of video is played in window 1021 and the second channel of video is played in window 1022, the sound of the second channel of video is played according to its audio data.
  • In another example, the mobile phone displays an interface 1300, and interface 1300 includes an icon 1301.
  • Icon 1301 is used to identify a video application.
  • In response to the user's operation of clicking icon 1301, the phone displays the interface of the video application.
  • the interface of the video application may be interface 1310 , including video option 1311 .
  • The video option 1311 is used to play the video named "developer conference".
  • In response to the user's operation of clicking video option 1311, the mobile phone sends a video acquisition request to the video server, where the video acquisition request is used to request the video file corresponding to the developer conference.
  • The video server receives the video acquisition request from the mobile phone and returns the video file corresponding to the developer conference to the mobile phone.
  • The mobile phone receives the video file corresponding to the developer conference from the video server and plays the video according to that video file.
  • The manner in which the mobile phone plays the video according to the video file corresponding to the developer conference may refer to the manner, described above, in which the mobile phone plays a video according to a video file identified by an option, and will not be repeated here.
  • In a live broadcast scenario, the video playback device may, in response to the user clicking a channel selection button on the remote control, trigger video playback according to the video file from the server of the corresponding channel.
  • In the live broadcast scenario, video extension information can also be added to the video metadata, so that the video playback device can play at least two channels of video at the same time, thereby meeting the user's multi-angle viewing needs in the live broadcast scene.
  • FIG. 14 shows a network architecture of a live broadcast scenario, including at least one camera (e.g., camera 1, camera 2, etc.), a director station, a streaming server, and a video playback device.
  • the camera is used to collect video in real time
  • the network architecture of the live broadcast scene includes at least two cameras, and different cameras can be used to collect video from different perspectives.
  • For example, camera 1 can be used to capture images of the real-time game.
  • Camera 2 can be used to capture images of the real-time commentary.
  • Camera 1 is located at camera position 1, and camera 2 is located at camera position 2.
  • The director station is used to receive the video collected by the cameras, encode the video from the cameras to obtain the corresponding video metadata, video data, and audio data, and upload the video metadata, video data, and audio data to the streaming server; the streaming server delivers them to the corresponding video playback devices.
  • The video playback device decodes the video data and the audio data according to the video metadata and plays the corresponding video.
  • In some embodiments, the director station performs processing such as encoding on the video from the cameras according to the user's settings. For example, if the user sets only the video from the camera at one camera position (e.g., camera position 1) to be broadcast, the director station generates video metadata according to the user's settings, and the video metadata is used to indicate the video from camera 1 at camera position 1; for details, refer to the introduction of video metadata in existing protocols. The director station then encodes the video from camera 1 at camera position 1 to obtain video data and audio data, and uploads the video metadata, video data, and audio data to the streaming server. If the user's settings have not changed, the director station only needs to encode the video from camera 1 at camera position 1 to obtain video data and audio data and upload them to the streaming server, without regenerating the video metadata.
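The director-station behavior just described (regenerate metadata only when the user's settings change, and include extension information only for multi-channel broadcasts) can be sketched as follows. The dictionary shapes and field names are hypothetical stand-ins for the real protocol metadata.

```python
def prepare_upload(settings, last_settings, encoded):
    """Decide what the director station uploads to the streaming server for
    one slice of the live stream. `encoded` holds the freshly encoded
    'video_data' and 'audio_data'; metadata is regenerated only when the
    user's settings changed. All field names are illustrative."""
    upload = dict(encoded)
    if settings != last_settings:
        metadata = {"channels": list(settings["channels"])}
        if len(settings["channels"]) > 1:
            # more than one channel: add extension info associating the
            # sub-channels with the main channel
            metadata["extension_info"] = {
                "main": settings["channels"][0],
                "subs": list(settings["channels"][1:]),
            }
        upload["video_metadata"] = metadata
    return upload
```

When settings are unchanged, only media data is uploaded; when the director switches from one channel to two (e.g., adding a slow-playback channel), the regenerated metadata carries the extension information the playback device needs.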
  • After receiving the video metadata, video data, and audio data, the video playback device determines whether the video metadata includes video extension information. If the video metadata does not include video extension information, the video playback device determines that only one channel of video needs to be played, so the number of video playback windows is 1; the video playback device decodes the video data and audio data according to the video metadata and, after decoding, plays the corresponding video according to the layout corresponding to 1 video playback window.
  • The layout corresponding to 1 video playback window may or may not be full screen, may be set by the user, or may be a default, which is not limited.
  • In some cases, the director station determines to update the live broadcast from one channel of video to two or more channels of video. Take updating one channel of live video to two channels as an example: one channel is the video from the camera at camera position 1, and the other channel is a slow-playback video obtained from a certain segment of the video collected by the camera at camera position 1. The director station then updates the video metadata so that it includes video extension information, and uploads the updated video metadata together with the video data and audio data of the above two channels of video to the streaming server, which delivers them to the video playback device.
  • In this way, the video playback device can play the two channels of video according to the updated video metadata.
  • For example, the updated video extension information is used to indicate the association relationship between the slow-playback video and the video from the camera at camera position 1.
  • The video extension information includes the extension information of the video from the camera at camera position 1 and the extension information of the slow-playback video.
  • The extension information of the video from the camera at camera position 1 includes the main video identifier.
  • The extension information of the slow-playback video includes the slave video identifier and the main video frame identifier, where the main video frame identifier is used to identify the playback position of the video from the camera at camera position 1 when the slow-playback video starts to be played.
  • After receiving the updated video metadata, the video playback device can determine whether it includes video extension information. If it does, the video playback device determines, according to the video extension information, the number of channels of video to be broadcast live, and determines the number of video playback windows according to that number of channels.
  • The number of video playback windows is the same as the number of channels of video to be broadcast live.
  • the video playback device determines a layout corresponding to the number of video playback windows according to the number of video playback windows.
  • the video playback device plays the above two-channel video according to the layout corresponding to the number of video playback windows.
  • the layout corresponding to the number of video playback windows may be preset by the user on the video playback device, or may be a default, which is not limited.
  • the main video is played in window 1501A, and the secondary video is played in window 1502A.
  • the main video is played in window 1501B, and the secondary video is played in window 1502B.
  • In some cases, the director station re-updates the video metadata and uploads the re-updated video metadata to the streaming server, and the streaming server sends it to the video playback device.
  • After the video playback device receives the re-updated video metadata, it determines whether the re-updated video metadata includes video extension information. If it does not, the video playback device determines to play only one channel of video, and plays the corresponding video according to the layout corresponding to one video playback window.
  • The above merely takes, as an example, the case where the two channels of live video are the video from the camera at camera position 1 and the slow-playback video; alternatively, one channel may be the video from the camera at camera position 1 and the other channel the video from the camera at camera position 2, which is not limited.
  • Moreover, the embodiments of the present application are not limited to live broadcasting two channels of video; three or more channels of video can also be broadcast live at the same time.
  • In that case, the video extension information is used to indicate the association relationship among the three or more channels of video.
  • For details, refer to the relevant introduction for live broadcasting two channels of video, which is not repeated here.
  • The above embodiments are described by taking the integration of at least two first-type video files into one second-type video file as an example. The method of the embodiments of the present application can also be applied to integrating at least one first-type video file and one second-type video file into a new second-type video file. In this case, the main video in the second-type video file used for integration serves as the main video in the integrated new second-type video file, and the video in each first-type video file used for integration serves as a slave video in the new second-type video file. For details, refer to the relevant introduction of integrating first-type video files into a second-type video file, which will not be repeated here.
  • an audio file and a video file may also be integrated into one media file, or at least two audio files may be integrated into one media file, or an audio file and an image file may be integrated into one media file.
  • the electronic device can play the media file integrated by the two audio files according to the media extension information.
  • the media file includes audio data of the first channel of audio, audio data of the second channel of audio, and audio extension information.
  • the audio extension information includes the extension information of the first channel of audio and the extension information of the second channel of audio
  • the extension information of the first channel of audio includes the main audio identifier
  • The extension information of the second channel of audio includes the slave audio identifier and the slave audio playback position identifier.
  • The slave audio playback position identifier is used to indicate the playback position of the first channel of audio when the second channel of audio starts to be played.
  • For example, the slave audio playback position identifier identifies that when the first channel of audio is played to time T, the second channel of audio starts to be played. In this case, the electronic device can play the first channel of audio through the speaker; when the first channel of audio reaches time T, the second channel of audio is played through the speaker while the first channel of audio continues to be played.
  • When the electronic device is connected to a headset, it can play the first channel of audio through the headset, play the second channel of audio through the speaker when the first channel of audio reaches time T, and continue to play the first channel of audio through the headset.
  • Alternatively, the electronic device may play the first channel of audio through the headset and, when the first channel of audio is played to time T, play the second channel of audio also through the headset while continuing to play the first channel of audio through the headset.
  • Alternatively, the electronic device may play the first channel of audio through the speaker, play the second channel of audio through the headset when the first channel of audio is played to time T, and continue to play the first channel of audio through the speaker.
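The speaker/headset combinations enumerated above can be summarized in one small helper. This is a sketch of the routing options only, not a claim about any particular device audio API.

```python
def audio_routes(headset_connected: bool):
    """Possible (first_channel_output, second_channel_output) pairs for
    playing two channels of audio at once, per the options described above."""
    if not headset_connected:
        # without a headset, both channels share the speaker
        return [("speaker", "speaker")]
    return [
        ("headset", "speaker"),   # first via headset, second via speaker
        ("headset", "headset"),   # both via headset
        ("speaker", "headset"),   # first via speaker, second via headset
    ]
```

A device could expose these as user-selectable options, or pick one as a default when the second channel of audio is triggered at time T.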
  • The above embodiments can also be extended to scenarios of playing different media files, so that when playing a certain media file, the electronic device can automatically start playing another media file. Take the first media file and the second media file as an example.
  • the first media file is associated with first media file extension information
  • the first media file extension information includes a first media file identifier
  • the first media file identifier is used to identify the first media file
  • the first media file extension information is associated with second media file extension information
  • the second media file extension information includes a second media file identification and a second media file playback position identification
  • the second media file identification is used to identify the second media file
  • the second media file playback position identifier is used to identify the first playback position information of the first media file when the second media file starts to play; take the first playback position information including a first position as an example
  • The association between the first media file extension information and the second media file extension information may be realized by adding both the first media file extension information and the second media file extension information to the first media file, or by establishing a corresponding association relationship, which is not limited.
  • the electronic device plays the first media file in response to the operation of playing the first media file, and starts playing the second media file when the first media file is played to the first position.
  • When the electronic device plays the second media file, it continues to play the first media file; that is, when the electronic device plays the first media file to the first position, the first media file and the second media file can be played simultaneously.
  • the operation of playing the first media file may be the user's operation on the first media file, or the user's operation on a control for indicating the first media file, or a certain shortcut operation, voice command, etc., which is not limited.
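The trigger behavior described above (start each associated media file when the first file reaches its preset position, while the first file keeps playing) can be simulated with a small scheduler. The mapping from file name to start position is this example's own representation of the extension information, not the on-disk format.

```python
def playback_schedule(extension_infos, duration, step=1.0):
    """Simulate playback of the first media file and report at which playback
    times the associated media files are started. `extension_infos` maps a
    media file name to the position in the first file at which it should
    start; the first file itself keeps playing throughout. Illustrative only."""
    events = []
    started = set()
    t = 0.0
    while t <= duration:
        for name, pos in extension_infos.items():
            if name not in started and t >= pos:
                started.add(name)
                events.append((t, name))
        t += step
    return events
```

Because each associated file fires at most once, the same mechanism extends naturally to a second and third media file at different positions, as discussed further below.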
  • the first media file and the second media file may both be video files.
  • the first media file and the second media file can be played in a split screen.
  • For example, the electronic device can split the display screen into a first screen and a second screen, play the first media file on the first screen, and play the second media file on the second screen.
  • Alternatively, a window is displayed on the display screen, and the picture corresponding to the second media file is displayed in that window; the window floats above the picture corresponding to the first media file, that is, the window used for playing the picture of the second media file floats above the window used for playing the picture of the first media file.
  • the first media file is a video file
  • the second media file is an audio file.
  • For example, the electronic device plays the first media file muted on the display screen and plays the second media file through the speaker.
  • Alternatively, if the electronic device is connected to a headset, it can play the second media file through the headset and play the first media file through the speaker without muting, that is, the sound of the first media file is output through the speaker.
  • the first media file is an audio file
  • the second media file is a video file.
  • the electronic device plays the first media file through the speaker, and plays the second media file through the display screen with mute.
  • Alternatively, if the electronic device is connected to a headset, the electronic device can play the second media file through the headset without muting, that is, the sound of the second media file is output through the headset.
  • the first media file and the second media file are both audio files.
  • the electronic device can play the first media file through the speaker and play the second media file through the headset.
  • the electronic device may play the first media file through an earphone and play the second media file through a speaker.
  • the electronic device can simultaneously play the first media file and the second media file through the speaker.
  • the first media file extension information may also be associated with the third media file extension information.
  • the third media file extension information includes the third media file identifier and the third media file playing position identifier.
  • the third media file identifier is used to identify the third media file
  • the third media file playback position identifier is used to identify the second playback position information of the first media when the third media starts to play. Take the second playback position information including the second position as an example. In this case, when the first media file is played to the second position, the electronic device starts to play the third media file and continues to play the first media file. That is, the embodiment of the present application does not limit the number of media file extension information associated with the first media file extension information.
  • the first media file may include first media file extension information, second media file extension information and third media file extension information.
  • the electronic device displays an interface 1600
  • the interface 1600 includes option 1601 and option 1602
  • option 1601 is used to identify video file 1
  • option 1602 is used to identify video file 2
  • Video file 1 and video file 2 may be saved locally or on the server side, which is not limited.
  • the video file 1 is associated with the extension information of the first video file
  • the extension information of the first video file is associated with the extension information of the second video file
  • the extension information of the first video file includes the first video file identifier
  • the first video file identifier is used to identify the video file 1
  • the second video file extension information includes a second video file identifier and a second video file playback position identifier
  • the second video file identification is used to identify the video file 2
  • the second video file playback position identification is used to identify time T
  • In response to the user's operation of clicking option 1601, the electronic device plays video file 1 in window 1611. For example, when video file 1 is played to time T-1, the picture corresponding to video file 1 is displayed in window 1611.
  • When video file 1 is played to time T, the electronic device displays window 1612, plays video file 2 in window 1612, and continues to play video file 1 in window 1611. Further, in some embodiments, as shown in FIG. 16, when video file 1 is played to time M, video file 2 has finished playing but video file 1 has not; the electronic device then continues to play video file 1 in window 1611.
  • the methods provided by the embodiments of the present application are introduced from the perspective of an electronic device as an execution subject.
  • the electronic device may include a hardware structure and/or software modules, and implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the above functions is performed in the form of a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
  • An embodiment of the present application further provides a video playback apparatus, as shown in FIG. 17 , including one or more processors 1701 and one or more memories 1702 .
  • One or more computer programs are stored in the memory 1702, and when the one or more computer programs are executed by the processor 1701, the video playback device can execute the video playback method provided by the embodiments of the present application.
  • the video playback apparatus may further include a transceiver 1703 for communicating with other devices through a transmission medium, so that the video playback apparatus may communicate with other devices.
  • the transceiver 1703 may be a communication interface, a circuit, a bus, a module, or the like, and the other device may be a terminal, a server, or the like.
  • the transceiver 1703 may be used to send a video acquisition request to a video server, receive a video file, and the like.
  • the video playback device may further include a display screen 1704 for displaying the video to be played.
  • display screen 1704 includes a display panel.
  • The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), or the like.
  • the video playback device in the embodiment of the present application may further include a speaker, a touch sensor, etc., which is not limited.
  • The connection medium between the processor 1701, the memory 1702, the transceiver 1703, and the display screen 1704 is not limited in this embodiment of the present application.
  • the processor 1701, the memory 1702, the transceiver 1703, and the display screen 1704 may be connected through a bus, and the bus may be divided into an address bus, a data bus, a control bus, and the like.
  • The processor may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
  • a general purpose processor may be a microprocessor or any conventional processor or the like.
  • the steps of the methods disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • The memory may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or may be a volatile memory, such as a random-access memory (RAM).
  • The memory may also be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • the memory in this embodiment of the present application may also be a circuit or any other device capable of implementing a storage function, for storing program instructions and/or data.
  • the terms "when" or "after" can be interpreted to mean "if", "after", "in response to determining", or "in response to detecting"
  • the phrases "when determining" or "if detecting (the stated condition or event)" can be interpreted to mean "if determining", "in response to determining", "when detecting (the stated condition or event)", or "in response to detecting (the stated condition or event)"
  • The above-mentioned embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • When software is used, the embodiments can be implemented in whole or in part in the form of a computer program product.
  • The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present application are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media.
  • the usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid-state drives (SSD)), and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present application relates to the technical field of video processing, and provides a media playing method and an electronic device. The method comprises: an electronic device detects a first user input for a first media file; in response to the first user input, the electronic device plays the first media file; and when the first media file is played to a first position, the electronic device plays a second media file while continuing to play the first media file, the first position being preset. The present application enables an electronic device to play multiple videos at the same time, thereby helping to satisfy a user's need to watch multiple videos simultaneously and improving the user experience.

Description

Media playback method and electronic device
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese patent application No. 202011634630.6, entitled "Media playback method and electronic device", filed with the China Patent Office on December 31, 2020, which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a media playback method and an electronic device.
Background
At present, common video file formats include MP4, AVI, and the like. Video files in these formats support playback of only one channel of video at a time and cannot support simultaneous playback of multiple channels of video, so they cannot meet users' all-around, multi-view viewing needs. For example, in a live game scenario, a video file in an existing format cannot play the real-time game video and a highlight-replay video at the same time. If the game is still in progress while a highlight replay is playing, the user is likely to miss some of the live action, resulting in a poor viewing experience.
Summary of the Invention
To solve the above technical problems, the present application provides a media playback method and an electronic device, so that the electronic device can play multiple channels of video at the same time, satisfying a user's need to watch multiple videos simultaneously and improving the user experience.
According to a first aspect, an embodiment of the present application provides a playback method, applied to an electronic device. The method specifically includes: the electronic device detects a first user input for a first media file or a first control; in response to the first user input, the electronic device plays the first media file; and when the first media file is played to a first position, the electronic device plays a second media file while continuing to play the first media file, where the first position is preset.
In the embodiments of the present application, because the second media file is played automatically when the first media file reaches the first position, the electronic device can play multiple media files at the same time. Therefore, when the media files are video files, this helps satisfy the user's need to watch videos from multiple viewing angles simultaneously and improves the user experience.
In a possible design, the electronic device includes a display screen; the first media file and the second media file are played on the display screen, and both are video files. This is convenient for the user to watch.
In a possible design, the electronic device includes a speaker; the first media file and the second media file are played through the speaker, and both are audio files.
In a possible design, the electronic device is connected to a headset; the first media file and the second media file are played through the headset, and both are audio files.
In a possible design, the electronic device includes a display screen and a speaker; the first media file, a video file, is played on the display screen, and the second media file, an audio file, is played through the speaker.
In a possible design, the electronic device includes a display screen and is connected to a headset; the first media file, a video file, is played on the display screen, and the second media file, an audio file, is played through the headset.
In a possible design, the electronic device includes a display screen and is connected to a headset; the first media file, an audio file, is played through the headset, and the second media file, a video file, is played on the display screen.
In a possible design, the electronic device includes a display screen and a speaker; the first media file, an audio file, is played through the speaker, and the second media file, a video file, is played on the display screen.
In a possible design, when the first media file is played to the first position, the electronic device may play the second media file while continuing to play the first media file in the following manner:
when the first media file is played to the first position, the display screen of the electronic device is split into a first screen and a second screen; the second media file is played on the second screen while the first media file continues to play on the first screen, where both the first media file and the second media file are video files. This is easy to implement.
In a possible design, when the first media file is played to the first position, the electronic device may play the second media file while continuing to play the first media file in the following manner:
when the first media file is played to the first position, a window is displayed on the display screen; the window shows a second picture corresponding to the second media file and floats above a first picture corresponding to the first media file, where both the first media file and the second media file are video files. This is not only easy to implement but also reduces the impact on viewing the first picture corresponding to the first media file.
In a possible design, the first position includes one of the following: a preset playback time point in the first media file, a preset playback frame in the first media file, or a preset playback ratio of the first media file.
In a possible design, the first position is preset by the user, which improves the flexibility of setting the first position.
In a possible design, when the first media file is played, first media file extension information is associated with it. The first media file extension information includes a first media file identifier, which identifies the first media file. The first media file extension information is associated with second media file extension information, which includes a second media file identifier and a second media file playback position identifier. The second media file identifier identifies the second media file; the second media file playback position identifier identifies first playback position information of the first media at the moment the second media starts to play, and the first playback position information includes the first position.
For example, the first media file includes media extension information, and the media extension information includes the first media file extension information and the second media file extension information.
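The association between the extension information and the playback trigger can be sketched as follows. This is an illustrative model only: the field names (`file_id`, `is_master`, `start_at`) are assumptions for exposition, not identifiers defined by this application.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MediaExtensionInfo:
    """Extension information carried for one media file (illustrative)."""
    file_id: str                      # media file identifier
    is_master: bool = False           # True for the first (master) media file
    start_at: Optional[float] = None  # master playback position at which this file starts

# Example from the text: the second media file begins when the first
# media file reaches the preset first position (here, 75 seconds).
first = MediaExtensionInfo(file_id="video-1", is_master=True)
second = MediaExtensionInfo(file_id="video-2", start_at=75.0)

def files_to_start(master_position: float, slaves: list) -> list:
    """Return the files whose trigger position the master has reached."""
    return [s.file_id for s in slaves
            if s.start_at is not None and master_position >= s.start_at]

print(files_to_start(80.0, [second]))  # ['video-2']
```

The master file carries only its identifier; each associated file additionally carries the position in the master at which it should start, which is all a player needs to drive simultaneous playback.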
In a possible design, after the second media file finishes playing, the electronic device continues to play the first media file, and plays only the first media file.
In a possible design, when the first media file is played to a second position, the electronic device plays a third media file while continuing to play the first media file, where the second position is preset and is located after the point at which the second media file finishes playing.
In a possible design, the second position includes one of the following: a preset playback time point in the first media file, a preset playback frame in the first media file, or a preset playback ratio of the first media file.
In a possible design, the first media file extension information is further associated with third media file extension information, which includes a third media file identifier and a third media file playback position identifier. The third media file identifier identifies the third media file; the third media file playback position identifier identifies second playback position information of the first media at the moment the third media starts to play, and the second playback position information includes the second position. For example, the first media file further includes the third media file extension information.
In a possible design, the first media file extension information is a first multimedia multi-way box mmmw, the second media file extension information is a second mmmw, and the third media file extension information is a third mmmw. The first mmmw and its corresponding stsd, stts, stsc, stsz, stss, and stco are located in a first sample table box stbl; the second mmmw and its corresponding stsd, stts, stsc, stsz, stss, and stco are located in a second stbl; and the third mmmw and its corresponding stsd, stts, stsc, stsz, stss, and stco are located in a third stbl. The first stbl is located in a first media box media, the second stbl in a second media box media, and the third stbl in a third media box media. The first media box media and its corresponding header tkhd are located in a first stream trak, the second media box media and its corresponding header tkhd in a second stream trak, and the third media box media and its corresponding header tkhd in a third stream trak. The first stream trak, the second stream trak, the third stream trak, and the header mvhd corresponding to them are located in the video box moov; the video box moov, the media data mdat, and the media type ftyp are located in the MP4 file. The first stream trak indicates the first media file, the second stream trak indicates the second media file, and the third stream trak indicates the third media file, where all three media files are video files. This is easy to implement.
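The nesting described above can be sketched as a tree. This is a schematic model, not a parser: box names follow the text, written with the standard ISO base-media spellings `stbl` and `mdat`, and the `mmmw` payload strings are placeholders.

```python
def track(n: int) -> dict:
    """One trak: a tkhd header plus a media box whose stbl carries the mmmw extension."""
    return {
        "tkhd": f"track-header-{n}",
        "media": {
            "stbl": {
                "mmmw": f"extension-info-{n}",   # per-stream multi-way extension box
                "stsd": {}, "stts": {}, "stsc": {},
                "stsz": {}, "stss": {}, "stco": {},
            }
        },
    }

# MP4 layout described in the text: ftyp + moov (mvhd + three trak) + mdat.
mp4_file = {
    "ftyp": "mp42",
    "moov": {"mvhd": "movie-header", "trak": [track(1), track(2), track(3)]},
    "mdat": b"...interleaved media samples...",
}

# Each of the three video streams carries its own mmmw extension information.
for i, t in enumerate(mp4_file["moov"]["trak"], start=1):
    assert t["media"]["stbl"]["mmmw"] == f"extension-info-{i}"
```

The design rides on the ordinary MP4 track structure: a conventional player sees three tracks, while a player aware of the mmmw box can read the association information from each track's sample table.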
According to a second aspect, an embodiment of the present application provides a method for obtaining a video file, applied to an electronic device that includes a display screen. The method specifically includes:
the display screen displays a first interface of a first application, where the first interface includes a master video file setting control, a master video file preview frame, a first slave video file setting control, a first slave video file preview frame, a first association setting control, and a completion control;
in response to a first user input on the master video file setting control, the display screen displays a second interface of the first application, where the second interface includes a first video file and a second video file;
in response to a second user input on the first video file, the master video file preview frame displays a static or dynamic preview of the first video file;
in response to a third user input on the first slave video file setting control, the electronic device displays the second interface;
in response to a fourth user input on the second video file, the first slave video file preview frame displays a static or dynamic preview of the second video file;
in response to a fifth user input on the first association setting control, the display screen displays an association setting frame; the association setting frame is used to set that, when the master video file is played to a first position, the first slave video file starts to play, and that playing the first slave video file does not pause or stop playback of the master video file; the association setting frame includes a first position input box, a first confirmation control, and a second confirmation control;
after a sixth user input on the first position input box is received, and after a seventh user input on the first confirmation control is received, the setting of the first position is completed;
after an eighth user input on the completion control is received, a third video file is obtained.
Through the above technical solution, a user can combine multiple video files into one video file as needed, so that the electronic device can play multiple channels of video simultaneously based on a single video file.
In a possible design, in response to an operation on the third video file, the electronic device may play the first video file; when the first video file is played to the first position, the electronic device plays the second video file while continuing to play the first video file.
According to a third aspect, an embodiment of the present application provides an electronic device, which includes modules/units for performing the method of the first aspect or any possible design of the first aspect; these modules/units may be implemented by hardware, or by hardware executing corresponding software.
According to a fourth aspect, an embodiment of the present application provides an electronic device, which includes modules/units for performing the method of the second aspect or any possible design of the second aspect; these modules/units may be implemented by hardware, or by hardware executing corresponding software.
According to a fifth aspect, an embodiment of the present application provides a media playback apparatus, which includes a memory, a processor, and a computer program stored in the memory; when the computer program is executed by the processor, the media playback apparatus performs the technical solution of the first aspect of the embodiments of the present application or any possible design of the first aspect.
In a possible design, the media playback apparatus is an electronic device, or the media playback apparatus is a chip.
According to a sixth aspect, an embodiment of the present application provides a media playback apparatus, which includes a memory, a processor, and a computer program stored in the memory; when the computer program is executed by the processor, the media playback apparatus performs the technical solution of the second aspect of the embodiments of the present application or any possible design of the second aspect.
In a possible design, the media playback apparatus is an electronic device, or the media playback apparatus is a chip.
According to a seventh aspect, an embodiment of the present application provides a computer-readable storage medium, which includes a computer program; when the computer program runs on an electronic device, the electronic device performs the technical solution of the first aspect or any possible design of the first aspect.
According to an eighth aspect, an embodiment of the present application provides a computer-readable storage medium, which includes a computer program; when the computer program runs on an electronic device, the electronic device performs the technical solution of the second aspect or any possible design of the second aspect.
According to a ninth aspect, an embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to perform the technical solution of the first aspect or any possible design of the first aspect.
According to a tenth aspect, an embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to perform the technical solution of the second aspect or any possible design of the second aspect.
For the beneficial effects of the third to tenth aspects, refer to the beneficial effects of the method parts; details are not repeated here.
Description of Drawings
FIG. 1 is a schematic structural diagram of a video file according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of video extension information according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a network architecture of an on-demand scenario according to an embodiment of the present application;
FIG. 4A is a schematic structural diagram of a video processing device according to an embodiment of the present application;
FIG. 4B is a schematic structural diagram of a video playback device according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an interface for video file integration according to an embodiment of the present application;
FIG. 6 is a schematic diagram of another interface for video file integration according to an embodiment of the present application;
FIG. 7 is a schematic diagram of another interface for video file integration according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a video file storage location interface according to an embodiment of the present application;
FIG. 9A is a schematic diagram of another interface for video file integration according to an embodiment of the present application;
FIG. 9B is a schematic diagram of another interface for video file integration according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a video playback interface according to an embodiment of the present application;
FIG. 11 is a schematic diagram of another video playback interface according to an embodiment of the present application;
FIG. 12 is a schematic diagram of another video playback interface according to an embodiment of the present application;
FIG. 13 is a schematic diagram of another interface according to an embodiment of the present application;
FIG. 14 is a schematic diagram of a network architecture of a live-broadcast scenario according to an embodiment of the present application;
FIG. 15A is a schematic diagram of a playback layout according to an embodiment of the present application;
FIG. 15B is a schematic diagram of another playback layout according to an embodiment of the present application;
FIG. 16 is a schematic diagram of another video playback interface according to an embodiment of the present application;
FIG. 17 is a schematic structural diagram of a video playback apparatus according to an embodiment of the present application.
Detailed Description of Embodiments
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings. In the description of the embodiments of the present application, the terms used in the following embodiments are only for the purpose of describing specific embodiments and are not intended to limit the present application. As used in the specification and the appended claims, the singular forms "a", "an", "the", "the above", "said", and "this" are intended to also include forms such as "one or more", unless the context clearly indicates otherwise. It should also be understood that in the following embodiments of the present application, "at least one" and "one or more" mean one, two, or more (including two). The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, both A and B exist, or B exists alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects.
Reference to "one embodiment", "some embodiments", or the like in this specification means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Therefore, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in other embodiments", and the like appearing in different places in this specification do not necessarily all refer to the same embodiment, but rather mean "one or more but not all embodiments", unless specifically emphasized otherwise. The terms "include", "comprise", "have", and their variants all mean "including but not limited to", unless specifically emphasized otherwise. The term "connected" includes both direct and indirect connections, unless otherwise specified. "First" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implying the number of indicated technical features.
In the embodiments of the present application, words such as "exemplarily" or "for example" are used to represent examples, illustrations, or explanations. Any embodiment or design described as "exemplarily" or "for example" in the embodiments of the present application should not be construed as being preferred or more advantageous than other embodiments or designs. Rather, the use of such words is intended to present the related concepts in a specific manner.
It should be noted that the words "first", "second", and the like in the embodiments of the present application are used only for the purpose of distinguishing descriptions, and cannot be understood as indicating or implying relative importance or order.
To solve the above technical problems, the present application provides a media playback method that enables an electronic device to simultaneously play multiple channels of media, for example first media and second media, based on one media file. When the media is video, this satisfies the user's need to watch multiple videos at the same time and helps improve the user experience.
It should be understood that in the embodiments of the present application, the media may be video, audio, images, animations, and the like, and the media files may be video files, audio files, image files, and the like. The media playback method of the embodiments of the present application is described below by taking a video file as an example; when the media file is an audio file or an image file, refer to the related description of the video file.
In some embodiments, video extension information is added to a video file so that the video file can support simultaneous playback of multiple channels of video, satisfying the user's need to watch multiple videos at the same time. The video extension information indicates an association relationship of at least two channels of video. For example, the video extension information of a video file indicates the association between the video of a real-time game and a highlight-replay video, and the video playback device can play both at the same time based on the video file.
Take as an example video extension information that indicates an association between a first channel of video and a second channel of video. Exemplarily, the video extension information includes extension information of the first channel of video and extension information of the second channel of video. The extension information of the first channel of video includes a master video identifier, which identifies the first channel of video as the master video. The extension information of the second channel of video includes a slave video identifier and an association identifier for slave video playback. The slave video identifier indicates that the second channel of video is a slave video; the association identifier for slave video playback, which may also be called a slave video playback position identifier, indicates that the slave video starts to play when the master video is played to a first playback position. The first playback position may be marked by a frame, a time, a ratio, or the like. For example, if the first playback position is a frame, the slave video may be set to start playing when the master video reaches the 10th frame; if it is a time, the slave video may be set to start playing when the master video reaches 1 minute 20 seconds; if it is a ratio, the slave video may be set to start playing when one third of the master video has been played.
It should be noted that the description above uses the association between the first and second video channels only as an example and does not limit the video extension information. In the embodiments of this application, the video extension information may indicate the association among more than two video channels. For example, there may be multiple slave videos, which may be called the first slave video, the second slave video, and so on. With multiple slave videos, only some of them may be directly associated with the master video, while the remaining slave videos are directly associated with those slave videos. Take, for example, video extension information that indicates the association among a first, a second, and a third video channel. The video extension information then includes extension information for each of the three channels. The extension information of the first channel includes the master video identifier; the extension information of the second channel includes a slave video identifier and the second slave video's playback association identifier; the extension information of the third channel includes a slave video identifier and the third slave video's playback association identifier. The second slave video's association identifier indicates that the second slave video starts playing when the master video reaches the first playback position. The third slave video's association identifier indicates that the third slave video starts playing when the second slave video reaches a second playback position. Understandably, the third slave video's association identifier could instead indicate that the third slave video starts playing when the master video reaches a third playback position. It should be noted that the first playback position and the third playback position may be the same or different; this is not limited.
Of course, when there are multiple slave videos, the video extension information may also indicate the association of each slave video with the master video directly. For example, the video extension information may indicate the association between the first and second channels and the association between the first and third channels.
Take video extension information that includes extension information for the first and second channels as an example, where the second channel's extension information includes a slave video identifier and a slave-video playback association identifier indicating that the slave video starts when the master video reaches the first playback position. Exemplarily, the association identifier may be a frame index of the master video (such as a sample index). That is, the second channel's extension information includes a master video frame index; when that index identifies the Nth frame of the master video, where N is a positive integer greater than or equal to 1, the slave video starts playing when the master video reaches its Nth frame. In other words, the Nth frame of the first channel plays simultaneously with the 1st frame of the second channel, the (N+1)th frame of the first channel with the 2nd frame of the second channel, the (N+2)th frame with the 3rd frame, and so on, until the second channel finishes playing. The second channel is then no longer played, and if the first channel has not yet finished, it continues to play alone.
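The frame alignment rule just described (master frame N with slave frame 1, N+1 with 2, and so on until the slave video ends) can be captured in a few lines. A minimal sketch with hypothetical names, using 1-based frame indices:

```python
def aligned_slave_frame(master_frame: int, n: int, slave_len: int):
    """Frame alignment described above: the slave video starts at master
    frame n, so master frame n shows slave frame 1, n+1 shows frame 2, etc.
    Returns None before the slave starts or after it has finished."""
    if master_frame < n:
        return None                       # slave has not started yet
    k = master_frame - n + 1              # 1-based slave frame index
    return k if k <= slave_len else None  # slave already finished
```

For a slave video of 5 frames starting at master frame 10, master frames 10 through 14 map to slave frames 1 through 5, and every other master frame maps to None.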
In the embodiments of this application, the slave-video playback association identifier may also use other information to mark the master video's playback position at which the slave video starts, for example the starting playback time of a certain frame in the master video, the starting playback time of a certain frame in the slave video, or a frame identifier of the slave video; this is not limited.
First, some terms involved in the embodiments of this application are explained to facilitate understanding by those skilled in the art.
1. Video files. In the embodiments of this application, video files are divided into two types based on whether they support simultaneous playback of at least two video channels: the first type and the second type. A first-type video file supports playing only one video channel; a second-type video file supports playing at least two channels simultaneously. Exemplarily, a first-type video file does not include video extension information; for its format, see the provisions on video files in existing protocols, which are not repeated here.
Exemplarily, a second-type video file includes video extension information. For example, a second-type video file in the embodiments of this application may include video metadata, video data, and audio data. The video metadata is description information for the video data and audio data, used to index into them. For instance, the video metadata includes the video extension information, which describes the association among the video channels. In other embodiments of this application, the video metadata may further include the video type (master or slave) and the like. The video data describes the images in a video and may be an image sequence; a given second-type video file may include video data for one or more channels. The audio data describes the sound in a video and can be understood as digitized sound data; likewise, a second-type video file may include audio data for one or more channels. For example, a second-type video file may include video metadata, video data of N channels, and audio data of M channels, where N and M are positive integers whose values may be the same or different. If N equals M, the audio data of the M channels corresponds to the video data of the N channels respectively. If they differ, say N is greater than 1 and M equals 1, the audio data in the file may correspond to the video data of one particular channel, such as the master video or one of the slave videos. In addition, for a single channel, taking the master video as an example, its audio data may include one or more audio tracks. For example, the master video's audio data may include two audio tracks, where one track describes the sound in one language (such as Chinese) and the other describes the sound in another language (such as English); the user can then choose whether to play the sound in Chinese or English while watching the master video.
Take MP4 as the video format of the second-type video file; in this case, the second-type video file may also be called an MP4 file. In some embodiments, as shown in Figure 1, an MP4 file includes ftyp (file type box), mdat (media data box), and moov (movie box). ftyp contains video file format information; here it identifies the file format as MP4. mdat stores the media data. It should be understood that both the video data and the audio data in the embodiments of this application may be called media data. In some embodiments, video data and audio data are each divided on a chunk-sample basis to facilitate lookup. In this case, mdat stores media data in units of chunks; a chunk may include one or more samples, and a sample is the smallest storage unit of media data (video data or audio data). moov contains the video metadata. Exemplarily, moov is a container box whose video metadata is interpreted by corresponding sub-boxes. For example, moov includes mvhd (movie header box) and at least one trak (track box). mvhd stores the MP4 file's creation time, modification time, duration, recommended playback rate, and similar information. A trak stores the description information of one stream of video data or audio data. Taking one trak as an example, in some embodiments the trak is a container box that may include tkhd (track header box) and mdia (media box). tkhd contains overall information about the media data (audio or video) of one channel, such as the track id and the video duration; the track id identifies one video channel. Exemplarily, mdia is a container box that may include mdhd (media header box), hdlr (handler reference box), and minf (media information box). mdhd defines the time scale and so on. hdlr contains information related to the playback process, such as the data type (audio, video, etc.). Exemplarily, minf is a container box that may include stbl (sample table box). stbl includes stsd (sample description box), stts (time to sample box), stsc (sample to chunk box), stsz (sample size box), stss (sync sample box), stco (chunk offset box), and so on. stsd contains the video's encoding type, width and height, length, audio channels, sampling, and other information. stts contains the sample timing mapping, such as the sample index and the time offset corresponding to each sample index. stsc contains the mapping between samples and chunks. stsz stores the size of each sample of the media data. stss may contain a list of randomly accessible sample indices, each indicating a key frame. In the embodiments of this application, a key frame can be understood as a video frame carrying full data, which can be independently decoded into one picture without reference to earlier video frames; a non-key frame can be understood as a video frame of incremental data, which must reference other frames to be decoded into a picture. Only key frames can be accessed randomly. stco defines the position of each chunk within the media data, for example by storing each chunk's offset. For details on ftyp, mdat, and moov, refer to the relevant descriptions in existing protocols, which are not elaborated here.
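The size/type box layout described above can be walked with a few lines of code. A minimal sketch that reads only the basic 32-bit box headers (it deliberately ignores 64-bit "largesize" and uuid boxes, which real ISO BMFF files may contain):

```python
import struct

def iter_boxes(data: bytes, offset: int = 0, end=None):
    """Walk the (size, type) box headers of an ISO BMFF (MP4) byte stream.
    Yields (box_type, payload_start, box_end) for each top-level box."""
    end = len(data) if end is None else end
    while offset + 8 <= end:
        size, = struct.unpack_from(">I", data, offset)   # 4-byte big-endian size
        box_type = data[offset + 4:offset + 8].decode("ascii")
        if size < 8:
            break                                        # malformed/unsupported size
        yield box_type, offset + 8, offset + size
        offset += size

# A toy stream containing an empty 'ftyp' box followed by an empty 'moov' box:
toy = struct.pack(">I4s", 8, b"ftyp") + struct.pack(">I4s", 8, b"moov")
print([t for t, _, _ in iter_boxes(toy)])   # ['ftyp', 'moov']
```

Container boxes such as moov or trak can be descended into by calling `iter_boxes` again on the payload range that the outer call yields.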
When the MP4 file is a second-type video file, further, in some embodiments, a field mmmw (multi media multi way) is added to the stbl of the trak that describes the video data, thereby adding the corresponding video extension information to the MP4 file. Exemplarily, an mmmw carries the extension information of one video channel, and the extension information it includes may differ depending on the video type. For example, when the video identified by the track id in a trak is the master video, the mmmw in the corresponding stbl may include the master video identifier; in some embodiments it may also include the track id of the audio data corresponding to the master video. As another example, when the video identified by the track id in a trak is a slave video, the mmmw in the corresponding stbl may include the slave video identifier and the slave-video playback association identifier, which marks the master video's playback position at which the slave video starts; for example, the association identifier may be a sample index. In some embodiments, the mmmw for a slave video may also include the track id of the audio data corresponding to the slave video.
Exemplarily, mmmw is a sub-box of stbl and can itself be understood as a box. Its structure may be as shown in Figure 2, including the following fields: box size, box type, version, flags, role, audio track id, and sample index. For example, box size may occupy 4 bytes, indicating the size of the box; box type may occupy 4 bytes, indicating the type of the box; version may occupy 1 byte, indicating the version of the box; flags may occupy 3 bytes, as flag bits for extending functions; role may occupy 4 bytes, identifying the video type (master or slave); audio track id may occupy 4 bytes, identifying the audio data; sample index may occupy 4 bytes, identifying the master video's playback position at which the slave video starts. The mmmw structure may be the same for the master video and the slave videos. In that case, for the master video, fields such as sample index and audio track id may carry a special marker or be empty, where the special marker indicates the field is invalid; for a slave video, the audio track id may likewise be a special marker or empty. Alternatively, the mmmw structure may differ between master and slave videos. In that case, for the master video, sample index and audio track id may be optional fields, and for a slave video, audio track id may be an optional field. For example, the master video's mmmw may omit the sample index or audio track id fields; or, as another example, a slave video's mmmw may omit the audio track id. It should be noted that Figure 2 is only one example of an mmmw structure and does not limit the embodiments of this application; for example, role could instead occupy two bytes, and so on.
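The field layout of Figure 2 can be illustrated by packing and unpacking a byte buffer. This is a sketch under stated assumptions: the field order and widths follow the description above, and the "special marker" for unused fields is assumed here to be 0xFFFFFFFF, a value this example invents rather than one fixed by the application:

```python
import struct

# Assumed binary layout of the mmmw box, following Figure 2's description:
# box size (4B) | box type (4B) | version (1B) | flags (3B) |
# role (4B) | audio track id (4B) | sample index (4B)
ROLE_MASTER, ROLE_SLAVE = 0, 1   # role encoding is hypothetical
INVALID = 0xFFFFFFFF             # assumed "special marker" for unused fields

def pack_mmmw(role, audio_track_id=INVALID, sample_index=INVALID):
    body = struct.pack(">B3sIII", 0, b"\x00\x00\x00",
                       role, audio_track_id, sample_index)
    return struct.pack(">I4s", 8 + len(body), b"mmmw") + body

def unpack_mmmw(buf):
    size, btype = struct.unpack_from(">I4s", buf, 0)
    version, flags, role, audio_track_id, sample_index = \
        struct.unpack_from(">B3sIII", buf, 8)
    return {"type": btype.decode(), "role": role,
            "audio_track_id": audio_track_id, "sample_index": sample_index}

# A slave-video mmmw: audio on track 3, starts at master sample index 250.
box = pack_mmmw(ROLE_SLAVE, audio_track_id=3, sample_index=250)
print(unpack_mmmw(box))
```

The total box size here is 24 bytes, matching the 4+4+1+3+4+4+4 field widths listed above.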
It should also be noted that the above uses mmmw as the name of the field carrying a video's extension information only as an example; the embodiments of this application do not limit that name. For example, the field carrying a video's extension information could also be called a video extension field, and so on.
2. Video playback devices. The video playback device in the embodiments of this application is an electronic device, such as a portable terminal: a mobile phone, tablet computer, notebook computer, and so on. In this case, the video playback device can play the corresponding video in response to an operation on a video application installed on the portable terminal. In other embodiments, the video playback device may also be a non-portable terminal, such as a smart screen, a desktop computer, or a television. Exemplarily, when the video playback device is a television, it can play the corresponding video in response to the user's operation of selecting a certain channel. Further, in some embodiments, the video playback device may also support integrating video files. Taking a mobile phone as the video playback device, the phone, in response to the user's operation on an installed application with a video integration function (such as a camera application, gallery application, or video application), integrates at least two first-type video files into one second-type video file and saves it locally and/or uploads it to a video server, so that the phone can then play at least two video channels simultaneously based on the second-type video file. The at least two first-type video files are pre-recorded video files, which may be stored on the video playback device or on a network disk or server (such as a video server); the embodiments of this application do not limit the storage location of the pre-recorded video files. Further, in some instances, after integrating the at least two first-type video files into one second-type video file, the phone saves that second-type file, for example locally and/or by uploading it to a server or network disk.
3. Video processing devices. The video processing device in the embodiments of this application is an electronic device, such as a portable terminal, for example a mobile phone, tablet computer, or notebook computer. Alternatively, the video processing device may also be a non-portable terminal, such as a smart screen, desktop computer, or television; this is not limited. It should be noted that when the video processing device is a portable or non-portable terminal, the video processing device and the video playback device may be the same device or different devices; this is not limited. Alternatively, the video processing device may also be a video server; the video server in the embodiments of this application may be a cloud server or a local server, which is not limited either. Exemplarily, the video processing device has a video integration function and supports integrating at least two first-type video files into one second-type video file.
The video playback method of the embodiments of this application can be applied to video-on-demand scenarios as well as live-broadcast scenarios. The video playback method is described below with reference to specific application scenarios.
Figure 3 shows a network architecture for an on-demand scenario, including a video server and a video playback device. The video server receives video acquisition requests from the video playback device and sends it the corresponding video files. The video playback device receives a user operation to play a certain video and, in response, sends a video acquisition request to the video server, receives the video file from the server, and plays it accordingly. Exemplarily, in response to the user's operation on a certain video option in an installed video application (such as Huawei Video), the playback device sends a video acquisition request to the video server, receives the video file the server sends in response, and plays the video based on that file.
When the video file the playback device receives from the video server is a second-type video file, the device can play multiple video channels based on it. In some embodiments, the device adapts the video playback layout according to the video extension information and plays the channels accordingly. Exemplarily, when the extension information indicates the association between two channels, the device plays the slave video in a small window and the master video in a large window according to the extension information. Take extension information covering a first and a second channel: it includes the first channel's extension information with the master video identifier, and the second channel's extension information with the slave video identifier and the slave-video playback association identifier, where the master video frame identifier marks the Nth frame of the master video. The playback device then displays a large window and plays the first channel in it; when the first channel reaches its Nth frame, it continues playing in the large window while a small window is displayed and plays the second channel. When the second channel finishes, the small window can be hidden, and if the first channel has not finished, the large window remains displayed and continues playing it. Before the first channel reaches its Nth frame, the small window need not be displayed. The layout of the large and small windows may be preset by the user or be the system default. For example, the small window may float over the large window in picture-in-picture form; in that case the user can move the small window as needed, or the small window's position may be fixed. As another example, the small and large windows may be tiled, i.e., the first and second channels are displayed in a split screen. It should be noted that when the small window floats over the large window, the large window's size need not change between before the first channel's Nth frame and at that frame. With a tiled layout, to fit the size of the playback device's display screen, the large window's size before the first channel's Nth frame may differ from its size at that frame, and may be larger before the Nth frame than at it; when the second channel finishes playing, the large window's size can be restored to its size before the Nth frame.
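The show/hide behavior of the small window can be sketched as a pure function over master frame numbers. An illustrative sketch, assuming 1-based frame indices and a slave video of slave_len frames starting at master frame n:

```python
def window_states(master_len: int, n: int, slave_len: int):
    """For each master frame (1-based), report whether the small
    picture-in-picture window is visible: it appears at master frame n
    and hides once the slave video (slave_len frames) has finished."""
    states = []
    for frame in range(1, master_len + 1):
        small_visible = n <= frame < n + slave_len
        states.append((frame, small_visible))
    return states

# Master of 8 frames, slave of 3 frames starting at master frame 4:
print([f for f, visible in window_states(8, 4, 3) if visible])   # [4, 5, 6]
```

A real player would drive this from the decode loop rather than precomputing a list, but the visibility condition is the same comparison against the slave's start position and length.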
As another example, when the video extension information indicates the association between the first and second channels and between the first and third channels, and the three channels play simultaneously, the playback device, according to the extension information, plays the first channel in a first window, the second channel in a second window, and the third channel in a third window. The first window is larger than the second and larger than the third; the second and third windows may or may not be the same size, which is not limited. The second and third windows may float over the first window, or the first, second, and third windows may be tiled, and so on.
Further, in some embodiments, the network architecture shown in Figure 3 may also include a video processing device, and the video file may be generated by the video processing device and uploaded to the video server. Exemplarily, the video processing device integrates at least two first-type video files into one second-type video file in response to the user's operation on an installed application with a video integration function (such as a camera application, gallery application, or video application). The video processing device and the video playback device may be the same device or different devices; this is not limited. When they are the same device, in other embodiments, after integrating the at least two first-type files into one second-type file, the device may also save the second-type file locally, i.e., in its own internal storage or in external storage connected to it; the playback device can then play the video in response to the user's operation of opening the local second-type video file. Alternatively, after the integration, the device may upload the second-type video file to a network disk or video server for safekeeping.
Of course, in other embodiments of the present application, when the video processing device is not a video server, the video file may also be generated by a video server. For example, in response to a user operation in an application with a video integration function installed on itself, the video processing device sends a video file integration request to the video server, where the request includes at least two first-type video files and the association relationships between the videos corresponding to those files. On receiving the request, the video server integrates the at least two first-type video files into one second-type video file according to the files and the association relationships carried in the request.
FIG. 4A shows a structure of a video processing device according to an embodiment of the present application, including an acquisition module 401A, an encoding module 402A, an association module 403A, and an encapsulation module 404A. Further, in some embodiments, the video processing device may also include an editing module 405A.
The acquisition module 401A is configured to acquire L first-type video files, where L is a positive integer greater than or equal to 2. These L first-type video files are pre-recorded video files and may be stored on the video processing device, on a network disk, or on a server, which is not limited here. For example, the acquisition module 401A receives a first operation in which the user selects a video file through an application with a video integration function, and in response acquires the first-type video file corresponding to the master video; it also receives a second operation in which the user selects a video file through that application, and in response acquires a first-type video file corresponding to a slave video. The value of L depends on the number of slave-video files the user selects. As another example, the acquisition module 401A receives a video file integration request and obtains the L first-type video files from it, where the L files include one first-type video file corresponding to the master video and L-1 first-type video files corresponding to slave videos.
The encoding module 402A is configured to encode and compress the videos corresponding to the above L first-type video files, obtaining L standard video streams.
The association module 403A is configured to generate video extension information indicating the association relationships between the videos corresponding to W of the first-type video files, where the W files are among the above L files, 2 ≤ W ≤ L, and W is a positive integer. For example, the association module 403A may generate the video extension information in response to the user completing the operation of establishing the association relationships between the videos corresponding to the W first-type video files. As another example, the association module 403A may generate the video extension information according to the association relationships of the videos corresponding to the L first-type video files carried in a video file integration request.
The encapsulation module 404A is configured to obtain one second-type video file from the W standard video streams and the video extension information.
The editing module 405A is configured to select the W first-type video files from the L first-type video files. For example, the editing module 405A selects the W files in response to a user operation of choosing W of the L first-type video files to establish a video association relationship.
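The module flow of FIG. 4A (acquire L first-type files, encode each, record the association for W of them, encapsulate into one second-type file) can be sketched as follows. All class and function names, and the byte-string stand-in for "encoding", are illustrative assumptions, not anything defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class SourceVideo:
    video_id: int   # identifies one first-type video file
    frames: list    # stand-in for raw frame data

@dataclass
class Association:
    slave_id: int      # slave video identifier
    master_frame: int  # master-video frame at which the slave starts

@dataclass
class SecondTypeFile:
    streams: dict         # video_id -> "encoded" stream
    extension_info: list  # Association records (the video extension info)

def encode(video: SourceVideo) -> bytes:
    # placeholder for real encoding/compression (e.g. H.264)
    return bytes(len(video.frames))

def integrate(master: SourceVideo, slaves: list, start_frames: dict) -> SecondTypeFile:
    streams = {master.video_id: encode(master)}
    extension = []
    for s in slaves:
        streams[s.video_id] = encode(s)
        extension.append(Association(s.video_id, start_frames[s.video_id]))
    return SecondTypeFile(streams, extension)

master = SourceVideo(1, [0] * 100)
slave = SourceVideo(2, [0] * 30)
f = integrate(master, [slave], {2: 40})
print(len(f.streams), f.extension_info[0].master_frame)  # 2 40
```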
FIG. 4B shows a structure of a video playback device according to an embodiment of the present application, including an acquisition module 401B, a decapsulation module 402B, a decoding module 403B, an association module 404B, and a playback module 405B.
The acquisition module 401B is configured to acquire a second-type video file. In some embodiments, the acquisition module 401B acquires the second-type video file in response to receiving a user operation that selects a video for playback.
The decapsulation module 402B is configured to decapsulate the second-type video file to obtain video metadata and the media data of multiple videos (for example, the video data of P videos and the audio data of Q videos, where P and Q are positive integers, P ≥ 2, 1 ≤ Q ≤ P, and the Q videos are one or more of the P videos). The audio data of one video may include the audio data of one or more audio tracks. For example, if a video has both a Chinese audio track and an English audio track, its audio data may include the audio data of the Chinese track and the audio data of the English track.
The decoding module 403B is configured to decode the media data of the multiple videos. Taking the media data being the video data of P videos and the audio data of Q videos as an example, the decoding module 403B decodes the video data of the P videos and the audio data of the Q videos. For instance, if the audio data of a video includes the audio data of a Chinese track and of an English track, the decoding module 403B may decode the audio data of the corresponding track according to the video's sound playback setting: if the setting is Chinese, the module decodes the Chinese track, so the video plays with Chinese audio. The sound setting may be configured by the user as needed or may be a default, which is not limited here.
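The track selection just described, decoding only the audio track that matches the sound playback setting, might be sketched as below. The dictionary layout and the fallback to the first available track are assumptions for illustration only.

```python
# Stand-in audio tracks keyed by language setting.
tracks = {"zh": b"chinese-track-bytes", "en": b"english-track-bytes"}

def select_track(setting: str, available: dict) -> bytes:
    # pick the track matching the playback setting; fall back to an
    # arbitrary track if the preferred language is absent
    return available.get(setting, next(iter(available.values())))

print(select_track("zh", tracks) == tracks["zh"])  # True
```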
The association module 404B is configured to obtain the frame association information of the multiple videos according to the video extension information in the video metadata.
The playback module 405B is configured to play the corresponding videos according to the frame association information of the multiple videos. For example, the playback module 405B determines the number of video playback windows from the frame association information, adapts the layout of the playback windows to that number, and then plays the videos in the corresponding windows. It should be noted that the layout corresponding to a given number of playback windows may be set by the user as needed or may be a system default, which is not limited here.
Taking the video processing device being a mobile phone and the application with the video integration function being a first application as an example, the following describes how an embodiment of the present application integrates at least two first-type video files into one second-type video file. It should be understood that the first application may be implemented by adding a video integration function to a native application such as the gallery application, a video application, or the camera application; alternatively, the first application may be a third-party application, which is not limited here. It should be noted that, in the embodiments of the present application, a third-party application may be understood as an application the user downloads to the phone from an application market or from the network according to their own needs.
As shown in FIG. 5, the mobile phone displays an interface 500 that includes an icon 501 identifying the first application. In response to the user tapping icon 501, the phone displays the interface of the first application, for example an interface 510 including video preview boxes 01, 02, and 03 and options 502, 503, and 504. Preview box 01 is used to preview the master video, preview box 02 to preview slave video 1, and preview box 03 to preview slave video 2. Option 502 corresponds to preview box 01 and is used to select the first-type video file of the master video; option 503 corresponds to preview box 02 and is used to select the first-type video file of slave video 1; option 504 corresponds to preview box 03 and is used to select the first-type video file of slave video 2. In some embodiments, the user may add or remove slave-video preview boxes on the interface of the first application as needed. It should be noted that the number of preview boxes on the interface of the first application is at least 1 and at most the maximum number of video files the first application supports for video integration. The maximum number of first-type video files used for integration may be predefined by the user according to their needs or preset by developers during program development, which is not limited here. For example, if the first application supports integrating at most 4 first-type video files into one second-type video file, its interface includes at most 4 preview boxes: one for previewing the master video and the other 3 for previewing slave videos.
For example, as shown in FIG. 6, in response to the user tapping option 502, an interface 600 is displayed. The interface 600 includes options for at least two first-type video files, such as options 601, 602, 603, and 604, each identifying one first-type video file. It should be noted that the video files identified by options 601, 602, 603, and 604 may be stored on the phone, on a network disk, or on a server, which is not limited here. Further, in response to the user tapping option 601, the phone returns to interface 510 and displays, in preview box 01, the video corresponding to the first-type video file identified by option 601.
As another example, as shown in FIG. 7, in response to the user tapping option 503, the phone displays interface 600. In response to the user tapping option 602, the phone returns to interface 510 and displays, in preview box 02, the video corresponding to the first-type video file identified by option 602. In some embodiments, interface 510 includes an option 701 used to set the playback position of the master video at which slave video 1 starts playing. For example, in response to the user tapping option 701, the phone displays a prompt box 710 prompting the user to set the master-video playback position at which slave video 1 starts. The prompt box 710 includes options 711, 712, 713, and 714: option 711 sets the specific playback position of the master video, option 712 sets the unit identifying that position (for example frames, seconds, hours, or minutes), option 713 cancels the setting, and option 714 confirms it. For example, in response to operations on options 711 and 712, the phone sets the master-video playback position at which slave video 1 starts to the Nth frame, and in response to the user tapping option 714, the phone returns to interface 510. It can be understood that the unit identifying the master-video playback position may also be another unit such as milliseconds, which is not limited here. If the user sets, through options 711 and 712, the Nth frame of the master video as the position at which slave video 1 starts playing, the phone generates the association relationship between slave video 1 and the master video according to those settings. It should be understood that setting the association between slave video 2 and the master video is similar to setting the association between slave video 1 and the master video: for example, the user may select the first-type video file of slave video 2 through option 504 and set the master-video playback position at which slave video 2 starts through the association setting option corresponding to slave video 2. It should be noted that, in the embodiments of the present application, an association setting option corresponding to a slave video, such as option 701, may be displayed after the user selects the video file of that slave video, or may be displayed on interface 510 before the user has selected it, which is not limited here.
Further, in some embodiments of the present application, in response to the user tapping option 502, the phone displays a video-file storage location interface that includes at least one storage location option. For example, this interface may be the interface 800 shown in FIG. 8, which includes options 801, 802, and 803, where option 801 indicates that the storage location is the gallery application and option 802 indicates that the storage location is a cloud disk. In response to the user tapping option 801 or option 802, the phone displays interface 600, which makes it convenient for the user to select a video file.
In addition, as another example, after the user completes the settings on the interface of the first application, as shown in FIG. 9A, in response to the user tapping option 900, the phone integrates the first-type video file of the master video, the first-type video file of slave video 1, and the first-type video file of slave video 2 into one second-type video file. This second-type video file includes video extension information indicating the association relationship between the master video and slave video 1 and the association relationship between the master video and slave video 2. As another example, after the user completes the settings, as shown in FIG. 9B, in response to the user tapping option 900, the phone integrates the first-type video files of the master video and of slave video 1 into one second-type video file whose video extension information indicates the association relationship between the master video and slave video 1. As yet another example, after the user completes the settings, as shown in FIG. 9A, the user has selected the first-type video file of slave video 2 but has not set the master-video playback position at which slave video 2 starts playing; in response to the user tapping option 900, the phone therefore integrates only the first-type video files of the master video and of slave video 1 into one second-type video file whose video extension information indicates the association relationship between the master video and slave video 1.
Taking the association relationship between the master video and slave video 1 as an example, this relationship may be used to indicate the playback position of the master video at which slave video 1 starts playing. For example, when the video extension information indicates this relationship, it may include extension information of the master video and extension information of slave video 1: the master video's extension information includes a master video identifier, and slave video 1's extension information includes a slave video identifier and a master video frame identifier, where the master video frame identifier indicates the playback position of the master video at which slave video 1 starts playing.
It should be noted that the option 900 shown in FIG. 9A or FIG. 9B may be presented on interface 510 after the user has selected, on that interface, the first-type video file of a master video and the first-type video file of a slave video, or it may be presented when interface 510 is displayed in response to the user tapping icon 501, which is not limited here.
In addition, in some embodiments, after the phone obtains the second-type video file, it uploads the file to a network disk or server, or saves it locally.
The above takes integrating at least two first-type video files into one second-type video file on the phone as an example. It should be noted that, in the embodiments of the present application, the integration step may also be performed by a server (such as a video server). For example, after the user completes the settings on the interface of the first application, as shown in FIG. 9A, in response to the user tapping option 900, the phone sends a video file integration request to the server, where the request includes the first-type video files of the master video, of slave video 1, and of slave video 2, together with the master-video playback position information at which slave video 1 starts and the master-video playback position information at which slave video 2 starts. On receiving the request, the server integrates the three first-type video files into one second-type video file according to the two pieces of playback position information.
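One possible shape for the integration request described above is sketched below. The field names and the completeness check are illustrative assumptions only; the patent defines no wire format for the request.

```python
# Hypothetical integration request: master file, slave files, and the
# master-video start position (in frames) for each slave.
request = {
    "master_file": "master.mp4",
    "slave_files": ["slave1.mp4", "slave2.mp4"],
    "start_positions": {"slave1.mp4": 40, "slave2.mp4": 90},
}

def complete(req: dict) -> bool:
    # a slave with no start position is omitted from the integration,
    # so a server might first check that every slave has one
    return set(req["slave_files"]) == set(req["start_positions"])

print(complete(request))  # True
```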
It should be understood that the above is merely one example of how the user selects the first-type video files of the master video and the slave videos and sets the master-video playback position at which a slave video starts playing; it does not constitute a limitation on the embodiments of the present application, which do not restrict the manner of making these selections and settings.
Taking the video playback device being a mobile phone as an example, the following describes, with reference to specific scenarios, how the phone plays a second-type video file.
As shown in FIG. 10, the phone displays an interface 1000 that includes an icon 1001 identifying the gallery application. In response to the user tapping icon 1001, the phone displays the interface of the gallery application, for example an interface 1010 including an option 1011 that identifies a video file. In response to the user tapping option 1011, the phone plays the corresponding video according to the video file identified by option 1011. In some embodiments, in response to that tap, the phone determines whether the video file identified by option 1011 is a second-type video file. For example, the phone may do so by checking whether the file includes video extension information: if the file does not include video extension information, it is a first-type video file; if it does, it is a second-type video file. Taking the MP4 video file format as an example, the phone may check whether the video file identified by option 1011 includes an mmmw box. If the file includes mmmw, it is a second-type video file; if it does not, it is a first-type video file.
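The mmmw check just described can be sketched by walking the top-level boxes of an MP4 (ISO BMFF) buffer. The box walk below is standard ISO BMFF parsing and, for simplicity, only inspects top-level boxes; "mmmw" is the extension box type named in this embodiment, and the rest of the code is an illustrative assumption.

```python
import struct

def has_mmmw(data: bytes) -> bool:
    pos = 0
    while pos + 8 <= len(data):
        # each box starts with a 32-bit size and a 4-byte type
        size, box_type = struct.unpack(">I4s", data[pos:pos + 8])
        if box_type == b"mmmw":
            return True
        if size == 1:
            # a 64-bit largesize follows the 8-byte header
            size = struct.unpack(">Q", data[pos + 8:pos + 16])[0]
        elif size == 0:
            # box extends to the end of the file
            break
        pos += size
    return False

plain = struct.pack(">I4s", 8, b"ftyp")
extended = plain + struct.pack(">I4s", 8, b"mmmw")
print(has_mmmw(plain), has_mmmw(extended))  # False True
```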
If the video file identified by option 1011 is a first-type video file, the phone plays the video according to that file; for the specific playback manner, reference may be made to the existing manner in which a phone plays a video from a first-type video file, which is not described again here.
If the video file identified by option 1011 is a second-type video file, the phone plays at least two videos according to it. Take as an example the case where the video extension information in the second-type video file identified by option 1011 indicates the association relationship between the first video and the second video. For example, the video extension information includes extension information of the first video and extension information of the second video, where the first video's extension information includes a master video identifier and the second video's extension information includes a slave video identifier and a master video frame identifier that identifies the Nth frame of the master video. In response to the user tapping option 1011, the phone plays the first video in a window 1021; when the first video reaches its Nth frame in window 1021, the phone starts playing the second video in a window 1022. For example, as shown in FIG. 11, when the first video plays its Mth frame, the second video plays its last frame; when the first video plays its (M+1)th frame, playback of the second video has ended and window 1022 is hidden, and if the first video has not yet finished, it continues to play in window 1021.
For example, the phone generates video association information according to the video extension information in the second-type video file identified by option 1011, and then plays the corresponding videos according to the video association information. The video association information may be as shown in Table 1.
Table 1
Master video frame number | Slave video number
1.1 | --
1.2 | --
... | --
1.N | 2
... | --
Taking the video file format of the second-type video file identified by option 1011 being MP4 as an example, the video association information may also be as shown in Table 2.
Table 2
sample index | DTS | PTS | sample size | offset | sub track id
1.1 | T11 | T12 | Z1 | F1 | --
1.2 | T21 | T22 | Z2 | F2 | --
... | ... | ... | ... | ... | --
1.N | TN1 | TN2 | ZN | FN | 2
... | ... | ... | ... | ... | --
Here, sample index indicates the main video frame number, DTS indicates the decoding time of the main video frame, PTS indicates the display time of the main video frame, sample size indicates the frame size of the main video, offset indicates the position of the corresponding main video frame in the video file, and sub track id identifies the slave video. For example, as shown in Table 2, when the sample index is 1.N, the sub track id is 2; that is, when the main video reaches the Nth frame, the slave video with sub track id 2 starts to play.
For example, the slave video with sub track id 2 can be associated with the Nth frame of the main video by setting the display start time of that slave video to the display start time of the Nth frame of the main video, and by mapping the display time of the slave video and the display time of the main video onto the same timeline.
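The sample-table lookup described around Table 2 can be sketched as follows. This is a minimal illustrative sketch, not a real MP4-parsing API; the field values mirror the placeholders in Table 2 and the concrete numbers are assumptions.

```python
# Hypothetical per-frame sample table in the spirit of Table 2.
# Each entry: (sample_index, dts, pts, sample_size, offset, sub_track_id);
# sub_track_id is None when no slave video is associated with the frame.
SAMPLE_TABLE = [
    ("1.1", 0,   0,   4096, 0,     None),
    ("1.2", 33,  33,  4096, 4096,  None),
    ("1.N", 330, 330, 4096, 40960, 2),   # frame N triggers slave track 2
]

def slave_track_for_frame(sample_index):
    """Return the sub track id that should start playing when this
    main-video frame is displayed, or None if no slave video is linked."""
    for idx, dts, pts, size, offset, sub_track in SAMPLE_TABLE:
        if idx == sample_index:
            return sub_track
    return None

assert slave_track_for_frame("1.N") == 2    # main video at frame N: start slave 2
assert slave_track_for_frame("1.1") is None # ordinary frame: no slave video
```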
In some embodiments, the mobile phone determines the layout of the video playback windows according to the video extension information. For example, the phone determines, according to the video extension information, the number of channels of video to be played; it then determines the number of video playback windows from the number of channels, and finally determines the layout corresponding to that number of windows. The number of video playback windows equals the number of channels of video to be played. For example, before the first channel of video reaches the Nth frame, only one channel needs to be played, so the number of video playback windows is 1, and that window plays the first channel of video. When the first channel of video reaches the Nth frame, two channels need to be played, namely the first channel and the second channel, so the number of video playback windows is 2. The layout corresponding to two video playback windows may be preset by the user or be the default of the gallery application. For example, the two windows are window 1021 and window 1022, laid out picture-in-picture as shown in FIG. 10, with window 1021 playing the first channel of video and window 1022 playing the second channel. Alternatively, with two video playback windows the layout may be tiled, as shown in FIG. 12, with window 1201 playing the first channel of video and window 1202 playing the second channel, window 1202 being located below window 1201. It should be noted that FIG. 12 shows only one tiled layout of two video playback windows; in this embodiment of the present application, windows 1201 and 1202 may also be laid out side by side, which is not limited.
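The window-count and layout determination described above can be sketched as follows. This is an illustrative sketch under stated assumptions: the function name and layout names are invented for the example, and the extension information is modeled as a simple list with one entry per channel.

```python
# Hypothetical sketch: one playback window per video channel, with the
# multi-window layout taken from a user preset (or an assumed default).

def window_layout(extension_info, preferred="picture_in_picture"):
    """extension_info: list of per-channel extension dicts.
    Returns (window_count, layout_name); layout names are illustrative."""
    channels = len(extension_info)   # window count equals channel count
    if channels <= 1:
        return 1, "single"
    return channels, preferred       # e.g. picture-in-picture or tiled

# Before frame N: only the main channel plays.
assert window_layout([{"role": "main"}]) == (1, "single")
# From frame N on: main + slave, laid out per the preset.
assert window_layout([{"role": "main"}, {"role": "slave"}]) == (2, "picture_in_picture")
assert window_layout([{"role": "main"}, {"role": "slave"}], preferred="tiled") == (2, "tiled")
```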
In another example, when the mobile phone plays the first channel of video in window 1021 and the second channel in window 1022, and the second-type video file identified by option 1011 includes both the audio data of the first channel of video and the audio data of the second channel, the phone may by default play the sound according to the audio data of the main video, or it may by default play the sound according to the audio data of the slave video. For example, if the phone defaults to the audio data of the main video, then while playing the first channel in window 1021 and the second channel in window 1022 it plays the sound of the first channel, and the second channel plays muted. As another example, if the phone defaults to the audio data of the slave video, it plays the sound of the second channel, and the first channel plays muted.
Further, when the second-type video file identified by option 1011 includes the audio data of both the first and second channels of video, and the phone plays the first channel in window 1021 and the second channel in window 1022, the video playback interface may also include a sound option for the first channel and a sound option for the second channel. In response to the user operating the sound option of the first channel, the phone plays the sound of the first channel; in response to the user operating the sound option of the second channel, the phone plays the sound of the second channel. This facilitates user interaction with the device.
For example, when the second-type video file identified by option 1011 includes only the audio data of the first channel of video, the phone, while playing the first channel in window 1021 and the second channel in window 1022, plays the sound of the first channel according to its audio data. Alternatively, when the file includes only the audio data of the second channel, the phone plays the sound of the second channel according to its audio data.
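The audio-selection rules above can be sketched as follows. This is a minimal sketch, assuming each channel is modeled as a dict whose "audio" entry is the encoded audio data or None; the function name and the `prefer_main` default policy are assumptions, not part of the original description.

```python
# Hypothetical sketch: choose which channel's audio data to render
# when two channels of video play simultaneously.

def pick_audio(main, slave, prefer_main=True):
    """Return 'main', 'slave', or None depending on which audio to play."""
    if main.get("audio") and slave.get("audio"):
        # Both channels carry audio: fall back to the default policy.
        return "main" if prefer_main else "slave"
    if main.get("audio"):
        return "main"    # only the main video carries audio
    if slave.get("audio"):
        return "slave"   # only the slave video carries audio
    return None          # neither channel carries audio

assert pick_audio({"audio": b"a"}, {"audio": b"b"}) == "main"
assert pick_audio({"audio": b"a"}, {"audio": b"b"}, prefer_main=False) == "slave"
assert pick_audio({"audio": None}, {"audio": b"b"}) == "slave"
```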
As shown in FIG. 13, the mobile phone displays interface 1300, whose desktop includes icon 1301. Icon 1301 identifies a video application. In response to the user tapping icon 1301, the phone displays the interface of the video application. For example, the interface of the video application may be interface 1310, which includes video option 1311. Video option 1311 is used to play the video named "Developer Conference". In response to the user tapping video option 1311, the phone sends a video acquisition request to the video server, requesting the video file corresponding to the video named "Developer Conference". The video server receives the request from the phone and returns that video file. The phone receives the video file from the video server and plays video according to it. For the specific manner in which the phone plays video according to this video file, refer to the manner described above in which the phone plays video according to the video file identified by option 1011, which is not repeated here.
The above is only an example of triggering a mobile phone to play video according to a video file when the video playback device is a mobile phone; in the embodiments of this application, the phone may also be triggered to play video according to a video file in other ways, which is not limited. For example, when the video playback device is a television, it may respond to the user pressing a channel-selection button on the remote control by triggering video playback according to the video file from the server of the corresponding channel.
In addition, in a live-broadcast scenario, video extension information may likewise be added to the video metadata, so that the video playback device can play at least two channels of video at the same time, thereby meeting users' multi-angle viewing needs in live broadcasts.
For example, FIG. 14 shows a network architecture for a live-broadcast scenario, including at least one camera (such as camera 1 and camera 2), a broadcast directing station, a stream-pushing server, and a video playback device.
The cameras capture video in real time; the network architecture of the live-broadcast scenario includes at least two cameras, and different cameras may capture video from different viewing angles. For example, when a game is broadcast live, camera 1 may capture the real-time game footage while camera 2 captures the real-time commentary footage, with camera 1 on camera position 1 and camera 2 on camera position 2. The directing station receives the video captured by the cameras, performs encoding and other processing on it to obtain the corresponding video metadata, video data, and audio data, and uploads them to the stream-pushing server, which delivers them to the corresponding video playback device. The video playback device decodes the video data and audio data according to the video metadata and plays the corresponding video.
In some embodiments, the directing station encodes and otherwise processes the video from the cameras according to the user's settings. For example, if the user sets the live broadcast to carry only the video from camera 1 on one camera position (for example, camera position 1), the directing station generates video metadata according to that setting; this metadata indicates the encoding mode and the like of camera 1 on camera position 1 (for details, see the description of video metadata in existing protocols). The directing station also encodes the video from camera 1 on camera position 1 to obtain video data and audio data, and uploads the video metadata, video data, and audio data to the stream-pushing server. If the user's setting does not change, the directing station subsequently only needs to encode the video from camera 1 on camera position 1, obtain the video data and audio data, and upload them to the stream-pushing server, without regenerating the video metadata.
For the video playback device, after receiving the video metadata it determines whether the metadata includes video extension information. If not, the device determines that the number of video playback windows needed is 1, decodes the video data and audio data according to the video metadata, and after decoding plays the corresponding video according to the layout corresponding to one video playback window. The layout corresponding to one window may or may not be full screen, and may be set by the user or be a default, which is not limited.
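The receive-side decision above can be sketched as follows. This is an illustrative sketch only: the metadata is modeled as a dict with an assumed optional "extension" key listing per-channel extension information, which is not a real streaming-protocol field name.

```python
# Hypothetical sketch: a player falls back to single-window playback
# when the received metadata carries no video extension information.

def playback_plan(metadata):
    """metadata: dict of received stream metadata.
    Returns the number of channels to decode and windows to lay out."""
    ext = metadata.get("extension")
    if not ext:
        # No extension information: an ordinary single-channel stream.
        return {"windows": 1, "channels": 1}
    # One window per channel described in the extension information.
    return {"windows": len(ext), "channels": len(ext)}

assert playback_plan({"codec": "h264"}) == {"windows": 1, "channels": 1}
assert playback_plan({"codec": "h264", "extension": [{}, {}]}) == {"windows": 2, "channels": 2}
```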
In some embodiments, in response to the user's settings, the directing station switches from broadcasting one channel of live video to broadcasting two or more channels. Take updating a one-channel live broadcast to a two-channel live broadcast as an example: one channel is the video from the camera on camera position 1, and the other is a slow-replay video derived from a segment of the video captured by that camera. The directing station updates the video metadata to include video extension information, and uploads the updated metadata together with the video data and audio data of the two channels to the stream-pushing server, which delivers them to the video playback device. The video playback device can then play the two channels of video according to the updated video metadata.
Specifically, the updated video extension information indicates the association between the slow-replay video and the video from camera 1 on camera position 1. For example, the video extension information includes extension information of the video from camera 1 on camera position 1 and extension information of the slow-replay video; the former includes a main video identifier, and the latter includes a slave video identifier and a main video frame identifier, where the main video frame identifier identifies the playback position of the video from the camera on camera position 1 at which the slow-replay video starts to play.
For example, after receiving the video metadata, audio data, and video data, the video playback device may determine whether the video metadata includes video extension information. If it does, the device determines, according to the video extension information, the number of channels of video to be broadcast live, and from that the number of video playback windows; the number of windows is the same as the number of channels. The device then determines the layout corresponding to that number of windows and plays the two channels of video according to it. The layout corresponding to a given number of windows may be preset by the user on the video playback device or be a default, which is not limited. For example, when the number of video playback windows is 2, the corresponding layout may be as shown in FIG. 15A, with the main video played in window 1501A and the slave video in window 1502A; or, when the number of windows is 2, the layout may be as shown in FIG. 15B, with the main video played in window 1501B and the slave video in window 1502B.
Further, when playback of the slave video (that is, the slow-replay video described above) ends, the directing station updates the video metadata again and uploads the re-updated metadata to the stream-pushing server, which sends it to the video playback device. On receiving the re-updated video metadata, the video playback device determines whether it includes video extension information; if not, the device determines that only one channel of video is to be played, and plays the corresponding video using the layout corresponding to one video playback window.
It should be understood that the above description takes as an example two live channels, one being the video from camera 1 on camera position 1 and the other the slow-replay video. It can be understood that the two live channels may instead be one channel from camera 1 on camera position 1 and one channel from camera 2 on camera position 2, which is not limited. Of course, the embodiments of this application are not limited to broadcasting two channels live; three or more channels may also be broadcast at the same time. For three or more channels, the difference from the two-channel case is that the video extension information indicates the association among the three or more channels; for the rest, refer to the description of the two-channel case, which is not repeated here.
It should be noted that the above embodiments are described by taking the integration of at least two first-type video files into one second-type video file as an example. The method of the embodiments of this application is also applicable to integrating at least one first-type video file and one second-type video file into a new second-type video file. In this case, the main video of the second-type video file used in the integration becomes the main video of the new integrated second-type video file, and the video of the first-type video file used in the integration becomes a slave video of the new integrated second-type video file. For the specific implementation, refer to the description of integrating at least two first-type video files into one second-type video file, which is not repeated here.
In addition, in the embodiments of this application, an audio file and a video file may also be integrated into one media file, or at least two audio files may be integrated into one media file, or an audio file and an image file may be integrated into one media file. Taking the integration of two audio files into one media file as an example, the electronic device can play the integrated media file according to the media extension information. For example, the media file includes the audio data of the first channel of audio, the audio data of the second channel of audio, and audio extension information. The audio extension information includes extension information of the first channel of audio and extension information of the second channel of audio; the extension information of the first channel includes a main audio identifier, and the extension information of the second channel includes a slave audio identifier and a slave-audio playback position identifier, which indicates the playback position of the first channel of audio at which the second channel of audio starts to play. Suppose the slave-audio playback position identifier indicates that the second channel of audio starts to play when the first channel has played to time T. In this case, the electronic device may play the first channel of audio through the speaker and, at time T, play the second channel through the speaker while continuing to play the first channel. In some embodiments, when a headset is connected, the electronic device may play the first channel of audio through the headset and, when the first channel reaches time T, play the second channel through the speaker while continuing to play the first channel through the headset. Alternatively, the device may play the first channel through the headset and, at time T, play the second channel through the headset as well, while continuing to play the first channel through the headset. Or the device may play the first channel through the speaker and, at time T, play the second channel through the headset while continuing to play the first channel through the speaker.
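The speaker/headset routing alternatives above can be sketched as follows. This is a minimal sketch; the policy names are invented labels for the alternatives described in the text, not real device APIs.

```python
# Hypothetical sketch: output routing for two integrated audio channels
# once the second channel starts at time T.

def route_audio(headset_connected, policy="both_speaker"):
    """Return (first_channel_output, second_channel_output)."""
    if not headset_connected:
        return ("speaker", "speaker")          # both channels on the speaker
    routes = {
        "both_speaker":  ("speaker", "speaker"),
        "main_headset":  ("headset", "speaker"),  # first on headset, second on speaker
        "both_headset":  ("headset", "headset"),
        "slave_headset": ("speaker", "headset"),  # first on speaker, second on headset
    }
    return routes[policy]

assert route_audio(False) == ("speaker", "speaker")
assert route_audio(True, "main_headset") == ("headset", "speaker")
assert route_audio(True, "slave_headset") == ("speaker", "headset")
```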
It should be understood that the above embodiments may be used alone or in combination with each other to achieve different technical effects, which is not limited.
In addition, the above embodiments may be extended to playback scenarios involving different media files, so that while playing one media file the electronic device can automatically bring up another media file for playback. Take a first media file and a second media file as an example.
For example, the first media file is associated with first media file extension information, which includes a first media file identifier used to identify the first media file. The first media file extension information is associated with second media file extension information, which includes a second media file identifier and a second media file playback position identifier. The second media file identifier identifies the second media file, and the second media file playback position identifier identifies first playback position information of the first media at which the second media starts to play; the first playback position information includes a first position.
For example, associating the first media file with the first media file extension information, and the first media file extension information with the second media file extension information, may be implemented by adding both pieces of extension information to the first media file, or by establishing a corresponding association relationship, which is not limited.
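The association structure above, with the extension information embedded in the first media file, can be sketched as follows. All field names here are illustrative assumptions; the playback position is modeled simply as seconds into the first media file.

```python
# Hypothetical sketch of the embedded extension-information variant:
# the first media file carries its own identifier plus the identifiers
# and start positions of the media files linked to it.

first_media_file = {
    "payload": b"...encoded media...",
    "extension": {
        "media_id": "file_1",              # first media file identifier
        "linked": [
            {"media_id": "file_2",         # second media file identifier
             "start_at": 12.0},            # first position (seconds into file_1)
        ],
    },
}

def files_to_start(position, media):
    """Linked media files whose playback should have begun by this
    playback position of the first media file."""
    return [l["media_id"] for l in media["extension"]["linked"]
            if l["start_at"] <= position]

assert files_to_start(5.0, first_media_file) == []
assert files_to_start(12.0, first_media_file) == ["file_2"]
```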
Take the case where the first playback position information includes the first position. In response to an operation of playing the first media file, the electronic device plays the first media file, and when playback reaches the first position, the device starts playing the second media file while continuing to play the first; that is, from the first position onward, the device plays the first and second media files simultaneously. The operation of playing the first media file may be, for example, an operation by the user on the first media file, an operation on a control indicating the first media file, a shortcut operation, a voice command, or the like, which is not limited.
For example, the first and second media files may both be video files. In this case, when they play simultaneously, they may be played in a split screen: the electronic device may split the display into a first screen and a second screen, playing the first media file on the first screen and the second media file on the second screen. Alternatively, when playback reaches the first position, the device displays a window on the screen showing the picture of the second media file; this window floats above the picture of the first media file, that is, the window playing the picture of the second media file floats above the window playing the picture of the first media file.
In another example, the first media file is a video file and the second media file is an audio file. In this case, when the two play simultaneously, the electronic device plays the first media file muted on the display and plays the second media file through the speaker. In some embodiments, if a headset is connected, the device may play the second media file through the headset and play the first media file unmuted through the speaker, that is, output the sound of the first media file through the speaker.
In another example, the first media file is an audio file and the second media file is a video file. In this case, when the two play simultaneously, the electronic device plays the first media file through the speaker and plays the second media file muted on the display. In some embodiments, if a headset is connected, the device may play the second media file unmuted through the headset, that is, output the sound of the second media file through the headset.
Or, in another example, both the first and second media files are audio files. When they play simultaneously and a headset is connected, the electronic device may play the first media file through the speaker and the second through the headset, or the first through the headset and the second through the speaker. In some embodiments, when the two play simultaneously and no headset is connected, the device may play both media files through the speaker at the same time.
In some embodiments, the first media file extension information may also be associated with third media file extension information, which includes a third media file identifier and a third media file playback position identifier. The third media file identifier identifies the third media file, and the third media file playback position identifier identifies second playback position information of the first media at which the third media starts to play. Take the case where the second playback position information includes a second position: when the first media file reaches the second position, the electronic device starts playing the third media file while continuing to play the first. That is, the embodiments of this application do not limit the number of pieces of media file extension information associated with the first media file extension information.
For example, the first media file may include the first media file extension information, the second media file extension information, and the third media file extension information.
For example, as shown in FIG. 16, the electronic device displays an interface 1600 that includes an option 1601 and an option 1602, where option 1601 identifies video file 1 and option 1602 identifies video file 2. Video file 1 and video file 2 may be stored locally or on the server side; this is not limited. Video file 1 is associated with first video file extension information, which in turn is associated with second video file extension information. The first video file extension information includes a first video file identifier used to identify video file 1. The second video file extension information includes a second video file identifier used to identify video file 2 and a second video file playback position identifier used to identify time T. In this case, in response to the user tapping option 1601, the electronic device plays video file 1 in a window 1611; for example, when video file 1 is played to time T-1, window 1611 displays the picture of video file 1 corresponding to that time. When video file 1 is played to time T, the electronic device displays a window 1612, plays video file 2 in window 1612, and continues to play video file 1 in window 1611. Further, in some embodiments, as shown in FIG. 16, when video file 1 is played to time M, at which video file 2 has finished playing but video file 1 has not, the electronic device continues to play video file 1 in window 1611.
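The trigger behavior in this example, starting video file 2 at time T while video file 1 keeps playing, can be sketched as a position check performed as playback of the first file advances. The function and trigger names are illustrative assumptions, not part of the application.

```python
def files_to_start(current_pos: float, last_pos: float, triggers: dict) -> list:
    """Return the linked files whose trigger position was crossed as the
    first file advanced from last_pos to current_pos.

    `triggers` maps a linked file id to the playback position of the
    first file at which that linked file should start, e.g. {"video2": T}.
    Starting a linked file never pauses or stops the first file; the
    caller simply opens an additional playback window/output for each
    id returned here.
    """
    return [fid for fid, pos in triggers.items() if last_pos < pos <= current_pos]
```

For instance, advancing from time T-0.1 to time T=10.0 with triggers `{"video2": 10.0, "video3": 25.0}` yields `["video2"]`, while later ticks before time 25.0 yield an empty list, matching the behavior at times T and M in FIG. 16.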
It should be understood that the above embodiments may be used alone or in combination with one another to achieve different technical effects; this is not limited here.
In the embodiments provided above, the methods of the embodiments of this application are described from the perspective of the electronic device as the execution entity. To implement the functions in these methods, the electronic device may include a hardware structure and/or software modules, and implement the functions in the form of a hardware structure, software modules, or a combination of the two. Whether a given function is performed by a hardware structure, by software modules, or by a combination of both depends on the specific application and design constraints of the technical solution.
An embodiment of this application further provides a video playback apparatus. As shown in FIG. 17, the apparatus includes one or more processors 1701 and one or more memories 1702. The memory 1702 stores one or more computer programs that, when executed by the processor 1701, cause the video playback apparatus to perform the video playback method provided by the embodiments of this application.
Further, in some embodiments, the video playback apparatus may also include a transceiver 1703 for communicating with other devices through a transmission medium, so that the video playback apparatus can communicate with those devices. Illustratively, the transceiver 1703 may be a communication interface, a circuit, a bus, a module, or the like, and the other devices may be terminals, servers, and so on. For example, the transceiver 1703 may be used to send a video acquisition request to a video server, receive a video file, and the like.
In other embodiments, the video playback apparatus may also include a display screen 1704 for displaying the video to be played. Illustratively, the display screen 1704 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like.
In addition, the video playback apparatus in the embodiments of this application may also include a speaker, a touch sensor, and the like; this is not limited.
The embodiments of this application do not limit the connection medium between the processor 1701, the memory 1702, the transceiver 1703, and the display screen 1704. For example, in the embodiments of this application they may be connected through a bus, and the bus may be divided into an address bus, a data bus, a control bus, and so on.
In the embodiments of this application, the processor may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logical block diagrams disclosed in the embodiments of this application. A general-purpose processor may be a microprocessor, any conventional processor, or the like. The steps of the methods disclosed in connection with the embodiments of this application may be carried out directly by a hardware processor, or by a combination of hardware and software modules in the processor.
In the embodiments of this application, the memory may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or a volatile memory, such as random-access memory (RAM). The memory is any medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory in the embodiments of this application may also be a circuit or any other apparatus capable of implementing a storage function, for storing program instructions and/or data.
As used in the above embodiments, depending on the context, the term "when" or "after" may be interpreted to mean "if", "after", "in response to determining", or "in response to detecting". Similarly, depending on the context, the phrase "upon determining" or "if (a stated condition or event) is detected" may be interpreted to mean "if it is determined", "in response to determining", "upon detecting (the stated condition or event)", or "in response to detecting (the stated condition or event)".
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a server or data center, integrating one or more available media. The available media may be magnetic media (such as floppy disks, hard disks, or magnetic tapes), optical media (such as DVDs), or semiconductor media (such as solid-state drives (SSDs)). Where no conflict arises, the solutions of the above embodiments may be used in combination.
It should be noted that a portion of this patent application contains material subject to copyright protection. The copyright owner reserves all copyright rights, except for the making of copies of the patent document or the patent records of the Patent Office.

Claims (17)

  1. A playback method, applied to an electronic device, characterized in that the method comprises:
    the electronic device detecting a first user input for a first media file;
    in response to the first user input, the electronic device playing the first media file; and
    when the first media file is played to a first position, the electronic device playing a second media file while continuing to play the first media file, wherein the first position is preset;
    or,
    the electronic device detecting a first user input for a first control;
    in response to the first user input, the electronic device playing a first media file; and
    when the first media file is played to a first position, the electronic device playing a second media file while continuing to play the first media file, wherein the first position is preset.
  2. The method according to claim 1, characterized in that:
    the electronic device comprises a display screen; the first media file and the second media file are played on the display screen; and both the first media file and the second media file are video files;
    or,
    the electronic device comprises a speaker; the first media file and the second media file are played through the speaker; and both the first media file and the second media file are audio files;
    or,
    the electronic device is connected to a headset; the first media file and the second media file are played through the headset; and both the first media file and the second media file are audio files;
    or,
    the electronic device comprises a display screen and a speaker; the first media file is played on the display screen and is a video file; and the second media file is played through the speaker and is an audio file;
    or,
    the electronic device comprises a display screen and is connected to a headset; the first media file is played on the display screen and is a video file; and the second media file is played through the headset and is an audio file;
    or,
    the electronic device comprises a display screen and is connected to a headset; the first media file is played through the headset and is an audio file; and the second media file is played on the display screen and is a video file;
    or,
    the electronic device comprises a display screen and a speaker; the first media file is played through the speaker and is an audio file; and the second media file is played on the display screen and is a video file.
  3. The method according to claim 2, characterized in that, when the first media file is played to the first position, the electronic device playing the second media file while continuing to play the first media file comprises:
    when the first media file is played to the first position, splitting the display screen of the electronic device into a first screen and a second screen; and
    playing the second media file on the second screen while continuing to play the first media file on the first screen, wherein both the first media file and the second media file are video files.
  4. The method according to claim 2, characterized in that, when the first media file is played to the first position, the electronic device playing the second media file while continuing to play the first media file comprises:
    when the first media file is played to the first position, displaying a window on the display screen of the electronic device, wherein the window displays a second picture corresponding to the second media file and floats above a first picture corresponding to the first media file, and both the first media file and the second media file are video files.
  5. The method according to any one of claims 1-4, characterized in that the first position comprises one of the following: a preset playback time point in the first media file, a preset playback frame in the first media file, or a preset playback proportion of the first media file.
  6. The method according to any one of claims 1-5, characterized in that the first position is preset by a user.
  7. The method according to any one of claims 1-6, characterized in that the first media file, when played, is associated with first media file extension information; the first media file extension information comprises a first media file identifier, and the first media file identifier is used to identify the first media file; the first media file extension information is associated with second media file extension information; the second media file extension information comprises a second media file identifier and a second media file playback position identifier; the second media file identifier is used to identify the second media file; the second media file playback position identifier is used to identify first playback position information of the first media file when the second media file starts to play; and the first playback position information comprises the first position.
  8. The method according to any one of claims 1-7, characterized in that the method further comprises:
    after the second media file finishes playing, the electronic device continuing to play the first media file, wherein the electronic device plays only the first media file.
  9. The method according to claim 8, characterized in that the method further comprises:
    when the first media file is played to a second position, the electronic device playing a third media file while continuing to play the first media file, wherein the second position is preset and is located after the second media file finishes playing.
  10. The method according to claim 9, characterized in that the second position comprises one of the following: a preset playback time point in the first media file, a preset playback frame in the first media file, or a preset playback proportion of the first media file.
  11. The method according to claim 9 or 10, characterized in that the first media file extension information is further associated with third media file extension information; the third media file extension information comprises a third media file identifier and a third media file playback position identifier; the third media file identifier is used to identify the third media file; the third media file playback position identifier is used to identify second playback position information of the first media file when the third media file starts to play; and the second playback position information comprises the second position.
  12. The method according to claim 11, characterized in that the first media file extension information is first multimedia multiplexing information mmmw, the second media file extension information is second mmmw, and the third media file extension information is third mmmw; the first mmmw and the stsd, stts, stsc, stsz, stss, and stco corresponding to the first mmmw are located in a first media stream box stbl; the second mmmw and the stsd, stts, stsc, stsz, stss, and stco corresponding to the second mmmw are located in a second media stream box stbl; the third mmmw and the stsd, stts, stsc, stsz, stss, and stco corresponding to the third mmmw are located in a third media stream box stbl; the first media stream box stbl is located in a first media box media, the second media stream box stbl is located in a second media box media, and the third media stream box stbl is located in a third media box media; the first media box media and a header tkhd corresponding to the first media box media are located in a first stream trak, the second media box media and a header tkhd corresponding to the second media box media are located in a second stream trak, and the third media box media and a header tkhd corresponding to the third media box media are located in a third stream trak; the first stream trak, the second stream trak, the third stream trak, and a header mvhd corresponding to the first, second, and third stream traks are located in a video box moov; the video box moov, media data mdata, and media type ftyp are located in an MP4 file; the first stream trak is used to indicate the first media file, the second stream trak is used to indicate the second media file, and the third stream trak is used to indicate the third media file; and the first media file, the second media file, and the third media file are all video files.
  13. A method for acquiring a video file, applied to an electronic device comprising a display screen, characterized in that the method comprises:
    displaying, on the display screen, a first interface of a first application, wherein the first interface comprises a master video file setting control, a master video file preview frame, a first slave video file setting control, a first slave video file preview frame, a first association setting control, and a completion control;
    in response to a first user input on the master video file setting control, displaying, on the display screen, a second interface of the first application, wherein the second interface comprises a first video file and a second video file;
    in response to a second user input on the first video file, displaying, in the master video file preview frame, a static or dynamic preview picture of the first video file;
    in response to a third user input on the first slave video file setting control, displaying, by the electronic device, the second interface;
    in response to a fourth user input on the second video file, displaying, in the first slave video file preview frame, a static or dynamic preview picture of the second video file;
    in response to a fifth user input on the first association setting control, displaying an association setting frame on the display screen, wherein the association setting frame is used to set that, when the master video file is played to a first position, the first slave video file starts to play, and the playback of the first slave video file does not pause or stop the playback of the master video file, and the association setting frame comprises a first position input box, a first confirmation control, and a second confirmation control;
    after a sixth user input on the first position input box is received and a seventh user input on the first confirmation control is received, completing the setting of the first position; and
    after an eighth user input on the completion control is received, acquiring a third video file.
  14. An electronic device, characterized by comprising:
    a processor;
    a memory;
    and a computer program, wherein the computer program is stored in the memory and, when executed by the processor, causes the electronic device to perform the method according to any one of claims 1-13.
  15. A chip, characterized by comprising:
    a processor;
    a memory;
    and a computer program, wherein the computer program is stored in the memory and, when executed by the processor, causes the chip to perform the method according to any one of claims 1-13.
  16. A computer-readable storage medium, characterized by comprising a computer program that, when run on an electronic device, causes the electronic device to perform the method according to any one of claims 1-13.
  17. A computer program product, characterized in that, when run on a computer, it causes the computer to perform the method according to any one of claims 1-13.
PCT/CN2021/140717 2020-12-31 2021-12-23 Media playing method and electronic device WO2022143374A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011634630.6A CN114697724A (en) 2020-12-31 2020-12-31 Media playing method and electronic equipment
CN202011634630.6 2020-12-31

Publications (1)

Publication Number Publication Date
WO2022143374A1 true WO2022143374A1 (en) 2022-07-07

Family

ID=82134215

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/140717 WO2022143374A1 (en) 2020-12-31 2021-12-23 Media playing method and electronic device

Country Status (2)

Country Link
CN (1) CN114697724A (en)
WO (1) WO2022143374A1 (en)


Citations (6)

Publication number Priority date Publication date Assignee Title
CN1826572A (en) * 2003-06-02 2006-08-30 迪斯尼实业公司 System and method of programmatic window control for consumer video players
US7120924B1 (en) * 2000-02-29 2006-10-10 Goldpocket Interactive, Inc. Method and apparatus for receiving a hyperlinked television broadcast
CN101557464A (en) * 2009-04-01 2009-10-14 深圳市融创天下科技发展有限公司 Method for dynamically embedding other media segments in video program playback
CN104853223A (en) * 2015-04-29 2015-08-19 小米科技有限责任公司 Video stream intercutting method and terminal equipment
CN105872695A (en) * 2015-12-31 2016-08-17 乐视网信息技术(北京)股份有限公司 Video playing method and device
CN110996157A (en) * 2019-12-20 2020-04-10 上海众源网络有限公司 Video playing method and device, electronic equipment and machine-readable storage medium

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN101778282B (en) * 2010-01-12 2011-10-26 北京暴风网际科技有限公司 Method for concurrently playing different media files
CN105554550B (en) * 2015-12-08 2018-12-04 腾讯科技(北京)有限公司 Video broadcasting method and device
CN105898473A (en) * 2015-12-15 2016-08-24 乐视网信息技术(北京)股份有限公司 Multimedia resource play method and device and mobile equipment based on Android platform
CN106603947A (en) * 2016-12-28 2017-04-26 深圳Tcl数字技术有限公司 Method and device for controlling sound playing of TV set
CN107682713B (en) * 2017-04-11 2020-11-03 腾讯科技(北京)有限公司 Media file playing method and device
CN108833787B (en) * 2018-07-19 2021-06-01 百度在线网络技术(北京)有限公司 Method and apparatus for generating short video
CN109240638A (en) * 2018-08-29 2019-01-18 北京轩辕联科技有限公司 Audio-frequency processing method and device for vehicle

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
US7120924B1 (en) * 2000-02-29 2006-10-10 Goldpocket Interactive, Inc. Method and apparatus for receiving a hyperlinked television broadcast
CN1826572A (en) * 2003-06-02 2006-08-30 迪斯尼实业公司 System and method of programmatic window control for consumer video players
CN101557464A (en) * 2009-04-01 2009-10-14 深圳市融创天下科技发展有限公司 Method for dynamically embedding other media segments in video program playback
CN104853223A (en) * 2015-04-29 2015-08-19 小米科技有限责任公司 Video stream intercutting method and terminal equipment
CN105872695A (en) * 2015-12-31 2016-08-17 乐视网信息技术(北京)股份有限公司 Video playing method and device
CN110996157A (en) * 2019-12-20 2020-04-10 上海众源网络有限公司 Video playing method and device, electronic equipment and machine-readable storage medium

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN116033204A (en) * 2022-07-08 2023-04-28 荣耀终端有限公司 Screen recording method, electronic equipment and storage medium
CN116033204B (en) * 2022-07-08 2023-10-20 荣耀终端有限公司 Screen recording method, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114697724A (en) 2022-07-01

Similar Documents

Publication Publication Date Title
US10536664B2 (en) Audio routing for audio-video recording
US20220159349A1 (en) Methods and apparatus for presenting advertisements during playback of recorded television content
US9110518B2 (en) System and method in a television system for responding to user-selection of an object in a television program utilizing an alternative communication network
JP2004288197A (en) Interface for presenting data expression in screen area inset
JP2016538657A (en) Browse videos by searching for multiple user comments and overlaying content
US10009643B2 (en) Apparatus and method for processing media content
US20110041060A1 (en) Video/Music User Interface
WO2008048268A1 (en) Method, apparatus and system for generating regions of interest in video content
US11570415B2 (en) Methods, systems, and media for generating a summarized video using frame rate modification
US11540024B2 (en) Method and system for precise presentation of audiovisual content with temporary closed captions
CN111669645B (en) Video playing method and device, electronic equipment and storage medium
CN104023261A (en) Digital media playing system
US20240107087A1 (en) Server, terminal and non-transitory computer-readable medium
JP6809463B2 (en) Information processing equipment, information processing methods, and programs
US20240146863A1 (en) Information processing device, information processing program, and recording medium
WO2022143374A1 (en) Media playing method and electronic device
CN113992926B (en) Interface display method, device, electronic equipment and storage medium
US12041280B2 (en) Methods, systems, and media for providing dynamic media sessions with video stream transfer features
CN113891108A (en) Subtitle optimization method and device, electronic equipment and storage medium
US20090328102A1 (en) Representative Scene Images
CN114449316B (en) Video processing method and device, electronic equipment and storage medium
US20180007450A1 (en) Video thumbnail in electronic program guide
CN115086691A (en) Subtitle optimization method and device, electronic equipment and storage medium
WO2002062062A1 (en) Method and arrangement for creation of a still shot video sequence, via an apparatus, and transmission of the sequence to a mobile communication device for utilization
WO2015130446A1 (en) Media asset annotation for second-screen

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21914111

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21914111

Country of ref document: EP

Kind code of ref document: A1