CN113259770B - Video playing method, device, electronic equipment, medium and product - Google Patents

Video playing method, device, electronic equipment, medium and product Download PDF

Info

Publication number
CN113259770B
CN113259770B (Application No. CN202110512119.7A)
Authority
CN
China
Prior art keywords
video
playing
underflow
image
main stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110512119.7A
Other languages
Chinese (zh)
Other versions
CN113259770A (en)
Inventor
娄志云 (Lou Zhiyun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN202110512119.7A priority Critical patent/CN113259770B/en
Publication of CN113259770A publication Critical patent/CN113259770A/en
Application granted granted Critical
Publication of CN113259770B publication Critical patent/CN113259770B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks

Abstract

The present application provides a video playing method, apparatus, electronic device, medium, and product. In the method, a first main stream video is played, where the first main stream video corresponds to a first playing view angle; an underflow video is obtained, where the i-th frame image in the underflow video is obtained by splicing the i-th frame images of multiple main stream videos. When it is determined that the first playing view angle is switched to a target playing view angle, a transition video corresponding to the target playing view angle is obtained from the underflow video and played. While the transition video is playing, a second main stream video is obtained, where the second main stream video corresponds to the target playing view angle and its first frame image is the frame immediately following the last frame image of the transition video. In other words, the time spent playing the transition video is loading time reserved for obtaining the second main stream video, so playing the transition video prevents stuttering during the switch.

Description

Video playing method, device, electronic equipment, medium and product
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a video playing method, apparatus, electronic device, medium, and product.
Background
Free-view-angle playing means that a user can watch a photographed object from multiple playing view angles through an electronic device. To implement free-view-angle playing, a certain number of image capturing devices need to be deployed around the object to be photographed; for example, multiple cameras are deployed around a playing field, where different cameras have different capturing view angles. Because the capturing view angles differ, the playing view angles corresponding to the videos captured by different cameras also differ. The multiple cameras each capture video of the playing field, yielding videos at multiple playing view angles, so that an electronic device can obtain the videos corresponding to the multiple playing view angles from a server and a user can watch the playing field from multiple playing view angles.
When a user switches playing view angles, switching between the videos corresponding to multiple playing view angles is involved. During the switch, the electronic device needs to continuously load the video for the corresponding playing view angle from the server. If the electronic device loads video from the server slowly, it may be unable to play the video for the corresponding playing view angle in time during the switch, that is, stuttering occurs.
Disclosure of Invention
In view of the above, the present application provides a video playing method, apparatus, electronic device, medium, and product, which solve the technical problem that, during playing-view-angle switching, the video for the corresponding playing view angle cannot be obtained from the server in time, causing stuttering and a poor switching effect.
In order to achieve the above purpose, the present application provides the following technical solutions:
according to a first aspect of the embodiments of the present disclosure, there is provided a video playing method, including:
playing a first main stream video, where the first main stream video is any one of multiple main stream videos, the multiple main stream videos are videos of the same object at different playing view angles, and the first main stream video corresponds to a first playing view angle;
obtaining an underflow video, where the i-th frame image in the underflow video is obtained by splicing the i-th frame images of the multiple main stream videos, and the i-th frame image is any frame image in the underflow video;
when it is determined that the first playing view angle is switched to a target playing view angle, cropping the image at the target playing view angle from each of multiple consecutive frame images contained in the underflow video to obtain a transition video, where the earliest playing time among the playing times corresponding to the multiple consecutive frame images is equal to or later than a target playing time, and the target playing time is equal to or later than the playing time of the image being displayed when the switch to the target playing view angle is determined;
obtaining a second main stream video, where the second main stream video corresponds to the target playing view angle, and the first frame image contained in the second main stream video is the frame immediately following the last frame image of the transition video;
playing the transition video;
and after the transition video finishes playing, playing the second main stream video.
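The cropping step in the method above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: frames are modeled as nested lists of pixels, and the names `switch_idx`, `rect` (the pixel region of the target playing view angle inside a spliced underflow frame), and `length` are hypothetical.

```python
def build_transition(underflow_frames, switch_idx, rect, length):
    """Form a transition video by cropping the target-view region out of
    `length` consecutive underflow frames, starting no earlier than the
    frame shown when the view-angle switch was detected."""
    x, y, w, h = rect
    clips = underflow_frames[switch_idx:switch_idx + length]
    # each frame is a list of pixel rows; keep rows y..y+h, pixels x..x+w
    return [[row[x:x + w] for row in frame[y:y + h]] for frame in clips]
```

The second main stream video would then be requested starting at frame `switch_idx + length`, so that its first frame immediately follows the transition video's last frame.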
According to a second aspect of the embodiments of the present disclosure, there is provided a video playback apparatus including:
the first playing module is configured to play a first main stream video, where the first main stream video is any one of multiple main stream videos, the multiple main stream videos are videos of the same object at different playing view angles, and the first main stream video corresponds to a first playing view angle;
the first obtaining module is configured to obtain an underflow video, where the i-th frame image in the underflow video is obtained by splicing the i-th frame images of the multiple main stream videos, and the i-th frame image is any frame image in the underflow video;
the second obtaining module is configured to, when it is determined that the first playing view angle is switched to a target playing view angle, crop the image at the target playing view angle from each of multiple consecutive frame images contained in the underflow video to obtain a transition video, where the earliest playing time among the playing times corresponding to the multiple consecutive frame images is equal to or later than a target playing time, and the target playing time is equal to or later than the playing time of the image being displayed when the switch is determined;
the third obtaining module is configured to obtain a second main stream video, where the second main stream video corresponds to the target playing view angle, and the first frame image contained in the second main stream video is the frame immediately following the last frame image of the transition video;
the second playing module is configured to play the transition video;
and the third playing module is configured to play the second main stream video after the transition video finishes playing.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the video playback method according to the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the video playing method according to the first aspect.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product that can be directly loaded into an internal memory of a computer and contains software code, where the memory is included in the electronic device according to the third aspect; after being loaded into the computer and executed, the computer program implements the video playing method according to the first aspect.
It can be seen from the above technical solutions that, compared with the prior art, in the video playing method provided by the present application, a first main stream video is played, where the first main stream video is any one of multiple main stream videos, the multiple main stream videos are videos of the same object at different playing view angles, and the first main stream video corresponds to a first playing view angle; an underflow video is then obtained. Since the i-th frame image in the underflow video is spliced from the i-th frame images of the multiple main stream videos, obtaining the underflow video is equivalent to obtaining all the main stream videos. When it is determined that the first playing view angle is switched to a target playing view angle, a transition video corresponding to the target playing view angle is obtained from the underflow video and played. While the transition video is playing, a second main stream video is obtained, where the second main stream video corresponds to the target playing view angle and the first frame image contained in the second main stream video is the frame immediately following the last frame image of the transition video. The time spent playing the transition video is thus loading time reserved for obtaining the second main stream video, so stuttering caused by failing to play the second main stream video for the target playing view angle in time is avoided and the switching effect is improved.
Drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present application, and those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a diagram illustrating a hardware architecture according to an embodiment of the present application;
fig. 2 is a flowchart of a video playing method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a process for generating an underflow video according to an embodiment of the present application;
fig. 4a to 4b are schematic diagrams illustrating a splicing manner of ith frame images included in a plurality of mainstream videos according to an embodiment of the present disclosure;
fig. 5 is a diagram illustrating a correspondence relationship between an image capturing device and a capturing view angle provided in the embodiment of the present application;
fig. 6 is a schematic diagram illustrating a process of acquiring a first transition video according to an embodiment of the present application;
fig. 7 is a structural diagram of a video playing device according to an embodiment of the present application;
fig. 8 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The embodiments of the present application provide a video playing method, a video playing apparatus, an electronic device, and a computer-readable storage medium. Before the technical solutions provided by the embodiments of the present application are introduced, the related art and the hardware architecture involved in the present application are described.
First, the related art to which the present application relates will be described.
In the related art, free-view-angle playing involves main stream videos corresponding to multiple playing view angles, and switching between these main stream videos occurs when a user switches playing view angles. Before the user performs a view-angle-switching operation, the target playing view angle the user will switch to cannot be determined; because of this uncertainty, the main stream video corresponding to the target playing view angle cannot be loaded in advance. After the user performs the switching operation, the target playing view angle is known, but since its main stream video was not preloaded, a poor network may cause the main stream video corresponding to the target playing view angle to load too late, producing stuttering and hence the technical problem of a poor switching effect.
For example, when the first playing view angle is switched to the playing view angle a (the target playing view angle is the playing view angle a), the mainstream video corresponding to the playing view angle a needs to be loaded from the server, and when the playing view angle a is switched to the playing view angle B (the target playing view angle is the playing view angle B), the mainstream video corresponding to the playing view angle B needs to be loaded from the server.
Next, the hardware architecture to which the present application relates is introduced.
Fig. 1 is a schematic diagram of a hardware architecture according to an embodiment of the present application. The hardware architecture involves a plurality of image acquisition apparatuses 11, a base station 12, a server 13 and at least one electronic device 14.
For example, a plurality of image capturing devices 11 capture main stream videos from different capturing perspectives for the same object, and fig. 1 illustrates an example in which the object is a playing field.
For example, the object may be determined based on actual conditions, and the embodiment of the present application does not limit the type and number of the objects.
In the embodiment of the application, the video acquired by the image acquisition device is called a mainstream video.
For example, the plurality of image capturing devices 11 may be disposed at different positions around the same object, so that different image capturing devices 11 have different capturing view angles; because the capturing view angles differ, the playing view angles corresponding to the videos captured by different image capturing devices 11 also differ.
For example, one or more image capturing devices 11 may be arranged on the plane of the object, such as a plurality of image capturing devices 11 arranged around the playing field as shown in fig. 1; for example, one or more image acquisition devices may be arranged on the top and/or bottom of the object.
It should be noted that fig. 1 shows 20 image capturing devices 11, 1 base station 12, 1 server 13, and 1 electronic device 14, fig. 1 is only an example, and in practical applications, the numbers of the image capturing devices 11, the base station 12, the server 13, and the electronic device 14 may be set based on practical situations, which is not limited in the embodiment of the present application.
Illustratively, the image acquisition device 11 may include one or more cameras.
The server 13 may be, for example, one server, a server cluster composed of a plurality of servers, or a cloud computing server center. The server 13 may include a processor, memory, and a network interface, among others.
For example, the electronic device 14 may be any electronic product capable of interacting with a user through one or more of a keyboard, a touch pad, a touch screen, a remote controller, voice interaction, or a handwriting device, for example, a mobile phone, a notebook computer, a tablet computer, a palmtop computer, a personal computer, a wearable device, a smart television, and the like.
It should be noted that fig. 1 is only an example, and the type of the electronic device may be various and is not limited to the personal computer in fig. 1.
In an alternative implementation, the hardware architecture may also relate to a switch, and the embodiment of the present application is not particularly limited.
Illustratively, each image capture device 11 is configured to capture a main stream video and upload the main stream video to the base station 12.
It can be understood that the main stream videos collected by different image capturing devices 11 have different capturing view angles. For example, each image capturing device also transmits an identifier representing its own capturing view angle to the base station 12, and the capturing view angle of an image capturing device corresponds to the playing view angle of the main stream video it captures; main stream videos corresponding to different capturing view angles have different playing view angles. Illustratively, the capturing view angle of an image capturing device is the playing view angle of the main stream video captured by that device.
For example, the identifier may include at least one of position information of the image capturing device and a reference number of the image capturing device. Illustratively, the reference number of the image capturing device includes, but is not limited to, at least one of the IP (Internet Protocol) address and the MAC (Media Access Control) address of the image capturing device. Illustratively, different image capturing devices have different identifiers.
Illustratively, the position information of the image capturing device represents a playing view angle corresponding to the mainstream video captured by the image capturing device.
The base station 12 transmits the main stream video acquired by each of the plurality of image acquisition devices 11 to the server 13.
Illustratively, the base station 12 sends the identifiers corresponding to the plurality of image capturing devices 11 to the server 13.
For example, the server 13 may obtain the underflow video based on the main stream videos respectively acquired by the plurality of image acquiring devices 11.
Illustratively, the method for obtaining the underflow video includes: processing the i-th frame images contained in the multiple main stream videos to obtain processed images corresponding to the i-th frame images of the respective main stream videos; and splicing the processed images corresponding to the i-th frame images of the multiple main stream videos to obtain the i-th frame image of the underflow video, thereby obtaining the underflow video. The i-th frame image is any frame image in the underflow video.
Illustratively, the above "processing" refers to an operation that reduces the data amount of the i-th frame image of a main stream video, for example, reducing its size and/or resolution.
In summary, each frame image in the underflow video is formed by splicing images of the same object at multiple playing view angles.
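A minimal sketch of this frame construction, under assumptions that are not in the patent: frames are pure-Python nested lists of pixels, shrinking is done by crude stride-based subsampling, and the tiles are laid out in a fixed row-major grid. A real system would re-encode with proper rescaling.

```python
def shrink(frame, factor=2):
    """Illustrative data-amount reduction: keep every `factor`-th pixel row
    and every `factor`-th pixel within a row."""
    return [row[::factor] for row in frame[::factor]]

def splice_underflow_frame(ith_frames, grid_cols=2, factor=2):
    """Build the i-th underflow frame by shrinking the i-th frame of every
    main stream video and splicing the results into a row-major grid."""
    small = [shrink(f, factor) for f in ith_frames]
    grid_rows = []
    for start in range(0, len(small), grid_cols):
        group = small[start:start + grid_cols]
        # concatenate this grid row's tiles, one row of pixels at a time
        for pixel_rows in zip(*group):
            grid_rows.append([px for row in pixel_rows for px in row])
    return grid_rows
```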
For example, the plurality of image capturing devices 11 may capture videos of the same object simultaneously, so different main stream videos contain the same number of frame images, and the images at the same position in different main stream videos are images captured at the same time, at different capturing view angles, of the same object. The number of frame images contained in the underflow video equals that of any main stream video.
Assuming that the number of the plurality of image capturing devices 11 is 20, each frame image in the underflow video is formed by stitching processed images obtained by processing the images captured by the 20 image capturing devices 11.
For example, the data amount of the underflow video may be greater than or equal to the data amount of one main stream video, but it is less than the sum of the data amounts of all the main stream videos. In another example, the data amount of the underflow video is smaller than that of any one main stream video. Because the data amount of the underflow video is smaller than the sum of the data amounts of the multiple main stream videos, loading the underflow video saves bandwidth compared with loading all the main stream videos at the same time.
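The bandwidth claim can be illustrated with rough arithmetic. All numbers below are assumptions, not figures from the patent, and the model (bitrate proportional to pixel count after shrinking) is a deliberate simplification:

```python
n_streams = 20      # one main stream video per capturing view angle
main_mbps = 5.0     # assumed bitrate of a single main stream video, Mbit/s
factor = 4          # each tile's width and height reduced 4x -> 1/16 pixels

# Rough model: the spliced underflow video's bitrate scales with its
# total pixel count relative to the sum of the original streams.
underflow_mbps = n_streams * main_mbps / factor**2   # 6.25 Mbit/s
total_mbps = n_streams * main_mbps                   # 100 Mbit/s

# The underflow video may exceed one main stream yet stay far below the
# total of all main streams, which is what saves bandwidth.
assert main_mbps <= underflow_mbps < total_mbps
```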
For example, the server 13 may send the main stream video acquired by each of the plurality of image acquisition devices 11 to another server, and the other server sends the underflow video to the server 13 after obtaining the underflow video based on the main stream video acquired by each of the plurality of image acquisition devices 11.
Illustratively, the user may operate the electronic device 14, for example, to control the electronic device 14 to play a video of a playing field as shown in FIG. 1.
For example, initially, the electronic device 14 may by default play the main stream video corresponding to a preset first playing view angle (hereinafter referred to as the first main stream video). Alternatively, the electronic device 14 may initially display a playing-view-angle scene diagram, as shown in the left part of fig. 1, from which the user may select one playing view angle (the main stream video corresponding to the playing view angle selected by the user is subsequently referred to as the first main stream video).
The electronic device 14 may load, from the server 13, the first main stream video corresponding to the preset first playing view angle or to the playing view angle selected by the user.
The electronic device 14 obtains the underflow video while playing the first main stream video.
Because each frame image of the underflow video is formed by splicing images at multiple playing view angles, obtaining the underflow video is equivalent to obtaining all the main stream videos. If a switching operation from the first playing view angle to a target playing view angle is subsequently detected, a transition video corresponding to the target playing view angle can be obtained directly from the already-obtained underflow video. While the transition video is playing, the second main stream video corresponding to the target playing view angle is obtained, so playing the transition video provides buffering time for obtaining the second main stream video; after the transition video finishes, the second main stream video is played, and no stuttering occurs. Moreover, because the data amount of the underflow video is less than the sum of the data amounts of all the main stream videos, obtaining the underflow video saves bandwidth compared with obtaining all the main stream videos.
The embodiment of the application can be applied to a live application scene and can also be applied to a recorded broadcast application scene.
Those skilled in the art will appreciate that the above-described electronic devices and servers are merely exemplary, and that other electronic devices or servers, now known or later developed, may be included within the scope of the present application and are hereby incorporated by reference.
The following describes a video playing method with reference to the accompanying drawings.
As shown in fig. 2, which is a flowchart of a video playing method provided in this embodiment of the present application, the method may be applied to the electronic device 14 shown in fig. 1, and the method includes steps S201 to S206 in implementation.
Step S201: the first main stream video is played.
The first main stream video is any one of multiple main stream videos, the multiple main stream videos are videos of the same object at different playing view angles, and the first main stream video corresponds to a first playing view angle.
Illustratively, different main stream videos contain the same number of frame images, and the images at the same position in different main stream videos are images of the same object at different playing view angles at the same time; that is, the different main stream videos are synchronized.
Step S202: obtain an underflow video, where the i-th frame image in the underflow video is obtained by splicing the i-th frame images of the multiple main stream videos.
The i-th frame image in the underflow video is any frame image in the underflow video; that is, the underflow video and the main stream videos are synchronized.
Illustratively, the number of frame images contained in the underflow video equals the number of frame images contained in any main stream video.
The following illustrates the relationship between the underflow video and the plurality of main stream videos.
Fig. 3 is a schematic diagram illustrating a generation process of the underflow video according to an embodiment of the present application.
Fig. 3 illustrates an example in which the number of image capturing devices 11 is 4, that is, there are 4 main stream videos. Assume the identifiers corresponding to the main stream videos are S1, S2, S3, and S4; the server 13 can then obtain the main stream video corresponding to each of the identifiers S1, S2, S3, and S4. Assume each main stream video includes 8 frame images, which, ordered from earliest to latest position in the main stream video, are image P1, image P2, image P3, image P4, image P5, image P6, image P7, and image P8.
To make the embodiment of the present application easier to understand, the multiple frame images belonging to the same main stream video in fig. 3 are represented by quadrangles filled with the same pattern, and images belonging to different main stream videos are represented by quadrangles filled with different patterns.
The underflow video 31 in fig. 3 includes 8 frames of images, and each frame is formed by splicing the images at the corresponding position in the 4 main stream videos shown in fig. 3. As shown in fig. 3, the ith frame image of the underflow video is formed by splicing the processed images corresponding to the ith frame images in the main stream videos corresponding to the identifiers S1, S2, S3, and S4, respectively.
Illustratively, the processed image corresponding to the ith frame image in the main stream video refers to an image obtained by reducing the data amount (for example, reducing the size and/or resolution) of the ith frame image in the main stream video.
In fig. 3, the processed image, contained in the underflow video, of the image Pj in the main stream video corresponding to the identifier Si is represented by Si-Pj. In the example shown in fig. 3, i = 1, 2, 3, 4 and j = 1, ..., 8.
For example, there are various ways of splicing the processed images corresponding to the ith frame image included in each of the plurality of main stream videos, and the embodiments of the present application provide, but are not limited to, the following two ways.
First, the processed images corresponding to the ith frame image included in each of the plurality of main stream videos are closely adjacent to each other, as shown in fig. 4 a.
Fig. 4a is based on fig. 3 and shows two splicing manners of the first case. The figure formed by splicing the processed images corresponding to the ith frame images included in the plurality of main stream videos may be any shape, for example, the rectangle shown in fig. 4a.
Second, a certain interval is provided between the processed images corresponding to the ith frame image included in each of the plurality of main stream videos, as shown in fig. 4 b.
Fig. 4b is based on fig. 3 and shows two splicing manners of the second case. The figure formed by splicing the processed images corresponding to the ith frame images included in the plurality of main stream videos may be any shape, for example, the rectangle shown in fig. 4b.
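The first splicing mode (closely adjacent tiles) can be sketched as follows. This is a minimal illustration, assuming images are 2D lists of pixel values, a 2x2 grid layout, and simple decimation as the data-amount reduction; the function names are illustrative and not part of the embodiment.

```python
def downscale(img, factor=2):
    """Reduce the data amount by keeping every `factor`-th pixel (illustrative)."""
    return [row[::factor] for row in img[::factor]]

def stitch_underflow_frame(frames, cols=2):
    """Tile the processed i-th frames of all main stream videos into one image."""
    small = [downscale(f) for f in frames]
    tile_h = len(small[0])
    rows = []
    for r in range(0, len(small), cols):
        group = small[r:r + cols]           # one row of tiles
        for y in range(tile_h):
            rows.append([px for img in group for px in img[y]])
    return rows

# Four 4x4 "frames" (identifiers S1..S4), each filled with its own value:
frames = [[[s] * 4 for _ in range(4)] for s in (1, 2, 3, 4)]
mosaic = stitch_underflow_frame(frames)     # one underflow frame, 2x2 tiles
```

Each tile of the resulting mosaic occupies a fixed position area, which is what later allows the image at a given playing view angle to be cropped back out.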
In the embodiment of the application, the underflow video is obtained during playback of the first main stream video to prepare for switching the playing view angle. Because each frame image of the underflow video is formed by splicing images under all playing view angles, obtaining the underflow video is equivalent to obtaining all the main stream videos. If a switching operation from the first playing view angle to a target playing view angle is subsequently detected, a transition video corresponding to the target playing view angle can be obtained directly from the already-acquired underflow video and played.
Step S203: when it is determined that the first playing view angle is switched to a target playing view angle, cropping the images at the target playing view angle from consecutive multi-frame images contained in the underflow video to obtain a transition video.
The earliest of the playing times corresponding to the consecutive multi-frame images in the underflow video is equal to or later than a target playing time, and the target playing time is equal to or later than the playing time corresponding to the image displayed when it is determined that the first playing view angle is switched to the target playing view angle.
Illustratively, in a case that it is determined that the first playing view is switched to the target playing view, the acquisition of the first mainstream video from the server 13 may be suspended, so as to save bandwidth resources.
Illustratively, the process of obtaining the target playing time in step S203 includes: and under the condition that the first playing visual angle is switched to the target playing visual angle, determining the target playing time.
In different application scenarios, the target playing time may be different. The following describes the target playing time with reference to the application scenario.
The user may perform one or more playing view angle switching operations to reach the target playing view angle the user wants to watch. In this scenario, the playing view angle to be switched to in the last switching operation is the target playing view angle, and the target playing time is equal to or later than the playing time corresponding to the last displayed target image.
The following describes the target playing time with reference to the process of acquiring the target playing angle, and the process of determining the target playing angle specifically includes the following steps B1 to B3.
Step B1: if the playing view angle switching instruction is detected, determining a second playing view angle to be switched to.
For example, the user may perform the playing view angle switching operation by voice, by touching a preset key, or by a preset gesture. If such an operation performed by the user is detected, the playing view angle switching instruction is considered detected.
Illustratively, the second playing view angle may be determined based on the playing view angle switching instruction. For example, if the switching operation is a voice operation, the voice may carry information representing the second playing view angle, for example "rotate clockwise", and the second playing view angle may then be determined based on the layout positions of the plurality of image capturing devices.
Step B2: displaying a target image, where the target image is the image corresponding to the second playing view angle in the image at a first time in the underflow video, and the first time is equal to or later than the playing time corresponding to the image of the first main stream video displayed when the playing view angle switching instruction is detected.
Illustratively, the first main stream video is paused when the switch play view angle instruction is detected.
Illustratively, the first time is equal to or later than the playing time corresponding to the image of the first main stream video displayed when the instruction of switching the playing view angle is detected, and the method includes: the first time is the playing time corresponding to the image of the first main stream video displayed when the switching playing visual angle instruction is detected, or the first time is equal to or later than the playing time corresponding to the next frame image of the first main stream video displayed when the switching playing visual angle instruction is detected.
Step B3: returning to step B1.
The first time will be described with reference to the application scenario.
In the first application scenario, in the process of executing the switching of the playing view angle (i.e., in the process of executing steps B1 to B3), the playing progress of the video is not changed, that is, the first time is the same as the playing time corresponding to the image of the first main stream video displayed when the instruction of switching the playing view angle is detected.
Assuming that the first main stream video is the main stream video corresponding to the identifier S1 shown in fig. 3, and that the electronic device is playing the image P3 in that main stream video when the playing view angle switching instruction is detected, the first time may be the playing time corresponding to the image P3.
Illustratively, after the image P3 of the first main stream video has been played, the user has already seen the image P3. If the playing time corresponding to the target image is the same as the playing time corresponding to the image P3, then after viewing the image P3 and the target image, the user has effectively viewed images at different playing view angles at the same moment, i.e., the playing progress of the video is not changed.
In the second application scenario, the playing progress of the video keeps advancing while the playing view angle is being switched (i.e., while steps B1 to B3 are executed). When steps B1 to B2 are executed for the first time, the first time may be the playing time corresponding to the frame following the image of the first main stream video displayed when the playing view angle switching instruction is detected; when steps B1 to B2 are executed for the Nth time, the first time is the playing time corresponding to the frame following the target image displayed during the (N-1)th execution of steps B1 and B2, where N is any integer greater than or equal to 2.
For example, assume that the first main stream video is the main stream video corresponding to the identifier S1 shown in fig. 3, and that the electronic device is playing the image P3 when the playing view angle switching instruction is first detected. During the first execution of steps B1 to B2, the first time may be the playing time corresponding to the next frame image P4; during the second execution, the playing time corresponding to P5; and during the third execution, the playing time corresponding to P6. In other words, the playing progress of the video keeps advancing while the playing view angle is being switched.
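Under these assumptions, the advancing first time of the second scenario reduces to simple frame-index arithmetic. A hypothetical sketch, with frame indices following the P1..P8 numbering above:

```python
def first_time_index(start_index, iteration):
    """Frame index used as the first time on the given 1-based execution of
    steps B1-B2, when the switch was detected while frame `start_index`
    was being displayed (illustrative helper, not from the embodiment)."""
    return start_index + iteration

# Playing view angle switch detected while P3 (index 3) is displayed:
assert first_time_index(3, 1) == 4   # first execution  -> P4
assert first_time_index(3, 2) == 5   # second execution -> P5
assert first_time_index(3, 3) == 6   # third execution  -> P6
```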
The following describes steps B1 to B3 with reference to examples.
Fig. 5 is a schematic diagram of the correspondence between the image capturing devices and the playing view angles, based on fig. 3. Assuming that the playing view angle of the first main stream video is playing view angle A, that the target playing view angle is playing view angle B, and that switching from playing view angle A to playing view angle B must pass through playing view angles C and D, the user needs to perform the playing view angle switching operation 3 times.
If the playing view angle of the first main stream video is the playing view angle a and the target playing view angle is the playing view angle C, the user is required to perform 1 play view angle switching operation.
Step B4: if the instruction of quitting view angle switching is detected, determining the currently determined second playing view angle as the target playing view angle.
For example, if the time difference between two adjacent playing view angle switching instructions is smaller than a preset time, it is considered that the user has not quit the switching operation, i.e., the user has not yet found the target playing view angle to watch. If no playing view angle switching instruction is received within the preset time, it is considered that the instruction of quitting view angle switching is detected, i.e., the user has found the target playing view angle to watch.
For example, if an operation of exiting the playing view angle switching is detected, the instruction of quitting view angle switching is considered detected, and the user is considered to have found the target playing view angle to watch. The exit operation may be performed by voice, by touching a preset key, or by a preset gesture.
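The timeout-based exit detection described above can be sketched as follows; the threshold value is an assumption, as the embodiment only requires some preset duration.

```python
def quit_switching_detected(last_instruction_time, now, preset_time=2.0):
    """True when no further switch instruction arrived within the preset time,
    i.e. the user is considered to have found the target playing view angle."""
    return (now - last_instruction_time) > preset_time

assert quit_switching_detected(10.0, 13.0)       # 3 s of silence -> quit detected
assert not quit_switching_detected(10.0, 11.0)   # user is still switching
```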
For example, in the first application scenario, if the first time is the playing time corresponding to the image of the first main stream video displayed when the playing view angle switching instruction is detected, the target playing time is also that playing time. In this case, "the image displayed" in step S203 refers to the displayed image of the first main stream video.
For example, in the second application scenario, the target playing time is the playing time corresponding to the frame following the currently displayed target image. In this case, "the image displayed" in step S203 refers to the currently displayed target image.
Step S204: acquiring a second main stream video, where the second main stream video corresponds to the target playing view angle, and the first frame image of the second main stream video is the frame following the last frame image of the transition video.
Illustratively, the second main stream video is a partial video stream of the main stream video captured by the image capturing device corresponding to the target playing view angle.
It can be understood that the main stream videos captured by different image capturing devices are synchronized, that is, the ith frame images in the plurality of main stream videos are images of the same object captured by different image capturing devices at the same time. Since the partial video stream A of the first main stream video has already been played, the partial video stream B with the same playing times in the main stream video captured by the image capturing device corresponding to the target playing view angle does not need to be played. Therefore, the whole main stream video does not need to be acquired; a partial video stream, namely the second main stream video, is sufficient.
Exemplarily, if the playing time corresponding to the first frame image of the second main stream video were the same as the playing time corresponding to the last frame image of the transition video, the user would see images of the same moment under different playing view angles twice, i.e., repeated playback, which degrades the user experience. To avoid this repetition, the first frame image of the second main stream video is the frame following the last frame image of the transition video, thereby improving the user experience.
For example, the second main stream video may be acquired once it is determined that the first playing view angle is switched to the target playing view angle. That is, acquisition of the second main stream video can start while the transition video is being acquired, which achieves preloading of the main stream video.
Step S205: playing the transition video.
Illustratively, step S204 and step S205 may be performed simultaneously; alternatively, step S204 may start before step S205 and continue while step S205 is being executed.
Illustratively, step S204 may be performed after step S205.
Playing the transition video delays the playing of the second main stream video and reserves time for acquiring it, so that stalling does not occur.
Step S206: after the transition video is played, playing the second main stream video.
In the video playing method provided by the embodiment of the application, a first main stream video is played; the first main stream video is any one of a plurality of main stream videos, the plurality of main stream videos are videos of the same object at different playing view angles, and the first main stream video corresponds to a first playing view angle. An underflow video is obtained; since the ith frame image in the underflow video is spliced from the ith frame images in the plurality of main stream videos, obtaining the underflow video is equivalent to obtaining all the main stream videos. When it is determined that the first playing view angle is switched to a target playing view angle, a transition video corresponding to the target playing view angle is obtained from the acquired underflow video and played. While the transition video is playing, a second main stream video continues to be obtained; the second main stream video corresponds to the target playing view angle, and its first frame image is the frame following the last frame image of the transition video. The time spent playing the transition video is the loading time reserved for the electronic device to acquire the second main stream video; playing the transition video therefore avoids the stalling caused by the second main stream video not being playable in time, and improves the switching effect.
In an alternative implementation manner, there are various implementation manners of the step B1, and the embodiments of the present application provide, but are not limited to, the following manners. The implementation of step B1 includes step B11 or step B12.
Step B11: if the playing view angle switching instruction is detected for the first time, obtaining, from the plurality of playing view angles, a second playing view angle adjacent to the first playing view angle in the view angle switching direction.
Illustratively, the instruction to switch the play angle of view includes an angle of view switching direction.
The second playing visual angle adjacent to the first playing visual angle refers to the playing visual angle corresponding to the image acquisition device adjacent to the image acquisition device corresponding to the first playing visual angle.
A second play view adjacent to the first play view in the view switching direction is exemplified below. Assuming that the first playing angle is the playing angle a in fig. 5, and assuming that the angle switching direction is the direction indicated by the curved arrow shown in fig. 5, the second playing angle is the playing angle C.
Illustratively, the viewing angle switching direction may be a direction set by a user, or may be a default direction of the electronic device.
Illustratively, the viewing angle switching direction is parallel to any one of the layout directions of the plurality of image capturing devices, and illustratively, the layout direction of the plurality of image capturing devices refers to a connecting line direction of the plurality of image capturing devices.
Step B12: if the playing view angle switching instruction is detected for the Nth time, obtaining, from the plurality of playing view angles, a second playing view angle adjacent, in the view angle switching direction, to the playing view angle determined when the playing view angle switching instruction was detected for the (N-1)th time.
N is any integer greater than or equal to 2.
For example, if N =2, the playing angle determined when the instruction for switching the playing angle is detected for the first time is the playing angle C in fig. 5, and if the angle switching direction is the direction indicated by the curved arrow shown in fig. 5, the second playing angle adjacent to the playing angle C determined in step B12 is the playing angle D.
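Steps B11 and B12 can be sketched as stepping through a ring of playing view angles. This assumes the views are arranged in the adjacency order implied by fig. 5 (A, C, D, B along the curved arrow); the ordering is illustrative, not defined by the embodiment.

```python
VIEWS = ["A", "C", "D", "B"]  # adjacency order along the curved arrow in fig. 5

def next_view(current, direction=1):
    """Playing view angle adjacent to `current` in the view angle switching
    direction (+1 follows the arrow, -1 goes the other way)."""
    i = VIEWS.index(current)
    return VIEWS[(i + direction) % len(VIEWS)]

assert next_view("A") == "C"   # step B11: first switch instruction
assert next_view("C") == "D"   # step B12: second switch instruction (N = 2)
```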
In an alternative implementation, the execution process of step S203 includes steps C1 to C4.
Step C1: obtaining the consecutive multi-frame images in the underflow video whose playing times are later than or equal to the target playing time.
In an optional implementation manner, all images of which the corresponding play time is later than or equal to the target play time in the underflow video may be obtained, and the continuous multi-frame images are all the images.
In an optional implementation manner, consecutive multi-frame images in all the images of which the corresponding play time in the underflow video is later than or equal to the target play time may be obtained. Illustratively, if all the images include K frames of images, the number of frames of the consecutive multiple frames of images is less than K.
Step C2: obtaining a preset position area corresponding to the target playing view angle, where the position area is the area, within the images contained in the underflow video, where the image at the target playing view angle is located.
For example, the electronic device may obtain the position area corresponding to the target play view from the server.
For example, the electronic device may calculate a position area corresponding to the target play view angle based on the total number of the multiple mainstream videos and a preset splicing rule.
The preset splicing rule refers to a rule that processing images corresponding to the ith frame of image included in a plurality of mainstream videos are spliced into the ith frame of image in the underflow video.
The images under the same playing visual angle are positioned in the same position area of different images in the underflow video.
The following describes the position area corresponding to the target playing view angle with an example. As shown in fig. 6, assuming that the target playing view angle corresponds to the main stream video with the identifier S3, the position area corresponding to the target playing view angle in the ith frame image of the underflow video is the area occupied by the processed image corresponding to the ith frame image of that main stream video, such as the position area of S3-P3 in the image P3 of the underflow video shown in fig. 6, or the position area of S3-P4 in the image P4 of the underflow video.
For example, the representation form of the position area is related to the shape of the position area, and if the position area is a rectangle, the position area can be represented by coordinates of four vertices of the rectangle.
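Under the first splicing mode, the position area can be computed from the index of the target view's main stream video within the grid. A minimal sketch, assuming closely adjacent tiles in a two-column grid and the four-vertex rectangle representation mentioned above; the tile sizes are illustrative:

```python
def position_area(index, tile_w, tile_h, cols=2):
    """Rectangle (left, top, right, bottom) occupied by the processed image
    of the main stream video with the given grid index."""
    row, col = divmod(index, cols)
    x0, y0 = col * tile_w, row * tile_h
    return (x0, y0, x0 + tile_w, y0 + tile_h)

# If the main stream video with identifier S3 occupies grid index 2 in a
# 2x2 layout of 960x540 tiles, its position area is the lower-left tile:
assert position_area(2, 960, 540) == (0, 540, 960, 1080)
```

This corresponds to the second way described above, in which the electronic device calculates the position area from the total number of main stream videos and the preset splicing rule.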
Step C3: cropping images from the position areas in the consecutive multi-frame images to obtain cropped images corresponding to the consecutive multi-frame images respectively.
In summary, since the ith frame image in the underflow video is obtained by splicing the ith frame images included in the plurality of main stream videos, the image at the target playing view angle can be cropped from the consecutive multi-frame images.
The following describes the implementation process of step C3 by way of example.
Fig. 6 is a schematic diagram illustrating an acquisition process of a transition video according to an embodiment of the present application.
For example, assume that the target playing view angle is the playing view angle D shown in fig. 5, that the playing view angle D corresponds to the main stream video with the identifier S3, and that the target playing time is the playing time corresponding to the image P2. The transition video may include: S3-P2 cropped from the image P2 in the underflow video, S3-P3 cropped from the image P3, and S3-P4 cropped from the image P4. In this case, the consecutive multi-frame images obtained in step C1 are a part of all the images in the underflow video whose playing times are later than or equal to the target playing time.
For example, again assume that the target playing view angle is the playing view angle D shown in fig. 5, corresponding to the main stream video with the identifier S3, and that the target playing time is the playing time corresponding to the image P2. The transition video may instead include: S3-P2 cropped from the image P2 in the underflow video, S3-P3 cropped from P3, S3-P4 cropped from P4, S3-P5 cropped from P5, and S3-P6 cropped from P6, as shown in fig. 6. In this case, the consecutive multi-frame images obtained in step C1 are all the images in the underflow video whose playing times are later than or equal to the target playing time.
For example, the size of an image cropped from an image of the underflow video may be smaller than the size of the images of the main stream videos. In that case, the cropped image needs to be enlarged to the same size as the images of the main stream videos before being composed into the transition video. As shown in fig. 6, the cropped images S3-P2, S3-P3, S3-P4, S3-P5, and S3-P6 are enlarged to the same size as the images of the main stream video and then composed into the transition video 61.
Step C4: sorting the obtained cropped images according to the order of the consecutive multi-frame images in the underflow video to obtain the transition video.
Illustratively, S3-P2, S3-P3, S3-P4, S3-P5, and S3-P6 are ordered according to the order of the images P2, P3, P4, P5, and P6 in the underflow video to obtain the transition video, which thus contains, in chronological order: S3-P2, S3-P3, S3-P4, S3-P5, S3-P6.
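Steps C3 and C4 can be sketched as follows, again treating images as 2D lists of pixel values. The `upscale` helper is a nearest-neighbour stand-in for the enlargement described above, and all helper names are illustrative:

```python
def crop(img, left, top, right, bottom):
    """Cut the position area out of one underflow frame (step C3)."""
    return [row[left:right] for row in img[top:bottom]]

def upscale(img, factor=2):
    """Nearest-neighbour enlargement back to main-stream image size."""
    return [[px for px in row for _ in range(factor)]
            for row in img for _ in range(factor)]

def transition_video(underflow_frames, area, factor=2):
    """Crop each consecutive frame and keep playback order (step C4)."""
    return [upscale(crop(f, *area), factor) for f in underflow_frames]

frame = [[1, 2], [3, 4]]                       # one tiny underflow frame
clip = transition_video([frame], (0, 0, 1, 1)) # keep only the top-left tile
assert clip == [[[1, 1], [1, 1]]]
```

Because the input frames are already in playback order, the list comprehension preserves that order, which is exactly the sorting required by step C4.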
In an alternative implementation manner, when it is determined that the first playing view angle is switched to the target playing view angle, the loading of the first main stream video may be suspended, because the first main stream video is no longer played, thereby conserving bandwidth resources.
In an optional implementation manner, if the instruction of quitting view angle switching is detected, the acquisition of the underflow video is stopped, thereby conserving bandwidth resources.
In an optional implementation manner, if the instruction of exiting from the view switching is detected, the electronic device may load a second main stream video corresponding to the target playing view. A specific process may include the following steps D1 to D2.
Step D1: if the instruction of quitting view angle switching is detected, sending a request for the second main stream video corresponding to the target playing view angle to a server.
Illustratively, if the play angle switching instruction is not received after the set duration is exceeded, it is determined that the exit angle switching instruction is detected.
For example, the set duration may be determined based on actual conditions, and will not be described herein.
For example, the exit view angle switching instruction may be a voice instruction, a gesture instruction, or a touch instruction.
Step D2: receiving the second mainstream video from a server.
It can be understood that, after receiving the request for the second main stream video corresponding to the target playing view angle, the server searches for that video and transmits it to the electronic device. The server therefore has a certain response time, and under poor network conditions the electronic device needs a certain time to finish loading the second main stream video.
The embodiments disclosed in the present application describe the method in detail. The method of the present application can be implemented by various types of apparatuses, so the present application also discloses such apparatuses, and specific embodiments are given below for detailed description.
As shown in fig. 7, a structure diagram of a video playing apparatus provided in an embodiment of the present application is provided, where the apparatus includes: a first playing module 71, a first obtaining module 72, a second obtaining module 73, a third obtaining module 74, a second playing module 75, and a third playing module 76, wherein:
a first playing module 71, configured to play a first mainstream video, where the first mainstream video is any one of multiple mainstream videos, the multiple mainstream videos are videos of a same object at different playing view angles, and the first mainstream video corresponds to a first playing view angle;
a first obtaining module 72, configured to obtain an underflow video, where an ith frame image in the underflow video is obtained by stitching based on an ith frame image in the multiple mainstream videos, and the ith frame image in the underflow video is any frame image in the underflow video;
a second obtaining module 73, configured to, when it is determined that the first playing perspective is switched to the target playing perspective, respectively intercept images at the target playing perspective from consecutive multi-frame images included in the underflow video to obtain a transition video; the minimum playing time of the playing times respectively corresponding to the continuous multi-frame images in the underflow video is equal to or later than a target playing time, and the target playing time is equal to or later than the playing time corresponding to the image displayed under the condition that the first playing visual angle is determined to be switched to the target playing visual angle;
a third obtaining module 74, configured to obtain a second mainstream video, where the second mainstream video corresponds to the target playing view, and a first frame image included in the second mainstream video is a next frame image of a last frame image of the transition video;
a second playing module 75, configured to play the transition video;
and a third playing module 76, configured to play the second main stream video after the transition video is played.
In an optional implementation manner, the method further includes:
the first determining module is used for determining a second playing visual angle to be switched to if the playing visual angle switching instruction is detected;
a display module, configured to display a target image, where the target image is an image corresponding to the second play view in an image at a first time in the underflow video, and the first time is equal to or later than a play time corresponding to an image of the first main stream video that is displayed when the play angle switching instruction is detected;
the triggering module is used for triggering the first determining module;
and the second determining module is used for determining the currently determined second playing visual angle as the target playing visual angle if the instruction of quitting switching the visual angles is detected.
In an optional implementation manner, the method further includes:
and the acquisition stopping module is used for stopping acquiring the underflow video if an instruction of quitting the view angle switching is detected.
In an optional implementation manner, the second obtaining module includes:
a first obtaining unit, configured to obtain the consecutive multi-frame images that are located in the underflow video and are later than or equal to the target play time;
a second obtaining unit, configured to obtain a preset position region corresponding to the target playing view, where the position region is a position region where an image at the target playing view is located in an image included in the underflow video;
the intercepting unit is used for respectively intercepting images from the position areas in the continuous multi-frame images so as to obtain intercepted images respectively corresponding to the continuous multi-frame images;
and the sequencing unit is used for sequencing the obtained intercepted images according to the sequence of the continuous multi-frame images in the underflow video so as to obtain a transition video.
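The crop-and-sequence pipeline implemented by the first obtaining unit, the intercepting unit, and the sequencing unit can be sketched as follows. This is a minimal illustration, not the patented implementation: representing frames as NumPy arrays and describing the preset position region as a `(top, left, height, width)` tuple are assumptions not fixed by this document.

```python
import numpy as np

def build_transition_video(underflow_frames, region):
    """Intercept the preset target-view region from each consecutive
    underflow frame, preserving the frames' original order, so that the
    intercepted images form the transition video.

    underflow_frames: list of H x W x C arrays, already ordered by play time
    region: (top, left, height, width) of the target view inside each frame
    """
    top, left, h, w = region
    # Crop the same preset region out of every frame; the list order is
    # the order of the frames in the underflow video, so no re-sort needed.
    return [frame[top:top + h, left:left + w] for frame in underflow_frames]
```

Because every underflow frame tiles the per-view sub-images at fixed positions, a single region lookup per view angle suffices for the whole sequence.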
In an optional implementation manner, the method further includes:
the processing module is used for reducing the size of the ith frame image contained in each of the multiple main stream videos and/or reducing the resolution of the ith frame image contained in each of the multiple main stream videos to obtain a processing image corresponding to the ith frame image contained in each of the multiple main stream videos;
and the splicing module is used for splicing the processing images corresponding to the ith frame image contained in the plurality of main stream videos respectively to obtain the ith frame image in the underflow video so as to obtain the underflow video.
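The processing module and splicing module described above can be sketched as below. The stride subsampling (standing in for a real resize) and the single horizontal strip layout are illustrative assumptions; the document does not fix a concrete downscaling method or tiling arrangement.

```python
import numpy as np

def make_underflow_frame(main_frames, scale=2):
    """Build the i-th frame of the underflow video: reduce the size and
    resolution of the i-th frame of every main stream video, then splice
    the reduced frames into one composite frame.

    main_frames: list of H x W x C arrays, the i-th frame of each main
                 stream video (all the same shape)
    """
    # Reduce size/resolution of each per-view frame (stride subsampling
    # is a stand-in for a proper resampling filter).
    shrunken = [f[::scale, ::scale] for f in main_frames]
    # Splice the processed images side by side into the underflow frame.
    return np.concatenate(shrunken, axis=1)
```

All main stream frames must share the same dimensions so that each view occupies a fixed, predictable region of the spliced frame.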
As shown in fig. 8, a block diagram of an electronic device provided in an embodiment of the present application is shown. The electronic device includes, but is not limited to: a memory 81, a processor 82, a network interface 83, an I/O controller 84, and a communication bus 85.

It should be noted that, as will be appreciated by those skilled in the art, the structure of the electronic device shown in fig. 8 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown in fig. 8, may combine some components, or may have a different arrangement of components.
The following describes each component of the electronic device in detail with reference to fig. 8:
the memory 81 stores programs.
The memory 81 may include a Random-Access Memory (RAM) 811 and a Read-Only Memory (ROM) 812, and may also include a mass storage device 813, such as at least one disk memory. Of course, the electronic device may also include hardware required by other services.
The memory 81 is used for storing the executable instructions of the processor 82. The processor 82 is configured to perform any of the steps of the video playing method embodiments.
A processor 82 configured to execute the program, the program being specifically configured to:
playing a first main stream video, wherein the first main stream video is any one of a plurality of main stream videos, the plurality of main stream videos are videos of the same object at different playing visual angles, and the first main stream video corresponds to a first playing visual angle;
obtaining an underflow video, wherein an ith frame image in the underflow video is obtained by splicing the ith frame images in the plurality of main stream videos, and the ith frame image in the underflow video is any frame image in the underflow video;
under the condition that the first playing visual angle is determined to be switched to a target playing visual angle, images under the target playing visual angle are respectively intercepted from continuous multi-frame images contained in the underflow video to obtain a transition video; the minimum playing time of the playing times respectively corresponding to the continuous multi-frame images in the underflow video is equal to or later than a target playing time, and the target playing time is equal to or later than the playing time corresponding to the image displayed under the condition that the first playing visual angle is determined to be switched to the target playing visual angle;
acquiring a second main stream video, wherein the second main stream video corresponds to the target playing visual angle, and a first frame image contained in the second main stream video is a next frame image of a last frame image of the transition video;
playing the transition video;
and after the transitional video is played, playing the second main stream video.
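The six program steps above can be strung together in a short end-to-end sketch. Frame indices stand in for play times, and the three-frame transition length and the `crop` callback are illustrative assumptions, not parameters specified by this document.

```python
def play_with_view_switch(main_streams, underflow_frames, first_view,
                          target_view, switch_index, crop, transition_len=3):
    """Play the first main stream up to the switch point, bridge the gap
    with images intercepted from the underflow video, then resume the
    target main stream at the frame right after the transition's last frame."""
    # Frames of the first main stream shown before the switch was detected.
    played = list(main_streams[first_view][:switch_index + 1])
    # Target play time: not earlier than the frame displayed at the switch.
    start = switch_index + 1
    transition = [crop(f, target_view)
                  for f in underflow_frames[start:start + transition_len]]
    # First frame of the second main stream follows the transition's last frame.
    resume = start + len(transition)
    return played, transition, list(main_streams[target_view][resume:])
```

The key invariant is that the second main stream resumes exactly one frame after the transition video ends, so playback is gapless across the view switch.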
The processor 82 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 81 and calling data stored in the memory 81, thereby performing overall monitoring of the electronic device. Processor 82 may include one or more processing units; alternatively, the processor 82 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 82.
The processor 82 may be a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention, or the like.
The processor 82, the memory 81, the network interface 83, and the I/O controller 84 may be connected to each other by a communication bus 85, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc.
In an exemplary embodiment, the electronic device may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic elements, for performing the above-described video playing method.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as a memory 81 comprising instructions, executable by a processor 82 of an electronic device to perform the above-described method is also provided. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, which can be directly loaded into an internal memory of a computer, such as the memory 81 described above, and contains software code; when the program is loaded into and executed by the computer, it can implement the method shown in any of the above embodiments of the video playing method.
Note that the features described in the embodiments in the present specification may be replaced with or combined with each other. Since the device and system embodiments are substantially similar to the method embodiments, they are described briefly; for relevant details, refer to the corresponding description of the method embodiments.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A video playing method, comprising:
playing a first mainstream video, wherein the first mainstream video is any one of a plurality of mainstream videos, the plurality of mainstream videos are videos of the same object at different playing visual angles, and the first mainstream video corresponds to a first playing visual angle;
obtaining an underflow video, wherein an ith frame image in the underflow video is obtained by splicing the ith frame images in the plurality of main stream videos, and the ith frame image in the underflow video is any frame image in the underflow video;
under the condition that the first playing visual angle is switched to the target playing visual angle, respectively intercepting images under the target playing visual angle from continuous multi-frame images contained in the underflow video to obtain a transition video; the minimum playing time of the playing times respectively corresponding to the continuous multi-frame images in the underflow video is equal to or later than a target playing time, and the target playing time is equal to or later than the playing time corresponding to the image displayed under the condition that the first playing visual angle is determined to be switched to the target playing visual angle;
acquiring a second main stream video, wherein the second main stream video corresponds to the target playing visual angle, and a first frame image contained in the second main stream video is a next frame image of a last frame image of the transition video;
playing the transition video;
and after the transitional video is played, playing the second main stream video.
2. The video playing method according to claim 1, wherein the step of determining that the first playing visual angle is switched to the target playing visual angle comprises:
if a play visual angle switching instruction is detected, determining a second play visual angle to be switched to;
displaying a target image, wherein the target image is an image corresponding to the second playing visual angle in an image at a first moment in the underflow video, and the first moment is equal to or later than the playing moment corresponding to the image of the first main stream video displayed when the play visual angle switching instruction is detected;
returning to the step of: if a play visual angle switching instruction is detected, determining a second play visual angle to be switched to;
and if the instruction of quitting the view angle switching is detected, determining the second playing view angle which is determined currently as the target playing view angle.
3. The video playing method according to claim 1, further comprising:
and if the instruction of quitting the view angle switching is detected, stopping acquiring the underflow video.
4. The video playing method according to any one of claims 1 to 3, wherein the step of respectively intercepting the images at the target playing perspective from the consecutive multi-frame images included in the underflow video comprises:
obtaining the continuous multiframe images which are positioned in the underflow video and are later than or equal to the target playing time;
acquiring a preset position area corresponding to the target playing visual angle, wherein the position area is a position area of an image under the target playing visual angle, which is located in an image contained in the underflow video;
respectively intercepting images from the position areas in the continuous multi-frame images to obtain intercepted images respectively corresponding to the continuous multi-frame images;
and sequencing the obtained intercepted images according to the sequence of the continuous multi-frame images in the underflow video so as to obtain a transition video.
5. The video playing method according to any one of claims 1 to 3, wherein the step of obtaining the underflow video comprises:
for the ith frame image in the underflow video, reducing the size of the ith frame image contained in each of the plurality of main stream videos, and/or reducing the resolution to obtain a processing image corresponding to the ith frame image contained in each of the plurality of main stream videos;
and splicing the processing images corresponding to the ith frame image contained in the plurality of main stream videos respectively to obtain the ith frame image in the underflow video so as to obtain the underflow video.
6. A video playback apparatus, comprising:
the first playing module is used for playing a first mainstream video, wherein the first mainstream video is any one of a plurality of mainstream videos, the plurality of mainstream videos are videos of the same object at different playing visual angles, and the first mainstream video corresponds to a first playing visual angle;
the first obtaining module is used for obtaining an underflow video, wherein the ith frame image in the underflow video is obtained by splicing the ith frame images in the plurality of mainstream videos, and the ith frame image in the underflow video is any frame image in the underflow video;
the second acquisition module is used for respectively intercepting images under a target playing visual angle from continuous multi-frame images contained in the underflow video under the condition that the first playing visual angle is determined to be switched to the target playing visual angle so as to obtain a transition video; the minimum playing time of the playing times respectively corresponding to the continuous multi-frame images in the underflow video is equal to or later than a target playing time, and the target playing time is equal to or later than the playing time corresponding to the image displayed under the condition that the first playing visual angle is determined to be switched to the target playing visual angle;
a third obtaining module, configured to obtain a second mainstream video, where the second mainstream video corresponds to the target playing view, and a first frame image included in the second mainstream video is a next frame image of a last frame image of the transition video;
the second playing module is used for playing the transition video;
and the third playing module is used for playing the second main stream video after the transitional video is played.
7. The video playback device of claim 6, wherein the second obtaining module comprises:
the first acquisition unit is used for acquiring the continuous multi-frame images which are positioned in the underflow video and are later than or equal to the target playing time;
a second obtaining unit, configured to obtain a preset position region corresponding to the target playing view, where the position region is a position region where an image at the target playing view is located in an image included in the underflow video;
the intercepting unit is used for respectively intercepting images from the position areas in the continuous multi-frame images so as to obtain the intercepted images respectively corresponding to the continuous multi-frame images;
and the sequencing unit is used for sequencing the obtained intercepted images according to the sequence of the continuous multi-frame images in the underflow video so as to obtain a transition video.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video playback method of any of claims 1 to 5.
9. A computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the video playback method of any one of claims 1 to 5.
CN202110512119.7A 2021-05-11 2021-05-11 Video playing method, device, electronic equipment, medium and product Active CN113259770B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110512119.7A CN113259770B (en) 2021-05-11 2021-05-11 Video playing method, device, electronic equipment, medium and product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110512119.7A CN113259770B (en) 2021-05-11 2021-05-11 Video playing method, device, electronic equipment, medium and product

Publications (2)

Publication Number Publication Date
CN113259770A CN113259770A (en) 2021-08-13
CN113259770B true CN113259770B (en) 2022-11-18

Family

ID=77222748

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110512119.7A Active CN113259770B (en) 2021-05-11 2021-05-11 Video playing method, device, electronic equipment, medium and product

Country Status (1)

Country Link
CN (1) CN113259770B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113949884A (en) * 2021-09-02 2022-01-18 北京大学深圳研究生院 Multi-view video data processing method, device and storage medium
CN113938711A (en) * 2021-10-13 2022-01-14 北京奇艺世纪科技有限公司 Visual angle switching method and device, user side, server and storage medium
CN114554292A (en) * 2022-02-21 2022-05-27 北京字节跳动网络技术有限公司 Method and device for switching visual angle, electronic equipment, storage medium and program product

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018223554A1 (en) * 2017-06-08 2018-12-13 简极科技有限公司 Multi-source video clipping and playing method and system
CN111447461A (en) * 2020-05-20 2020-07-24 上海科技大学 Synchronous switching method, device, equipment and medium for multi-view live video

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080268961A1 (en) * 2007-04-30 2008-10-30 Michael Brook Method of creating video in a virtual world and method of distributing and using same
CN111510782A (en) * 2017-04-28 2020-08-07 华为技术有限公司 Video playing method, virtual reality equipment, server and computer storage medium
CN110351607B (en) * 2018-04-04 2022-01-14 阿里巴巴(中国)有限公司 Method for switching panoramic video scenes, computer storage medium and client

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018223554A1 (en) * 2017-06-08 2018-12-13 简极科技有限公司 Multi-source video clipping and playing method and system
CN111447461A (en) * 2020-05-20 2020-07-24 上海科技大学 Synchronous switching method, device, equipment and medium for multi-view live video

Also Published As

Publication number Publication date
CN113259770A (en) 2021-08-13

Similar Documents

Publication Publication Date Title
CN113259770B (en) Video playing method, device, electronic equipment, medium and product
CN111970524B (en) Control method, device, system, equipment and medium for interactive live broadcast and microphone connection
CN113076048B (en) Video display method and device, electronic equipment and storage medium
JP6505327B2 (en) Method, apparatus and system for acquiring video data and computer readable storage medium
CN104754223A (en) Method for generating thumbnail and shooting terminal
WO2023169305A1 (en) Special effect video generating method and apparatus, electronic device, and storage medium
US11153473B2 (en) Control method, device and electronic apparatus for image acquisition
EP4329285A1 (en) Video photographing method and apparatus, electronic device, and storage medium
CN114722320A (en) Page switching method and device and interaction method of terminal equipment
CN111277728A (en) Video detection method and device, computer-readable storage medium and electronic device
CN113010135B (en) Data processing method and device, display terminal and storage medium
CN112367465B (en) Image output method and device and electronic equipment
CN113438550B (en) Video playing method, video conference method, live broadcasting method and related devices
CN110809166B (en) Video data processing method and device and electronic equipment
CN110602410B (en) Image processing method and device, aerial camera and storage medium
US20210051276A1 (en) Method and apparatus for providing video in portable terminal
CN112449243B (en) Video processing method, device, equipment and storage medium
CN110489040B (en) Method and device for displaying feature model, terminal and storage medium
CN114501136A (en) Image acquisition method and device, mobile terminal and storage medium
CN114339071A (en) Image processing circuit, image processing method and electronic device
CN112291474A (en) Image acquisition method and device and electronic equipment
WO2018072056A1 (en) Sharing network content
CN113794836B (en) Bullet time video generation method, device, system, equipment and medium
CN114554133B (en) Information processing method and device and electronic equipment
CN112668474B (en) Plane generation method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant