CN110198457B - Video playing method and device, system, storage medium, terminal and server thereof - Google Patents


Info

Publication number
CN110198457B
CN110198457B (application CN201810161973.1A)
Authority
CN
China
Prior art keywords
playing
video
video data
parameter
recording
Prior art date
Legal status
Active
Application number
CN201810161973.1A
Other languages
Chinese (zh)
Other versions
CN110198457A (en)
Inventor
刘艳峰
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201810161973.1A
Publication of CN110198457A
Application granted
Publication of CN110198457B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

Embodiments of the invention disclose a video playing method, together with a device, a system, a storage medium, a terminal, and a server. The method comprises the following steps: when a video playing terminal detects a three-dimensional video playing signal input for a target program, it acquires the playing view angle parameter carried by the signal and the program identifier of the target program, and sends a video acquisition request carrying the program identifier and the playing view angle parameter to a server; the server receives the video acquisition request, extracts the program identifier and the playing view angle parameter from it, and acquires playing video data based on them; the server performs three-dimensional synthesis processing on the playing video data, obtains a first storage address of the synthesized playing video data, and sends the first storage address to the video playing terminal; the video playing terminal then plays the synthesized playing video data from the first storage address. The invention enriches the forms in which video can be played.

Description

Video playing method and device, system, storage medium, terminal and server thereof
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a video playing method, and an apparatus, a system, a storage medium, a terminal, and a server thereof.
Background
People perceive depth when viewing real objects because each eye observes an object from a slightly different angle; the brain fuses the two slightly different pictures entering the left and right eyes into a single image, producing the impression of depth. Three-dimensional (3D) stereoscopic television exploits this principle: it plays specially processed pictures and uses 3D glasses to reproduce the imaging of the human eyes, so that the viewed video appears three-dimensional.
At present, video telephony and video television mostly use two-dimensional (2D) display. Taking live video as an example, the broadcast pictures the viewer sees are processed planar two-dimensional images confined to the plane of the screen, with no stereoscopic or lifelike quality. Live content is transmitted as a real-time video stream from a single viewpoint, so a viewer can only passively accept the viewing angle imposed by the broadcast. The video playing form is therefore single and inflexible.
Disclosure of Invention
Embodiments of the invention provide a video playing method, together with a device, a system, a storage medium, a terminal, and a server, which can play 3D video from a viewing angle selected by the user, enriching the video playing forms and enhancing the interactivity between the video playing device and the user.
A first aspect of an embodiment of the present invention provides a video playing method, which may include:
when a video playing terminal detects a three-dimensional video playing signal input aiming at a target program, the video playing terminal acquires a playing visual angle parameter carried by the three-dimensional video playing signal and a program identifier of the target program, and sends a video acquiring request carrying the program identifier and the playing visual angle parameter to a server;
the server receives the video acquisition request, acquires the program identification and the playing view angle parameter in the video acquisition request, and acquires playing video data based on the program identification and the playing view angle parameter;
the server carries out three-dimensional synthesis processing on the played video data, acquires a first storage address of the played video data after the three-dimensional synthesis processing, and sends the first storage address to the video playing terminal;
and the video playing terminal plays the played video data after the three-dimensional synthesis processing based on the first storage address.
A second aspect of the embodiments of the present invention provides a video playing method, which may include:
when a three-dimensional video playing signal input aiming at a target program is detected, acquiring a playing visual angle parameter carried by the three-dimensional video playing signal and a program identifier of the target program;
sending a video acquisition request carrying the program identifier and the playing view angle parameter to a server, so that the server acquires playing video data based on the program identifier and the playing view angle parameter, performs three-dimensional synthesis processing on the playing video data, and acquires a first storage address of the playing video data after the three-dimensional synthesis processing;
and playing the playing video data after the three-dimensional synthesis processing based on the first storage address.
Wherein the method further comprises:
when detecting a view angle parameter switching signal input for a recording view angle parameter set sent by the server, acquiring a view angle switching parameter carried by the view angle parameter switching signal;
sending a video switching request carrying the view switching parameter to the server so that the server acquires switching video data based on the view switching parameter, and acquiring a second storage address of the switching video data after three-dimensional synthesis processing after the switching video data is subjected to three-dimensional synthesis processing;
and playing the switching video data after the three-dimensional synthesis processing based on the second storage address.
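The view-switching exchange described above can be sketched in a few lines. This is an illustrative model only: the message field names (`program_id`, `view_switch_param`), the address format, and the placeholder synthesis step are assumptions, not details from the patent.

```python
def build_switch_request(program_id, switch_angle):
    """Terminal side: wrap the view switching parameter in a video switching request."""
    return {"type": "video_switch", "program_id": program_id,
            "view_switch_param": switch_angle}

def handle_switch_request(request, storage):
    """Server side: fetch the switching video data, 3D-synthesize it, store the
    result, and return its second storage address. The synthesis and the
    storage layout here are placeholders for the real processing."""
    key = (request["program_id"], request["view_switch_param"])
    synthesized = f"3d({key[0]}@{key[1]})"     # stand-in for real 3D synthesis
    address = f"/store/{key[0]}/{key[1]}"      # "second storage address"
    storage[address] = synthesized
    return address

store = {}
addr = handle_switch_request(build_switch_request("match01", 45), store)
# the terminal would then play the synthesized data found at `addr`
```

The terminal only ever receives an address, never the bulk data, which mirrors the request/address hand-off the claims describe.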
A third aspect of the embodiments of the present invention provides a video playing method, which may include:
receiving a video acquisition request sent by a video playing terminal when a three-dimensional video playing signal input aiming at a target program is detected, and acquiring a program identifier and a playing view angle parameter of the target program in the video acquisition request;
acquiring playing video data based on the program identification and the playing visual angle parameter;
and performing three-dimensional synthesis processing on the played video data, acquiring a first storage address of the played video data after the three-dimensional synthesis processing, and sending the first storage address to the video playing terminal, so that the video playing terminal plays the played video data after the three-dimensional synthesis processing based on the first storage address.
Wherein the acquiring of the playing video data based on the program identifier and the playing view angle parameter includes:
receiving a recorded video data set corresponding to the program identification acquired by a video recording terminal in real time;
and selecting the playing video data corresponding to the playing visual angle parameter from the recorded video data set.
Wherein selecting the playing video data corresponding to the playing view angle parameter from the recorded video data set includes:
acquiring the recording view angle parameter corresponding to each item of recorded video data in the recorded video data set to generate a recording view angle parameter set;
acquiring, from the recording view angle parameter set, a first recording view angle parameter matching the playing view angle parameter, and the second recording view angle parameters adjacent to the first on its left and right;
and acquiring, from the recorded video data set, the target recorded video data corresponding to the first and second recording view angle parameters, and taking the target recorded video data as the playing video data.
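The selection procedure above can be sketched as follows. All names are illustrative, and representing view angle parameters as numeric angles with closest-match lookup is an assumption for the sake of the example.

```python
def select_playing_video_data(recorded_set, playing_angle):
    """recorded_set maps a recording view angle (degrees) to its video data.
    Returns the data for the matched angle plus its left/right neighbours."""
    angles = sorted(recorded_set)  # the recording view angle parameter set
    # First recording view angle parameter: the one matching the playing angle.
    i = min(range(len(angles)), key=lambda k: abs(angles[k] - playing_angle))
    # Second recording view angle parameters: left and right neighbours, if any.
    neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(angles)]
    chosen = [angles[i]] + [angles[j] for j in neighbours]
    # Target recorded video data, used as the playing video data.
    return {a: recorded_set[a] for a in chosen}

streams = {0: "cam0", 30: "cam1", 60: "cam2", 90: "cam3"}
picked = select_playing_video_data(streams, 32)   # matches 30, plus 0 and 60
```

Picking the neighbours alongside the matched view is what later makes per-frame 3D synthesis possible, since the synthesis step needs views on both sides of the selected one.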
After obtaining the recording view angle parameters corresponding to each recorded video data in the recorded video data set to generate a recorded view angle parameter set, the method further includes:
and sending the recording visual angle parameter set to the video playing terminal so that the video playing terminal receives and displays the recording visual angle parameter set.
Wherein the method further comprises:
receiving a video switching request sent by the video playing terminal when detecting a view parameter switching signal input for the recording view parameter set, and acquiring a view switching parameter in the video switching request;
acquiring switching video data based on the view switching parameter;
and performing three-dimensional synthesis processing on the switching video data, acquiring a second storage address of the switching video data after the three-dimensional synthesis processing, and sending the second storage address to the video playing terminal so that the video playing terminal plays the switching video data after the three-dimensional synthesis processing based on the second storage address.
Wherein, after receiving the recorded video data set corresponding to the program identifier collected by the video recording terminal in real time, the method further comprises:
converting the recorded video data set into a code stream data set for storage;
the selecting the playing video data corresponding to the playing view angle parameter from the recorded video data set includes:
and selecting the playing code stream data corresponding to the playing visual angle parameter from the code stream data set.
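The code-stream storage variant above can be illustrated minimally. The `encode` function is a stand-in for a real video encoder (e.g. H.264); the dictionary keyed by view angle is an assumed storage layout, not one the patent specifies.

```python
def encode(frames):
    """Stand-in for a real encoder: concatenate frames into a byte stream."""
    return b"".join(f.encode() for f in frames)

def to_code_stream_set(recorded_set):
    """Convert each view's recorded frames into a stored code stream."""
    return {angle: encode(frames) for angle, frames in recorded_set.items()}

def select_code_stream(code_stream_set, playing_angle):
    """Select the playing code stream corresponding to the view angle."""
    return code_stream_set[playing_angle]

recorded = {0: ["f0", "f1"], 30: ["g0", "g1"]}
streams = to_code_stream_set(recorded)
```

Converting once and storing the result means later play and switch requests can be served by a lookup rather than by re-encoding.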
A fourth aspect of the present invention provides a video playing system, which may include a video playing terminal and a server, where:
the video playing terminal is used for acquiring playing view angle parameters carried by three-dimensional video playing signals and program identifiers of target programs when the three-dimensional video playing signals input aiming at the target programs are detected, and sending video acquiring requests carrying the program identifiers and the playing view angle parameters to a server;
the server is used for receiving the video acquisition request, acquiring the program identifier and the playing view angle parameter in the video acquisition request, and acquiring playing video data based on the program identifier and the playing view angle parameter;
the server is further used for performing three-dimensional synthesis processing on the played video data, acquiring a first storage address of the played video data after the three-dimensional synthesis processing, and sending the first storage address to the video playing terminal;
and the video playing terminal is also used for playing the played video data after the three-dimensional synthesis processing based on the first storage address.
A fifth aspect of an embodiment of the present invention provides a video playback device, which may include:
the information acquisition unit is used for acquiring a playing view angle parameter carried by a three-dimensional video playing signal and a program identifier of a target program when the three-dimensional video playing signal input aiming at the target program is detected;
a request sending unit, configured to send a video acquisition request carrying the program identifier and the playing view parameter to a server, so that the server acquires playing video data based on the program identifier and the playing view parameter, performs three-dimensional synthesis on the playing video data, and acquires a first storage address of the playing video data after the three-dimensional synthesis;
and the data playing unit is used for playing the played video data after the three-dimensional synthesis processing based on the first storage address.
A sixth aspect of embodiments of the present invention provides a computer storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor and to carry out the method steps of the second aspect.
A seventh aspect of the embodiments of the present invention provides a video playback terminal, which may include: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the steps of:
when a three-dimensional video playing signal input aiming at a target program is detected, acquiring a playing visual angle parameter carried by the three-dimensional video playing signal and a program identifier of the target program;
sending a video acquisition request carrying the program identifier and the playing view angle parameter to a server, so that the server acquires playing video data based on the program identifier and the playing view angle parameter, performs three-dimensional synthesis processing on the playing video data, and acquires a first storage address of the playing video data after the three-dimensional synthesis processing;
and playing the playing video data after the three-dimensional synthesis processing based on the first storage address.
An eighth aspect of the embodiments of the present invention provides a video playback device, which may include: a parameter acquisition unit, configured to receive a video acquisition request sent by a video playing terminal when a three-dimensional video playing signal input for a target program is detected, and to acquire the program identifier of the target program and the playing view angle parameter in the video acquisition request;
the data acquisition unit is used for acquiring playing video data based on the program identification and the playing visual angle parameter;
the address acquisition unit is used for performing three-dimensional synthesis processing on the played video data, acquiring a first storage address of the played video data after the three-dimensional synthesis processing, and sending the first storage address to the video playing terminal, so that the video playing terminal plays the played video data after the three-dimensional synthesis processing based on the first storage address.
A ninth aspect of embodiments of the present invention provides a computer storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor and to carry out the method steps of the third aspect described above.
A tenth aspect of the embodiments of the present invention provides a server, which may include: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the steps of:
receiving a video acquisition request sent by a video playing terminal when a three-dimensional video playing signal input aiming at a target program is detected, and acquiring a program identifier and a playing view angle parameter of the target program in the video acquisition request;
acquiring playing video data based on the program identification and the playing visual angle parameter;
and performing three-dimensional synthesis processing on the played video data, acquiring a first storage address of the played video data after the three-dimensional synthesis processing, and sending the first storage address to the video playing terminal, so that the video playing terminal plays the played video data after the three-dimensional synthesis processing based on the first storage address.
In the embodiments of the present invention, when the video playing terminal detects a three-dimensional video playing signal input for a target program, it acquires the playing view angle parameter carried by the signal and the program identifier of the target program, and sends a video acquisition request carrying both to the server. After receiving the request, the server acquires playing video data based on the program identifier and the playing view angle parameter, performs three-dimensional synthesis processing on the data, generates a first storage address for the synthesized playing video data, and provides the address to the video playing terminal, which then plays the synthesized video data from the first storage address. Because the terminal generates the request, the server looks up the playing video data according to it and performs 3D synthesis, and the terminal then plays the result, the 3D video is played from the viewing angle selected by the user, which enriches the video playing forms, enhances the interactivity between the video playing device and the user, and improves the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a schematic flowchart of a video playing method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of another video playing method according to an embodiment of the present invention;
fig. 3 is a schematic interface diagram of a program recording site according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of another video playing method according to an embodiment of the present invention;
fig. 5 is a schematic flowchart of another video playing method according to an embodiment of the present invention;
fig. 6 is a schematic flowchart of another video playing method according to an embodiment of the present invention;
fig. 7 is a schematic flowchart of another video playing method according to an embodiment of the present invention;
fig. 8 is a schematic flowchart of another video playing method according to an embodiment of the present invention;
fig. 9 is a schematic flowchart of another video playing method according to an embodiment of the present invention;
fig. 10 is a schematic flowchart of another video playing method according to an embodiment of the present invention;
fig. 11 is a schematic flowchart of another video playing method according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of a video playing system according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of a video playing device according to an embodiment of the present invention;
fig. 14 is a schematic structural diagram of another video playback device according to an embodiment of the present invention;
fig. 15 is a schematic structural diagram of a data acquisition unit according to an embodiment of the present invention;
fig. 16 is a schematic structural diagram of another data acquisition unit according to an embodiment of the present invention;
fig. 17 is a schematic structural diagram of a video playback terminal according to an embodiment of the present invention;
fig. 18 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The video playing method provided by the embodiments of the present invention can be applied to three-dimensional video playing scenes such as live broadcast. For example, when a video playing terminal detects a three-dimensional video playing signal input for a target program, it acquires the playing view angle parameter carried by the signal and the program identifier of the target program, and sends a video acquisition request carrying both to a server. After receiving the request, the server acquires playing video data based on the program identifier and the playing view angle parameter, performs three-dimensional synthesis processing on the data, generates a first storage address for the synthesized playing video data, and provides the address to the video playing terminal, which then plays the synthesized video data from the first storage address. Because the terminal generates the request, the server looks up the playing video data according to it and performs 3D synthesis, and the terminal then plays the result, the 3D video is played from the viewing angle selected by the user, which enriches the video playing forms, enhances the interactivity between the video playing device and the user, and improves the user experience.
The video playing terminal in the embodiments of the present invention may be a 3D television terminal, or a mobile terminal with a 3D video playing function, that accesses the Internet through a set top box or a computer to provide services such as digital television, time-shifted television, and interactive television; it may also be a terminal device such as a smart phone, a tablet computer, or a Mobile Internet Device (MID). The server in the embodiments of the present invention may be a computer device that manages resources and provides data support for the video playing terminal, for example a server with strong data-carrying and processing capacity.
The following describes in detail a video playing method provided by an embodiment of the present invention with reference to fig. 1 to 11.
Referring to fig. 1, a flow diagram of a video playing method is provided in the embodiment of the present invention, which is described with a video playing terminal and a server as two sides, and as shown in fig. 1, the method in the embodiment of the present invention may include the following steps S101 to S104.
S101, when a video playing terminal detects a three-dimensional video playing signal input aiming at a target program, the video playing terminal acquires a playing visual angle parameter carried by the three-dimensional video playing signal and a program identifier of the target program, and sends a video acquiring request carrying the program identifier and the playing visual angle parameter to a server;
It is to be understood that the playing view angle parameter may be one of the view angle parameters available to the user for the target program, or a default (preset) view angle parameter, such as the main view angle, used when the user makes no selection. The playing view angle parameters are determined by the video recording terminals: at least one video recording terminal records at the recording site of the target program, each with its own recording view angle parameter, and a recording view angle parameter becomes a playing view angle parameter once it is sent to the video playing terminal. Optionally, the video playing terminal may display the playing view angle parameters, for example in the form of a list.
The program identifier uniquely identifies the target program; it may be the name of the target program, or its broadcast channel together with its broadcast time.
In a specific implementation, when a user performs a touch operation on the video playing interface of the video playing terminal, inputs voice through its microphone, or inputs a gesture through its camera, the video playing terminal recognizes the received input as a three-dimensional video playing signal for a target program, parses the signal, extracts the playing view angle parameter it carries and the program identifier of the target program, generates a video acquisition request from the extracted information, and sends the request to the server. The three-dimensional video playing signal may be a live-broadcast signal or a recorded-broadcast signal. Note that the video playing terminal may support 3D and 2D playing modes simultaneously; in one feasible implementation, the three-dimensional video playing signal is produced by selecting a touch key for the 3D playing mode.
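Step S101 can be sketched as follows. The signal representation, the field names, and the `"main"` default view are invented for illustration; a real terminal would extract these values from its input-handling layer.

```python
def parse_play_signal(signal):
    """Treat an input as a 3D video playing signal only when it carries the
    3D playing-mode flag; otherwise report no match (e.g. 2D mode)."""
    if signal.get("mode") != "3d":
        return None
    return {"program_id": signal["program_id"],
            "play_view_param": signal.get("view", "main")}  # default view angle

def build_acquisition_request(signal):
    """Build the video acquisition request carrying the program identifier
    and the playing view angle parameter, as in step S101."""
    info = parse_play_signal(signal)
    if info is None:
        raise ValueError("not a three-dimensional video playing signal")
    return {"type": "video_acquire", **info}

req = build_acquisition_request({"mode": "3d", "program_id": "show42", "view": 30})
```

Gating on the 3D flag reflects the point that the terminal may support 2D and 3D modes simultaneously and must distinguish the two before requesting synthesis.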
3D is a concept relative to the 2D plane and involves three dimensions: height, width, and depth. The world humans live in is a three-dimensional space, and the objects observed in it likewise have three dimensions. 3D images match the scenes people are used to in real life and are more stereoscopic and vivid; their stereoscopic quality and depth of field give the viewer a strong sense of immersion; and they lend themselves to strong visual impact, for example in live sports events, live concerts, and large-scale film scenes.
S102, the server receives the video acquisition request, acquires the program identifier and the playing view angle parameter in the video acquisition request, and acquires playing video data based on the program identifier and the playing view angle parameter;
it is understood that the video data source for playing the video data as the target program may be a video stream or a code stream.
In a specific implementation, after receiving the video acquisition request sent by the video playing terminal, the server parses the request and extracts the program identifier and the playing view angle parameter it carries; it then either receives, in real time, the recorded video data set corresponding to the program identifier collected by the video recording terminals and selects from it the playing video data corresponding to the playing view angle parameter, or, alternatively, looks up the recorded video data set corresponding to the program identifier in a cached database and selects the playing video data corresponding to the playing view angle parameter from that set.
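The two lookup paths in step S102 (live feed versus cached database) can be sketched like this. The dictionary-based cache and the variable names are assumptions made for the example.

```python
def get_recorded_set(program_id, live_feeds, cache):
    """Prefer the recorded video data set arriving in real time for the
    program; otherwise fall back to the cached database keyed by program
    identifier, as described for step S102."""
    if program_id in live_feeds:
        return live_feeds[program_id]
    if program_id in cache:
        return cache[program_id]
    raise KeyError(f"no recorded video data for program {program_id!r}")

live = {"gala": {0: "cam0", 30: "cam1"}}      # real-time multi-view feeds
cached = {"rerun": {0: "old0"}}                # previously stored programs
```

The live path corresponds to 3D live broadcast, the cached path to recorded-broadcast playback of the same program.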
S103, the server carries out three-dimensional synthesis processing on the played video data, acquires a first storage address of the played video data after the three-dimensional synthesis processing, and sends the first storage address to the video playing terminal;
it is understood that, since the video data includes a plurality of frames of image data, the three-dimensional composition processing of the video data is actually three-dimensional composition processing of an image. The processing process of the existing three-dimensional image synthesis algorithm mainly comprises the steps of image cutting, optimal illumination surface searching, light source direction estimation and image synthesis. Currently, there are several common 3D synthesis techniques: 1) the color difference type 3D stereo imaging has low technical difficulty and low cost, but the 3D image quality effect is not ideal, and the image and the picture edge are easy to color cast; 2) the shutter type 3D technology has relatively more resources, great propaganda and promotion force of manufacturers, excellent 3D effect and high price of shutter glasses; 3) the polarized light 3D technology has the advantages that polarized light glasses are low in price, excellent in 3D effect and large in market share, but installation and debugging are complex, cost is not low, picture resolution is reduced by half, and full high definition is difficult to achieve; 4) the naked eye type 3D technology can ensure the resolution and the light transmittance, the existing design framework is not influenced, the 3D display effect is excellent, the technology is still developed, and the product is immature; 5) holography is a perfect 3D technology in theory, and 3D images with different angles can be obtained when the images are viewed from different angles. The hologram can be applied to nondestructive industrial inspection, ultrasonic holography, holographic microscopy, holographic memory, holographic film and television. However, due to the complexity of the technology, holography has not been commercially applied in the above-mentioned fields. 
In the embodiment of the present invention, any one or more of the above 3D synthesis techniques may be used to perform three-dimensional synthesis processing on the acquired playing video data.
The playing video data comprises the recorded video data corresponding to the playing view angle parameter and the recorded video data corresponding to the recording view angle parameters adjacent to it.
In specific implementation, the server applies a three-dimensional synthesis technique to the multiple video streams, namely the stream from the video recording device at the selected view angle and the streams from the recording devices on its two sides. It performs data conversion on these streams to obtain parallel data, performs 3D synthesis on the parallel data, stores the synthesized data, generates a first storage address, and provides the first storage address to the video playing terminal. Data conversion and 3D synthesis are carried out frame by frame: the current frame of image data corresponding to the playing view angle parameter and the current frames corresponding to the view angle parameters on its left and right sides are converted and synthesized, the next frame is then processed in the same manner, and so on until all the playing video data has been processed.
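The color-difference (anaglyph) technique listed above is the simplest way to illustrate the frame-by-frame synthesis loop described here. The sketch below is a minimal, hypothetical illustration, assuming frames arrive as rows of (r, g, b) pixel tuples; it is not the embodiment's actual synthesis algorithm, which is not specified at code level.

```python
def anaglyph_pixel(left_rgb, right_rgb):
    """Red channel taken from the left-eye view, green and blue from the right."""
    return (left_rgb[0], right_rgb[1], right_rgb[2])

def anaglyph_frame(left, right):
    """Combine one left-view frame and one right-view frame (each a list of
    rows of (r, g, b) pixel tuples) into a single red/cyan anaglyph frame."""
    return [[anaglyph_pixel(lp, rp) for lp, rp in zip(lrow, rrow)]
            for lrow, rrow in zip(left, right)]

def compose_stream(left_frames, right_frames):
    """Process the two streams frame by frame, in order, until all frames
    of the playing video data have been handled, as described above."""
    return [anaglyph_frame(lf, rf) for lf, rf in zip(left_frames, right_frames)]
```

A real deployment would operate on decoded frame buffers from the adjacent camera streams; the point here is the per-frame processing loop, not the pixel arithmetic.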
And S104, the video playing terminal plays the played video data after the three-dimensional synthesis processing based on the first storage address.
In specific implementation, the video playing terminal retrieves the playing video data at the first storage address and loads and presents it using image display and audio playing techniques.
In the embodiment of the invention, when the video playing terminal detects a three-dimensional video playing signal input for a target program, it acquires the playing view angle parameter carried by the signal and the program identifier of the target program, and sends the server a video acquisition request carrying both. On receiving the request, the server acquires the playing video data based on the program identifier and the playing view angle parameter, performs three-dimensional synthesis on it, generates a first storage address for the synthesized playing video data, and provides that address to the video playing terminal, which then plays the synthesized playing video data based on the first storage address. Since the terminal generates the request, the server looks up the playing video data and performs 3D synthesis based on the request, and the terminal then plays the result, the 3D video is played from the view angle selected by the user. This enriches the forms of video playing, strengthens the interaction between the video playing device and the user, and improves the user experience.
Referring to fig. 2, a schematic flow chart of another video playing method according to an embodiment of the present invention is shown, described from the side of both the video playing terminal and the server. As shown in fig. 2, the method of the embodiment of the present invention may include the following steps S201 to S213.
S201, when a video playing terminal detects a three-dimensional video playing signal input aiming at a target program, the video playing terminal acquires a playing visual angle parameter carried by the three-dimensional video playing signal and a program identifier of the target program, and sends a video acquiring request carrying the program identifier and the playing visual angle parameter to a server;
It is to be understood that the playing view angle parameter may be one of the view angle parameters of the target program selectable by the user, or a default view angle parameter used when the user selects none, that is, a preset view angle parameter (e.g., the main view angle). The playing view angle parameter is determined by a video recording terminal: at least one video recording terminal records at the recording site of the target program, each with its own recording view angle parameter, and a recording view angle parameter becomes a playing view angle parameter once sent to the video playing terminal. For example, as shown in fig. 3, the recording site of a target program includes 7 video recording terminals (cameras), each with one recording view angle parameter; that is, the target program offers 7 selectable playing view angle parameters, and the recording view angle parameter of camera No. 4, located directly in front among the 7 cameras, may serve as the default view angle parameter.
The program identifier is used for uniquely identifying the target program, and may be the name of the target program, or the broadcast channel and broadcast time of the target program.
In specific implementation, when the user performs a touch operation on the video playing interface, inputs voice through the terminal's microphone, or inputs a gesture through the terminal's camera, the video playing terminal recognizes the received operation as a three-dimensional video playing signal for a target program, parses the signal, extracts the playing view angle parameter and the program identifier of the target program carried in it, generates a video acquisition request from the extracted information, and sends the request to the server. The three-dimensional video playing signal may be a three-dimensional live broadcast signal or a three-dimensional recorded playback signal. It should be noted that the video playing terminal can support both a 3D playing mode and a 2D playing mode; in one feasible implementation, the three-dimensional video playing signal is produced by selecting the touch key of the 3D playing mode.
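As one way to picture step S201, the sketch below builds such a video acquisition request. All field names, the JSON encoding, and the default view value A4 are illustrative assumptions; the embodiment does not specify a wire format.

```python
import json

# Field names and the default view value are illustrative assumptions only.
DEFAULT_VIEW = "A4"  # main (front-center) view used when the user selects none

def build_video_request(program_id, play_view=None):
    """Build the video acquisition request carrying the program identifier
    and the playing view angle parameter."""
    return json.dumps({
        "type": "acquire",
        "program_id": program_id,  # uniquely identifies the target program
        "play_view": play_view or DEFAULT_VIEW,
    })
```

The server side would parse this request, look up the recorded video data set for `program_id`, and match `play_view` against the recording view angle parameters.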
Here, 3D is a concept relative to the 2D plane, involving three dimensions: height, width, and depth. The world humans live in is three-dimensional, and objects observed in the real world likewise have three dimensions. A 3D image matches the scenes people are accustomed to in real life and is therefore more stereoscopic and lifelike; its stereoscopic depth of field gives the viewer a strong sense of immersion; and the characteristics of 3D images lend themselves to strong visual impact, for example in live sports events, live concerts, and grand movie scenes.
S202, the server receives the video acquisition request, acquires the program identification and the playing view angle parameter in the video acquisition request, and receives a recorded video data set corresponding to the program identification acquired by a video recording terminal in real time;
it can be understood that the video recording terminal collects and records video data in real time at a program recording site, and sends the collected data to the server for storage in real time. That is, the recorded video data is live data.
In specific implementation, after receiving a video acquisition request sent by the video playing terminal, the server parses the request, extracts the program identifier and the playing view angle parameter it carries, and then either searches the received recorded video data for the recorded video data set corresponding to the program identifier, or requests a real-time recorded video data set from the video recording terminals corresponding to that identifier.
For example, if the target program is identified as "news simulcast", the server searches for live data of "news simulcast" from the received multiple live video data, or directly requests the live data from the video recording terminal of "news simulcast".
S203, the server acquires recording visual angle parameters corresponding to each recording video data in the recording video data set to generate a recording visual angle parameter set;
For example, if the recording view angle parameters corresponding to cameras No. 1, 2, 3, …, 7 shown in fig. 3 are A1, A2, A3, …, A7 respectively, then the recording view angle parameter set contains these 7 recording view angle parameters. It should be noted that each recording view angle parameter corresponds to a camera identifier, which may include the camera model, number, LOGO, and the like.
S204, the server sends the recording view angle parameter set to the video playing terminal;
s205, the video playing terminal receives and displays the recording visual angle parameter set;
It can be understood that the video playing terminal may display each recording view angle parameter in the received set according to a preset display rule, for example in tabular form, so that the user can select the view angle to watch from the displayed recording view angle parameters.
S206, the server acquires a first recording visual angle parameter matched with the playing visual angle parameter and a second recording visual angle parameter which is adjacent to the first recording visual angle parameter left and right from the recording visual angle parameter set;
In specific implementation, the server traverses the recording view angle parameters in the set and matches each against the playing view angle parameter. If a match succeeds, the matched parameter is taken as the first recording view angle parameter; if it fails, the traversal continues with the next parameter until a match is found. The recording view angle parameters adjacent to the first on its left and right sides are taken as the second recording view angle parameters. For example, as shown in fig. 3, if the selected playing view angle parameter is A2, corresponding to camera No. 2, then the recording view angle parameters A1 and A3 of cameras No. 1 and No. 3 serve as the second recording view angle parameters.
The server may also analyze each recording view angle parameter in the set in advance, matching the left and right neighbors of each view angle parameter and caching the result, so that when searching for the second recording view angle parameters they can be fetched quickly from the cache.
For example, for the cameras in fig. 3, analyzing the view angle parameter of each camera yields the positional relationships of the recording view angle parameters shown in Table 1, where the current view angle parameter corresponds to the first recording view angle parameter and the left-side and right-side view angle parameters correspond to the second recording view angle parameters.
TABLE 1

Current view angle parameter | Left side view angle parameter | Right side view angle parameter
A1 | A2 | None
A2 | A3 | A1
A3 | A4 | A2
A4 | A5 | A3
A5 | A6 | A4
A6 | A7 | A5
A7 | None | A6
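Table 1 amounts to a neighbor lookup over the ordered camera row. A minimal sketch, assuming the view angle parameters are simply ordered as the cameras are arranged:

```python
# View angle parameters in camera order; per Table 1, A1 has no right-side
# neighbor and A7 has no left-side neighbor.
VIEWS = ["A1", "A2", "A3", "A4", "A5", "A6", "A7"]

def neighbors(current):
    """Return the (left, right) second recording view angle parameters
    adjacent to the current (first) recording view angle parameter,
    or None at the ends of the camera row."""
    i = VIEWS.index(current)
    left = VIEWS[i + 1] if i + 1 < len(VIEWS) else None
    right = VIEWS[i - 1] if i > 0 else None
    return left, right
```

The full mapping could be precomputed once and cached, as the preceding paragraphs suggest, so that the second recording view angle parameters are fetched without re-traversal.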
S207, the server acquires target recorded video data corresponding to the first recorded view angle parameter and the second recorded view angle parameter from the recorded video data set, and the target recorded video data is used as playing video data;
it can be understood that, since the recorded video data corresponds to the recorded view angle parameters one to one, the corresponding recorded video data, i.e., the played video data, can be quickly acquired based on the found first recorded view angle parameter and the second recorded view angle parameter.
In an optional embodiment, as shown in fig. 4, after the server receives a recorded video data set corresponding to the program identifier collected by the video recording terminal in real time, the method further includes:
s301, the server converts the recorded video data set into a code stream data set for storage;
It can be understood that converting the recorded video data set into a code stream data set means transcoding: a video stream that has already been compressed and encoded is converted into another video stream to suit different network bandwidths, different terminal processing capabilities, and different user requirements. Transcoding is essentially a process of decoding and then re-encoding, so the code streams before and after conversion may or may not conform to the same video coding standard. As shown in fig. 3, after the server receives the recorded video data, the data is stored in the database DB through video transcoding.
The server selects the playing video data corresponding to the playing view angle parameter from the recorded video data set, and the method comprises the following steps:
s302, the server selects the playing code stream data corresponding to the playing view angle parameter from the code stream data set.
It can be understood that after the recorded video data has been transcoded and stored, the video source takes the form of a code stream data set; the code stream data corresponding to the first recording view angle parameter and the second recording view angle parameters is then looked up in that set and used as the playing code stream data.
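Transcoding into several code streams typically yields a bitrate ladder from which one rendition is served per client. The ladder below is a hypothetical example used only to illustrate adapting to different network bandwidths; the embodiment does not prescribe concrete renditions or bitrates.

```python
# Hypothetical bitrate ladder; actual renditions depend on codec and content.
PROFILES = [
    (5_000_000, "1080p"),  # minimum sustained bandwidth (bit/s) -> rendition
    (2_500_000, "720p"),
    (1_000_000, "480p"),
]

def pick_rendition(bandwidth_bps):
    """Pick the highest rendition the client's measured bandwidth can
    sustain, falling back to a lowest-quality stream otherwise."""
    for min_bw, name in PROFILES:
        if bandwidth_bps >= min_bw:
            return name
    return "360p"
```

A terminal with limited processing capability could be mapped to a lower rendition by the same kind of table.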
S208, the server carries out three-dimensional synthesis processing on the played video data, acquires a first storage address of the played video data after the three-dimensional synthesis processing, and sends the first storage address to the video playing terminal;
It is understood that, since video data consists of multiple frames of image data, three-dimensional synthesis of video data is in practice three-dimensional synthesis of images. Existing three-dimensional image synthesis algorithms mainly involve image segmentation, searching for the optimal illumination surface, light-source direction estimation, and image composition. Several 3D synthesis techniques are currently in common use: 1) color-difference (anaglyph) 3D imaging is technically simple and inexpensive, but the image quality is not ideal and picture edges are prone to color cast; 2) shutter-type 3D has comparatively abundant content and strong promotion by manufacturers and produces an excellent 3D effect, but shutter glasses are expensive; 3) polarized-light 3D uses inexpensive polarized glasses, produces an excellent 3D effect, and holds a large market share, but installation and debugging are complex, the cost is considerable, and the picture resolution is halved, making full high definition difficult to achieve; 4) naked-eye (glasses-free) 3D preserves resolution and light transmittance, leaves existing design frameworks unaffected, and displays an excellent 3D effect, but the technology is still developing and products are immature; 5) holography is, in theory, the ideal 3D technique, in that a different 3D image is obtained when viewing from each different angle. Holography could be applied to non-destructive industrial inspection, ultrasonic holography, holographic microscopy, holographic storage, and holographic film and television, but owing to its complexity it has not yet been commercially applied in these fields.
In the embodiment of the present invention, any one or more of the above 3D synthesis techniques may be used to perform three-dimensional synthesis processing on the acquired playing video data.
The playing video data comprises the recorded video data corresponding to the playing view angle parameter and the recorded video data corresponding to the recording view angle parameters adjacent to it.
In specific implementation, the server applies a three-dimensional synthesis technique to the multiple video streams, namely the stream from the video recording device at the selected view angle and the streams from the recording devices on its two sides. It performs data conversion on these streams to obtain parallel data, performs 3D synthesis on the parallel data, stores the synthesized data, generates a first storage address, and provides the first storage address to the video playing terminal. Data conversion and 3D synthesis are carried out frame by frame: the current frame of image data corresponding to the playing view angle parameter and the current frames corresponding to the view angle parameters on its left and right sides are converted and synthesized, the next frame is then processed in the same manner, and so on until all the playing video data has been processed.
S209, the video playing terminal plays the played video data after the three-dimensional synthesis processing based on the first storage address.
In specific implementation, the video playing terminal retrieves the playing video data at the first storage address and loads and presents it using image display and audio playing techniques.
S210, when the video playing terminal detects a view angle parameter switching signal input aiming at the recording view angle parameter set, the video playing terminal acquires a view angle switching parameter carried by the view angle parameter switching signal and sends a video switching request carrying the view angle switching parameter to the server;
It can be understood that the view switching parameter is a view angle parameter selected by the user, on the display interface, from the set of playable recording view angle parameters of the target program. If the view switching parameter is the same as the playing view angle parameter, the target program continues to play at the current view; if it differs, a view angle parameter switching signal is generated.
In specific implementation, when the user performs a touch operation on the video playing interface, inputs voice through the terminal's microphone, or inputs a gesture through the terminal's camera, the video playing terminal recognizes the received operation as a view angle parameter switching signal for the target program, parses the signal, extracts the view switching parameter it carries, generates a video switching request from the extracted information, and sends the request to the server.
Optionally, before generating the video switching request, the terminal determines whether the view switching parameter is the same as the playing view angle parameter, and generates a video switching request carrying the view switching parameter only when the two differ.
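The optional check above can be expressed as a one-line guard on the terminal. A minimal sketch, with field names assumed as before:

```python
def maybe_switch_request(view_switch_param, playing_view_param):
    """Generate a video switching request only when the selected view
    differs from the one currently playing; otherwise return None and
    keep playing at the current view."""
    if view_switch_param == playing_view_param:
        return None
    return {"type": "switch", "switch_view": view_switch_param}
```

This avoids a round trip to the server when the user re-selects the view already being played.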
S211, the server receives the video switching request, acquires the view switching parameter in the video switching request, and acquires switching video data based on the view switching parameter;
it can be understood that the switching video data includes recorded video data corresponding to the view switching parameter and recorded video data corresponding to the recording view parameters adjacent to the left and right of the view switching parameter.
In specific implementation, after receiving a video switching request from the video playing terminal, the server parses the request and extracts the view switching parameter it carries. It then either receives, in real time, the recorded video data set of the target program collected by the video recording terminals and selects from it the switching video data corresponding to the view switching parameter, or looks up the target program's recorded video data set in the cached database and selects the switching video data from there.
Optionally, after the server receives a recorded video data set corresponding to the program identifier acquired by the video recording terminal in real time, the server converts the recorded video data set into a code stream data set for storage, so as to adapt to different network bandwidths, different terminal processing capabilities and different user requirements.
S212, the server performs three-dimensional synthesis processing on the switching video data, acquires a second storage address of the switching video data after the three-dimensional synthesis processing, and sends the second storage address to the video playing terminal;
In specific implementation, the server applies a three-dimensional synthesis technique to the multiple video streams, namely the stream from the video recording terminal corresponding to the view switching parameter and the streams from the recording terminals on its two sides. It performs data conversion on these streams to obtain parallel data, performs 3D synthesis on the parallel data, stores the synthesized data, generates a second storage address, and provides the second storage address to the video playing terminal. Data conversion and 3D synthesis are carried out frame by frame: the current frame of image data corresponding to the view switching parameter and the current frames corresponding to the view angle parameters on its left and right sides are converted and synthesized, the next frame is then processed in the same manner, and so on until all the switching video data has been processed.
Note that the three-dimensional synthesis of the switching video data may use any one, or a combination, of the 3D synthesis techniques described in S208.
S213, the video playing terminal plays the switching video data after the three-dimensional synthesis processing based on the second storage address.
In specific implementation, the video playing terminal retrieves the switching video data at the second storage address and loads and presents it using image display and audio playing techniques.
In the embodiment of the invention, when the video playing terminal detects a three-dimensional video playing signal input for a target program, it acquires the playing view angle parameter carried by the signal and the program identifier of the target program, and sends the server a video acquisition request carrying both. On receiving the request, the server acquires the playing video data based on the program identifier and the playing view angle parameter, performs three-dimensional synthesis on it, generates a first storage address for the synthesized playing video data, and provides that address to the video playing terminal, which then plays the synthesized playing video data based on the first storage address. Since the terminal generates the request, the server looks up the playing video data and performs 3D synthesis based on the request, and the terminal then plays the result, the 3D video is played from the view angle selected by the user. This enriches the forms of video playing, strengthens the interaction between the video playing device and the user, and improves the user experience.
Referring to fig. 5, a flow chart of another video playing method according to an embodiment of the invention is shown, described from the video playing terminal side. As shown in fig. 5, the method of the embodiment of the present invention may include the following steps S401 to S403.
S401, when a three-dimensional video playing signal input aiming at a target program is detected, acquiring a playing view angle parameter carried by the three-dimensional video playing signal and a program identifier of the target program;
It is to be understood that the playing view angle parameter may be one of the view angle parameters of the target program selectable by the user, or a default view angle parameter used when the user selects none, that is, a preset view angle parameter (e.g., the main view angle). The playing view angle parameter is determined by a video recording terminal: at least one video recording terminal records at the recording site of the target program, each with its own recording view angle parameter, and a recording view angle parameter becomes a playing view angle parameter once sent to the video playing terminal. Optionally, the playing view angle parameters may be displayed by the video playing terminal, for example in list form.
The program identifier is used for uniquely identifying the target program, and may be the name of the target program, or the broadcast channel and broadcast time of the target program.
In specific implementation, when the user performs a touch operation on the video playing interface, inputs voice through the terminal's microphone, or inputs a gesture through the terminal's camera, the video playing terminal recognizes the received operation as a three-dimensional video playing signal for a target program and, by parsing the signal, extracts the playing view angle parameter and the program identifier of the target program carried in it. The three-dimensional video playing signal may be a three-dimensional live broadcast signal or a three-dimensional recorded playback signal.
It should be noted that the video playing terminal can simultaneously support a 3D playing mode and a 2D playing mode, and in a feasible implementation manner, the acquired three-dimensional video playing signal is determined by selecting a touch key of the 3D playing mode.
Here, 3D is a concept relative to the 2D plane, involving three dimensions: height, width, and depth. The world humans live in is three-dimensional, and objects observed in the real world likewise have three dimensions. A 3D image matches the scenes people are accustomed to in real life and is therefore more stereoscopic and lifelike; its stereoscopic depth of field gives the viewer a strong sense of immersion; and the characteristics of 3D images lend themselves to strong visual impact, for example in live sports events, live concerts, and grand movie scenes.
S402, sending a video acquisition request carrying the program identifier and the playing view angle parameter to a server, so that the server acquires playing video data based on the program identifier and the playing view angle parameter, performs three-dimensional synthesis on the playing video data, and acquires a first storage address of the playing video data after the three-dimensional synthesis;
It is understood that, since video data consists of multiple frames of image data, three-dimensional synthesis of video data is in practice three-dimensional synthesis of images. Existing three-dimensional image synthesis algorithms mainly involve image segmentation, searching for the optimal illumination surface, light-source direction estimation, and image composition. Several 3D synthesis techniques are currently in common use: 1) color-difference (anaglyph) 3D imaging is technically simple and inexpensive, but the image quality is not ideal and picture edges are prone to color cast; 2) shutter-type 3D has comparatively abundant content and strong promotion by manufacturers and produces an excellent 3D effect, but shutter glasses are expensive; 3) polarized-light 3D uses inexpensive polarized glasses, produces an excellent 3D effect, and holds a large market share, but installation and debugging are complex, the cost is considerable, and the picture resolution is halved, making full high definition difficult to achieve; 4) naked-eye (glasses-free) 3D preserves resolution and light transmittance, leaves existing design frameworks unaffected, and displays an excellent 3D effect, but the technology is still developing and products are immature; 5) holography is, in theory, the ideal 3D technique, in that a different 3D image is obtained when viewing from each different angle. Holography could be applied to non-destructive industrial inspection, ultrasonic holography, holographic microscopy, holographic storage, and holographic film and television, but owing to its complexity it has not yet been commercially applied in these fields.
In the embodiment of the present invention, any one or more of the above 3D synthesis techniques may be used to perform three-dimensional synthesis processing on the acquired playing video data.
The playing video data comprises the recorded video data corresponding to the playing view angle parameter and the recorded video data corresponding to the recording view angle parameters adjacent to it.
In specific implementation, the video playing terminal generates a video acquisition request from the extracted information and sends it to the server, so that the server applies a three-dimensional synthesis technique to the multiple video streams, namely the stream from the video recording terminal at the selected view angle and the streams from the recording terminals on its two sides. The server performs data conversion on these streams to obtain parallel data, performs 3D synthesis on the parallel data, stores the synthesized data, generates a first storage address, and provides the first storage address to the video playing terminal. Data conversion and 3D synthesis are carried out frame by frame: the current frame of image data corresponding to the playing view angle parameter and the current frames corresponding to the view angle parameters on its left and right sides are converted and synthesized, the next frame is then processed in the same manner, and so on until all the playing video data has been processed.
S403, playing the playing video data after the three-dimensional synthesis processing based on the first storage address.
In specific implementation, the video playing terminal retrieves the playing video data corresponding to the first storage address, and loads and displays it using image display and audio playing technologies.
In the embodiment of the invention, when the video playing terminal detects a three-dimensional video playing signal input for a target program, it obtains the playing view angle parameter carried by the signal and the program identifier of the target program, and sends a video acquisition request carrying the program identifier and the playing view angle parameter to the server, so that the server acquires playing video data based on the program identifier and the playing view angle parameter, performs three-dimensional synthesis processing on it, and returns a first storage address of the processed playing video data; the terminal then plays the processed playing video data based on the first storage address. Because the video acquisition request is generated from the user's operation and sent to the server, and the video is played based on the server's processing result, the 3D video is played at the view angle selected by the user, which enriches the forms of video playing, strengthens the interactivity between the video playing device and the user, and improves the user experience.
Referring to fig. 6, a flow chart of another video playing method according to an embodiment of the present invention is shown. The description is given from the video playing terminal side; as shown in fig. 6, the method of the embodiment of the present invention may include the following steps S501 to S506.
S501, when a three-dimensional video playing signal input aiming at a target program is detected, obtaining a playing view angle parameter carried by the three-dimensional video playing signal and a program identifier of the target program;
It is to be understood that the playing view angle parameter may be one of the view angle parameters selectable by the user for the target program, or a default view angle parameter used when the user makes no selection, that is, a preset view angle parameter (e.g., the main view angle parameter). The playing view angle parameter is determined by a video recording terminal: at least one video recording terminal records at the recording site of the target program, each video recording terminal uses one recording view angle parameter, and a recording view angle parameter becomes a playing view angle parameter once sent to the video playing terminal. For example, as shown in fig. 3, a target program recording site includes 7 video recording terminals (cameras), each with its own recording view angle parameter; that is, the target program offers 7 selectable playing view angle parameters, and the recording view angle parameter of camera No. 4, located directly in front among the 7 cameras, can serve as the default view angle parameter.
The program identifier is used for uniquely identifying the target program, and may be the name of the target program, or the broadcast channel and broadcast time of the target program.
In specific implementation, when the user performs a touch operation on the video playing interface of the video playing terminal, inputs voice information through the terminal's receiver, or inputs a gesture through the terminal's camera, the video playing terminal recognizes the received operation as a three-dimensional video playing signal for a target program, parses the signal, and extracts the playing view angle parameter it carries and the program identifier of the target program. The three-dimensional video playing signal may be a three-dimensional video live-broadcast signal or a three-dimensional video replay signal.
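A minimal sketch of how the terminal might extract the carried information and wrap it into a video acquisition request; the signal and request field names (`type`, `program_id`, `view_param`) are illustrative assumptions, not a format defined by the source:

```python
def build_acquisition_request(signal: dict) -> dict:
    """Extract the playing view angle parameter and the program
    identifier carried by a three-dimensional video playing signal
    (live or replay) and wrap them into a video acquisition request
    to be sent to the server."""
    if signal.get("type") not in ("3d_live", "3d_replay"):
        raise ValueError("not a three-dimensional video playing signal")
    return {
        "program_id": signal["program_id"],
        # fall back to the preset (main) view angle when none was selected
        "view_param": signal.get("view_param", "main"),
    }
```

The default branch mirrors the text: when the user selects no view angle, the preset view angle parameter is used.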
It should be noted that the video playing terminal can support both a 3D playing mode and a 2D playing mode; in one feasible implementation, the three-dimensional video playing signal is generated by selecting a touch key for the 3D playing mode.
Here, 3D is a concept relative to the 2D plane and refers to three dimensions: height, width, and depth. The world humans live in is a three-dimensional space, and objects observed in the real world likewise have three dimensions. A 3D image matches the scenes people are accustomed to in real life and is more stereoscopic and vivid; its stereoscopic depth of field gives the viewer an immersive, on-the-scene feeling; moreover, these characteristics of 3D images can deliver strong visual impact, for example in live sports events, live concerts, and grand movie scenes.
S502, sending a video acquisition request carrying the program identifier and the playing view angle parameter to a server, so that the server acquires playing video data based on the program identifier and the playing view angle parameter, performs three-dimensional synthesis on the playing video data, and acquires a first storage address of the playing video data after the three-dimensional synthesis;
It is understood that, as described above, since video data consists of multiple frames of image data, three-dimensional synthesis of video data is in practice three-dimensional synthesis of images, and any of the common 3D synthesis techniques already enumerated (color-difference, shutter-type, polarized-light, naked-eye, or holographic) may be applied.
In the embodiment of the present invention, any one or more of the above 3D synthesis techniques may be used to perform three-dimensional synthesis processing on the acquired playing video data.
The playing video data comprises the recorded video data corresponding to the playing view angle parameter and the recorded video data corresponding to the recording view angle parameters adjacent to the playing view angle parameter.
In specific implementation, the video playing terminal generates a video acquisition request based on the extracted information and sends it to the server. The server then applies a three-dimensional synthesis processing technique: it performs data conversion on the multiple video streams from the video recording terminal of the view angle selected in the request and the video recording terminals on its two sides to obtain parallel data of the multiple video streams, performs 3D synthesis on the parallel data, stores the synthesized data, generates a first storage address, and provides that address to the video playing terminal. The data conversion and 3D synthesis of the multiple video streams proceed frame by frame: the current frame of image data corresponding to the playing view angle parameter and the current frames corresponding to the view angle parameters on its left and right are converted and 3D-synthesized, then the next frame is processed in the same manner, and so on until all the playing video data has been processed.
S503, playing the playing video data after the three-dimensional synthesis processing based on the first storage address;
In specific implementation, the video playing terminal retrieves the playing video data corresponding to the first storage address, and loads and displays it using image display and audio playing technologies.
S504, when detecting a view angle parameter switching signal input for a recording view angle parameter set sent by the server, obtaining a view angle switching parameter carried by the view angle parameter switching signal;
It can be understood that the view angle switching parameter is a view angle parameter selected by the user, on the display interface for the target program, from the set of recording view angle parameters available for playing. If the view angle switching parameter is the same as the playing view angle parameter, the target program continues to play at the current view angle; if it differs from the playing view angle parameter, a view angle parameter switching signal is generated.
In specific implementation, when the user performs a touch operation on the video playing interface of the video playing terminal, inputs voice information through the terminal's receiver, or inputs a gesture through the terminal's camera, the video playing terminal recognizes the received operation as a view angle parameter switching signal for the target program, parses the signal, extracts the view angle switching parameter it carries, generates a video switching request based on the extracted information, and sends the request to the server.
Optionally, before generating the video switching request, it is determined whether the view angle switching parameter is the same as the playing view angle parameter; the video switching request carrying the view angle switching parameter is generated only when the two differ.
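The optional check above can be sketched as follows; returning `None` to mean "keep playing at the current view angle" is an illustrative convention of this sketch:

```python
def make_switch_request(switch_param: str, playing_param: str):
    """Generate a video switching request only when the selected view
    angle switching parameter differs from the view angle currently
    being played; otherwise return None, meaning the target program
    keeps playing at the current view angle."""
    if switch_param == playing_param:
        return None
    return {"switch_view_param": switch_param}
```

Skipping the request when the parameters match avoids a round trip to the server for a no-op switch.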
S505, sending a video switching request carrying the view switching parameter to the server, so that the server obtains switching video data based on the view switching parameter, and obtains a second storage address of the switching video data after three-dimensional synthesis processing after the switching video data is subjected to three-dimensional synthesis processing;
it can be understood that the switching video data includes recorded video data corresponding to the view switching parameter and recorded video data corresponding to the recording view parameters adjacent to the left and right of the view switching parameter.
In specific implementation, the video playing terminal generates a video switching request based on the extracted information and sends it to the server. The server then applies a three-dimensional synthesis processing technique: it performs data conversion on the multiple video streams from the video recording terminal corresponding to the view angle switching parameter and the video recording terminals on its two sides to obtain parallel data of the multiple video streams, performs 3D synthesis on the parallel data, stores the synthesized data, generates a second storage address, and provides that address to the video playing terminal. The data conversion and 3D synthesis of the multiple video streams proceed frame by frame: the current frame of image data corresponding to the view angle switching parameter and the current frames corresponding to the view angle parameters on its left and right are converted and 3D-synthesized, then the next frame is processed in the same manner, and so on until all the switching video data has been processed.
It should be noted that the three-dimensional synthesis processing of the switching video data may use any one of the above-mentioned 3D synthesis techniques, or several of them in combination.
S506, playing the switching video data after the three-dimensional synthesis processing based on the second storage address.
In specific implementation, the video playing terminal calls the switching video data corresponding to the second storage address, and loads and displays the switching video data by adopting an image display technology and an audio playing technology.
In the embodiment of the invention, when the video playing terminal detects a three-dimensional video playing signal input for a target program, it obtains the playing view angle parameter carried by the signal and the program identifier of the target program, and sends a video acquisition request carrying the program identifier and the playing view angle parameter to the server, so that the server acquires playing video data based on the program identifier and the playing view angle parameter, performs three-dimensional synthesis processing on it, and returns a first storage address of the processed playing video data; the terminal then plays the processed playing video data based on the first storage address. Because the video acquisition request is generated from the user's operation and sent to the server, and the video is played based on the server's processing result, the 3D video is played at the view angle selected by the user, which enriches the forms of video playing, strengthens the interactivity between the video playing device and the user, and improves the user experience.
Referring to fig. 7, a flow chart of another video playing method according to an embodiment of the invention is shown. The description is given from the server side; as shown in fig. 7, the method according to the embodiment of the present invention may include the following steps S601 to S603.
S601, receiving a video acquisition request sent by a video playing terminal when detecting a three-dimensional video playing signal input aiming at a target program, and acquiring a program identifier and a playing view angle parameter of the target program in the video acquisition request;
It is to be understood that the playing view angle parameter may be one of the view angle parameters selectable by the user for the target program, or a default view angle parameter used when the user makes no selection, that is, a preset view angle parameter (e.g., the main view angle parameter). The playing view angle parameter is determined by a video recording terminal: at least one video recording terminal records at the recording site of the target program, each video recording terminal uses one recording view angle parameter, and a recording view angle parameter becomes a playing view angle parameter once sent to the video playing terminal. Optionally, the playing view angle parameters may be displayed by the video playing terminal, for example in list form.
The program identifier is used for uniquely identifying the target program, and may be the name of the target program, or the broadcast channel and broadcast time of the target program.
In specific implementation, when the user performs a touch operation on the video playing interface of the video playing terminal, inputs voice information through the terminal's receiver, or inputs a gesture through the terminal's camera, the video playing terminal is triggered to generate a video acquisition request carrying the playing view angle parameter and the program identifier of the target program and to send it to the server; upon receiving the request, the server parses it and extracts the program identifier and the playing view angle parameter it carries.
S602, acquiring playing video data based on the program identifier and the playing view angle parameter;
It is understood that the playing video data is video data of the target program, and its source may be a video stream or a code stream.
In specific implementation, after receiving the video acquisition request sent by the video playing terminal, the server parses the request and extracts the program identifier and the playing view angle parameter it carries. It then either receives, in real time, the recorded video data set corresponding to the program identifier collected by the video recording terminals and selects from it the playing video data corresponding to the playing view angle parameter, or searches a cached database for the recorded video data set corresponding to the program identifier and selects the playing video data corresponding to the playing view angle parameter from that set.
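In outline, this server-side lookup might look as follows; the cache dict, the `receive_live` callback, and the data shapes are assumptions made for the sketch:

```python
def get_playing_video_data(request: dict, cache: dict, receive_live=None):
    """Resolve a video acquisition request to the recorded video data
    set for the program, preferring the cached database and falling
    back to receiving the live set from the recording terminals, then
    select the stream matching the playing view angle parameter."""
    program_id = request["program_id"]
    dataset = cache.get(program_id)
    if dataset is None and receive_live is not None:
        dataset = receive_live(program_id)   # real-time recorded data set
        cache[program_id] = dataset
    if dataset is None:
        raise KeyError(f"no recorded video data for program {program_id!r}")
    return dataset[request["view_param"]]
```

The two branches correspond to the two alternatives in the text: real-time reception versus lookup in the cached database.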
S603, performing three-dimensional synthesis processing on the played video data, acquiring a first storage address of the played video data after the three-dimensional synthesis processing, and sending the first storage address to the video playing terminal, so that the video playing terminal plays the played video data after the three-dimensional synthesis processing based on the first storage address.
It is understood that, as described above, since video data consists of multiple frames of image data, three-dimensional synthesis of video data is in practice three-dimensional synthesis of images, and any of the common 3D synthesis techniques already enumerated (color-difference, shutter-type, polarized-light, naked-eye, or holographic) may be applied.
In the embodiment of the present invention, any one or more of the above 3D synthesis techniques may be used to perform three-dimensional synthesis processing on the acquired playing video data.
The playing video data comprises the recorded video data corresponding to the playing view angle parameter and the recorded video data corresponding to the recording view angle parameters adjacent to the playing view angle parameter.
In specific implementation, the server applies a three-dimensional synthesis processing technique: it performs data conversion on the multiple video streams from the video recording device of the selected view angle and the video recording devices on its two sides to obtain parallel data of the multiple video streams, performs 3D synthesis on the parallel data, stores the synthesized data, generates a first storage address, and provides that address to the video playing terminal. The data conversion and 3D synthesis of the multiple video streams proceed frame by frame: the current frame of image data corresponding to the playing view angle parameter and the current frames corresponding to the view angle parameters on its left and right are converted and 3D-synthesized, then the next frame is processed in the same manner, and so on until all the playing video data has been processed.
In the embodiment of the invention, the server receives a video acquisition request sent by the video playing terminal upon detecting a three-dimensional video playing signal input for a target program, obtains the program identifier and the playing view angle parameter of the target program from the request, acquires playing video data based on the program identifier and the playing view angle parameter, performs three-dimensional synthesis processing on the playing video data, obtains a first storage address of the processed playing video data, and sends the first storage address to the video playing terminal so that the terminal plays the processed playing video data based on that address. Since the playing video data is queried and 3D-synthesized in response to the video playing terminal's request, and the resulting first storage address is sent back to the terminal to complete playback, the 3D video is played at the view angle selected by the user, enriching the forms of video playing.
Referring to fig. 8, a schematic flow chart of another video playing method according to an embodiment of the present invention is provided. The description is given from the server side; as shown in fig. 8, the method according to the embodiment of the present invention may include the following steps S701 to S706.
S701, receiving a video acquisition request sent by a video playing terminal when detecting a three-dimensional video playing signal input aiming at a target program, and acquiring a program identifier and a playing view angle parameter of the target program in the video acquisition request;
It is to be understood that the playing view angle parameter may be one of the view angle parameters selectable by the user for the target program, or a default view angle parameter used when the user makes no selection, that is, a preset view angle parameter (e.g., the main view angle parameter). The playing view angle parameter is determined by a video recording terminal: at least one video recording terminal records at the recording site of the target program, each video recording terminal uses one recording view angle parameter, and a recording view angle parameter becomes a playing view angle parameter once sent to the video playing terminal. For example, as shown in fig. 3, a target program recording site includes 7 video recording terminals (cameras), each with its own recording view angle parameter; that is, the target program offers 7 selectable playing view angle parameters, and the recording view angle parameter of camera No. 4, located directly in front among the 7 cameras, can serve as the default view angle parameter.
The program identifier is used for uniquely identifying the target program, and may be the name of the target program, or the broadcast channel and broadcast time of the target program.
In specific implementation, when a user performs touch operation on a video playing interface of the video playing terminal, or inputs voice information through a receiver of the video playing terminal, or inputs gesture actions through a camera of the video playing terminal, or the like, the video playing terminal is triggered to generate a video acquisition request carrying information such as playing view angle parameters and program identifiers of target programs, and the video acquisition request is sent to a server. After receiving the request, the server analyzes the request and extracts the program identifier and the playing view angle parameter carried by the request.
S702, acquiring playing video data based on the program identification and the playing view angle parameter;
In a possible implementation manner, as shown in fig. 9, the acquiring of playing video data based on the program identifier and the playing view angle parameter includes:
S801, receiving a recorded video data set corresponding to the program identifier collected by a video recording terminal in real time;
it can be understood that the video recording terminal collects and records video data in real time at a program recording site, and sends the collected data to the server for storage in real time. That is, the recorded video data is live data.
In specific implementation, after receiving the video acquisition request sent by the video playing terminal, the server parses the request, extracts the program identifier and the playing view angle parameter it carries, and either searches the received recorded video data for the recorded video data set corresponding to the program identifier, or requests the real-time recorded video data set from the video recording terminals corresponding to the program identifier.
For example, if the target program is identified as "news simulcast", the server searches for live data of "news simulcast" from the received multiple live video data, or directly requests the live data from the video recording terminal of "news simulcast".
S802, selecting the playing video data corresponding to the playing view angle parameter from the recorded video data set.
Further, as shown in fig. 10, the selecting the playing video data corresponding to the playing view parameter from the recorded video data set includes:
S901, obtaining the recording view angle parameters corresponding to each piece of recorded video data in the recorded video data set to generate a recording view angle parameter set;
For example, the recording view angle parameters corresponding to cameras No. 1 to No. 7 shown in fig. 3 are A1, A2, A3, …, A7 respectively, so the recording view angle parameter set includes these 7 recording view angle parameters. It should be noted that each recording view angle parameter corresponds to a camera identifier, which may include the camera model, number, LOGO, and the like.
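For the seven-camera example above, the recording view angle parameter set with its camera identifiers could be represented as follows; the dict-of-records shape and the numeric camera identifiers are illustrative assumptions:

```python
def build_view_parameter_set(cameras: dict) -> list:
    """Build the recording view angle parameter set from the per-camera
    recording view angle parameters; each entry keeps the camera
    identifier (e.g. its number) alongside its view angle parameter."""
    return [{"camera_id": cid, "view_param": param}
            for cid, param in sorted(cameras.items())]

# Seven cameras as in the fig. 3 example, numbered 1..7 with
# recording view angle parameters A1..A7.
cameras = {n: f"A{n}" for n in range(1, 8)}
view_set = build_view_parameter_set(cameras)
```

This is the set that step S902 would send to the video playing terminal for display.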
S902, sending the recording view parameter set to the video playing terminal so that the video playing terminal receives and displays the recording view parameter set;
It can be understood that the video playing terminal may display each recording view angle parameter in the received recording view angle parameter set according to a preset display rule, for example in tabular form, so that the user can select the view angle to watch from the displayed recording view angle parameters.
S903, acquiring, from the recording view angle parameter set, a first recording view angle parameter matched with the playing view angle parameter and second recording view angle parameters adjacent to the first recording view angle parameter on the left and right;
In specific implementation, the server traverses the recording view angle parameters in the recording view angle parameter set, matching each currently traversed parameter against the playing view angle parameter: on a successful match, the current parameter is taken as the first recording view angle parameter; on a failure, traversal continues to the next parameter, and matching is repeated until a successfully matched parameter is found. The recording view angle parameters adjacent to the first recording view angle parameter on its left and right are then taken as the second recording view angle parameters. For example, as shown in fig. 3, if the selected playing view angle parameter is the angle A2 corresponding to camera No. 2, the recording view angle parameters A1 and A3 corresponding to cameras No. 1 and No. 3 serve as the second recording view angle parameters.
The server may analyze each recording view angle parameter in the set in advance to determine the left and right neighbours of every view angle parameter and cache the result, so that when searching for the second recording view angle parameters they can be obtained quickly from the cache.
For example, for the cameras in fig. 3, analyzing the view angle parameter of each camera yields the positional relationships of the recording view angle parameters shown in table 1, where the current view angle parameter corresponds to the first recording view angle parameter and the left and right view angle parameters correspond to the second recording view angle parameters.
S904, acquiring target recorded video data corresponding to the first recording view angle parameter and the second recording view angle parameters from the recorded video data set, and taking the target recorded video data as the playing video data.
It can be understood that, since the recorded video data corresponds one-to-one to the recording view angle parameters, the corresponding recorded video data, that is, the playing video data, can be quickly acquired based on the found first and second recording view angle parameters.
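The matching of steps S903 and S904 can be sketched as follows; behaviour at the ends of the camera row is not specified by the source, so this sketch simply omits a missing neighbour rather than wrapping around:

```python
def select_view_parameters(param_set, playing_param):
    """Traverse the recording view angle parameter set, take the entry
    matching the playing view angle parameter as the first recording
    view angle parameter, and its left/right neighbours as the second
    recording view angle parameters (an end camera has only one)."""
    for i, param in enumerate(param_set):
        if param == playing_param:
            left = param_set[i - 1] if i > 0 else None
            right = param_set[i + 1] if i + 1 < len(param_set) else None
            return param, [p for p in (left, right) if p is not None]
    raise ValueError(f"no recording view angle parameter matches {playing_param!r}")
```

With the one-to-one correspondence noted above, the returned parameters index directly into the recorded video data set to yield the playing video data.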
In another possible implementation manner, as shown in fig. 11, after receiving a recorded video data set corresponding to the program identifier collected by the video recording terminal in real time, the method further includes:
S1001, converting the recorded video data set into a code stream data set for storage;
It can be understood that converting the recorded video data set into a code stream data set means transcoding: converting a video stream that has already been compression-encoded into another video stream, so as to adapt to different network bandwidths, different terminal processing capabilities, and different user requirements. Transcoding is essentially a process of decoding first and then re-encoding, so the code streams before and after conversion may or may not conform to the same video coding standard. As shown in fig. 3, after the server receives the recorded video data, it stores the data in the database DB via video transcoding.
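Such transcoding (decode, then re-encode at a different bitrate or codec) is commonly delegated to a tool such as FFmpeg. A hedged sketch that only builds the command line; the file paths, the bitrate, and the choice of H.264/AAC are assumptions for illustration, not choices made by the source:

```python
def transcode_command(src: str, dst: str, video_bitrate: str = "2M") -> list:
    """Build an FFmpeg command that decodes the recorded stream `src`
    and re-encodes it to `dst` at the given video bitrate, adapting
    it to a different bandwidth; the output need not conform to the
    same coding standard as the input."""
    return ["ffmpeg", "-y", "-i", src,
            "-c:v", "libx264", "-b:v", video_bitrate,  # re-encode video
            "-c:a", "aac", dst]                        # re-encode audio
```

The returned list can be passed to `subprocess.run(cmd, check=True)`; one such invocation per recording view angle yields the stored code stream data set.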
The selecting the playing video data corresponding to the playing view angle parameter from the recorded video data set includes:
s1002, selecting the playing code stream data corresponding to the playing view angle parameter from the code stream data set.
It can be understood that after the recorded video data is transcoded and stored, the video source takes the form of a code stream data set; the code stream data corresponding to the first recording view angle parameter and the second recording view angle parameter is then searched for in the code stream data set, and the found code stream data is used as the playing code stream data.
S703, performing three-dimensional synthesis processing on the played video data, acquiring a first storage address of the played video data after the three-dimensional synthesis processing, and sending the first storage address to the video playing terminal, so that the video playing terminal plays the played video data after the three-dimensional synthesis processing based on the first storage address;
It is understood that, since video data includes multiple frames of image data, three-dimensional synthesis processing of video data is in practice three-dimensional synthesis processing of images. The processing flow of existing three-dimensional image synthesis algorithms mainly comprises image cutting, optimal illumination surface searching, light source direction estimation, and image synthesis. Several 3D synthesis techniques are currently common: 1) anaglyph (color-difference) 3D stereo imaging, which has low technical difficulty and low cost, but the 3D image quality is not ideal and the image and picture edges are prone to color cast; 2) shutter 3D, which has relatively abundant resources and strong promotion by manufacturers, with an excellent 3D effect, but shutter glasses are expensive; 3) polarized 3D, in which polarized glasses are inexpensive, the 3D effect is excellent, and the market share is large, but installation and debugging are complex, the cost is not low, the picture resolution is halved, and full high definition is difficult to achieve; 4) glasses-free (naked-eye) 3D, which can guarantee resolution and light transmittance without affecting the existing design framework and has an excellent 3D display effect, but the technology is still developing and products are immature; 5) holography, in theory a perfect 3D technique, in which 3D images at different angles can be obtained when viewed from different angles. Holograms can be applied to non-destructive industrial inspection, ultrasonic holography, holographic microscopy, holographic memory, and holographic film and television; however, owing to the complexity of the technique, holography has not yet been commercially applied in these fields.
In the embodiment of the present invention, any one or more of the above 3D synthesis techniques may be adopted to perform three-dimensional synthesis processing on the acquired playing video data.
The playing video data comprises recording video data corresponding to playing visual angle parameters and recording video data corresponding to recording visual angle parameters adjacent to the playing visual angle parameters.
In a specific implementation, the server applies a three-dimensional synthesis processing technique to the multiple video streams from the video recording device at the selected view angle and the video recording devices on its two sides: it performs data conversion to obtain parallel data of the multiple video streams, performs 3D synthesis on the parallel data, stores the 3D-synthesized data, generates a first storage address, and provides the first storage address to the video playing terminal. The data conversion and 3D synthesis of the multiple video streams are performed frame by frame: the current image data corresponding to the playing view angle parameter and the current image data corresponding to the view angle parameters on its left and right sides are converted and 3D-synthesized, then the next frame of image data is processed in the same manner, and so on until all the playing video data has been processed.
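The frame-by-frame processing described above can be sketched as follows. This is a hedged sketch: `combine_stereo` is a placeholder for a real 3D synthesis algorithm (image cutting, light-source estimation, image synthesis), and the stream/frame representation is illustrative.

```python
# Frame-by-frame sketch of the 3D synthesis step: the centre stream
# (playing view) plus its left and right neighbour streams are taken
# as parallel per-frame data and combined, one frame at a time.

def combine_stereo(left_frame, centre_frame, right_frame):
    # Placeholder: a real implementation would run image cutting,
    # light-source direction estimation and image synthesis here.
    return (left_frame, centre_frame, right_frame)

def synthesize_3d(left_stream, centre_stream, right_stream):
    """Process the current frames of all three streams, then advance
    to the next frame, until all playing video data is handled."""
    synthesized = []
    for frames in zip(left_stream, centre_stream, right_stream):
        synthesized.append(combine_stereo(*frames))
    return synthesized

out = synthesize_3d(["L0", "L1"], ["C0", "C1"], ["R0", "R1"])
```

The `zip` over the three streams is the "parallel data" of the description: at each step exactly one current frame from each view is available to the synthesis routine.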
S704, receiving a video switching request sent by the video playback terminal when detecting a view parameter switching signal input for the recording view parameter set, and obtaining a view switching parameter in the video switching request;
It can be understood that the view switching parameter is a view angle parameter selected by the user, on the display interface, from the set of recording view angle parameters available for the target program. If the view switching parameter is the same as the current playing view angle parameter, the target program continues to play at the current view angle; if they differ, a view angle parameter switching signal is generated.
In a specific implementation, when the user performs a touch operation on the video playing interface of the video playing terminal, inputs voice information through a microphone of the video playing terminal, or inputs a gesture through a camera of the video playing terminal, the video playing terminal is triggered to generate a video switching request carrying information such as the view switching parameter, and sends the generated request to the server. After receiving the video switching request sent by the video playing terminal, the server parses the request and extracts the view switching parameter it carries.
S705, acquiring switching video data based on the view switching parameter;
In a specific implementation, the server receives the recorded video data set corresponding to the target program, acquired by the video recording terminal in real time, and then selects the switching video data corresponding to the view switching parameter from the recorded video data set. Alternatively, after the view switching parameter carried by the request is extracted, the recorded video data set of the target program is searched for in a cached database, and the switching video data corresponding to the view switching parameter is then selected from it.
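Both lookup paths of step S705 — the cached data set and the real-time feed — can be sketched together. The request field names (`program_id`, `view_switch_param`) and the function name are illustrative assumptions, not identifiers from the patent.

```python
# Hedged sketch of step S705: the server extracts the view switching
# parameter and looks the switching video data up, preferring the
# cached data set when the target program has already been stored.

def get_switching_video(request, cache, live_feed):
    """Return the recorded video data matching the view switching
    parameter, from cache if the target program was already stored,
    otherwise from the real-time feed."""
    program_id = request["program_id"]
    switch_param = request["view_switch_param"]
    data_set = cache.get(program_id, live_feed.get(program_id, {}))
    return data_set.get(switch_param)

cache = {"prog-1": {90: "video@90", 120: "video@120"}}
video = get_switching_video(
    {"program_id": "prog-1", "view_switch_param": 120},
    cache=cache, live_feed={},
)
```

If the program is absent from the cache, the same call falls back to the real-time recorded data set, matching the two implementations described above.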
Optionally, after the server receives a recorded video data set corresponding to the program identifier acquired by the video recording terminal in real time, the server converts the recorded video data set into a code stream data set for storage, so as to adapt to different network bandwidths, different terminal processing capabilities and different user requirements.
S706, performing three-dimensional synthesis processing on the switching video data, acquiring a second storage address of the switching video data after the three-dimensional synthesis processing, and sending the second storage address to the video playing terminal, so that the video playing terminal plays the switching video data after the three-dimensional synthesis processing based on the second storage address.
In a specific implementation, the server applies a three-dimensional synthesis processing technique to the multiple video streams from the video recording terminal corresponding to the view switching parameter and the video recording terminals on its two sides: it performs data conversion to obtain parallel data of the multiple video streams, performs 3D synthesis on the parallel data, stores the 3D-synthesized data, generates a second storage address, and provides the second storage address to the video playing terminal. The data conversion and 3D synthesis of the multiple video streams are performed frame by frame: the current image data corresponding to the view switching parameter and the current image data corresponding to the view angle parameters on its left and right sides are converted and 3D-synthesized, then the next frame of image data is processed in the same manner, and so on until all the switching video data has been processed.
Note that the three-dimensional synthesis processing of the switching video data may employ any one or a combination of several 3D synthesis techniques in S703.
In the embodiment of the present invention, the server receives a video acquisition request sent by the video playing terminal when the terminal detects a three-dimensional video playing signal input for a target program, acquires the program identifier and the playing view angle parameter of the target program from the video acquisition request, acquires playing video data based on the program identifier and the playing view angle parameter, performs three-dimensional synthesis processing on the playing video data, acquires a first storage address of the processed playing video data, and sends the first storage address to the video playing terminal, so that the video playing terminal plays the processed playing video data based on the first storage address. In response to the terminal's request, the server thus queries the playing video data, performs the 3D synthesis processing, and returns the first storage address so that video playing can be completed; the 3D video is therefore played from the view angle selected by the user, which enriches the forms of video playing.
The video playing system and the device thereof according to the embodiment of the present invention will be described in detail with reference to fig. 12 to 16. It should be noted that, the video playing system shown in fig. 12 to fig. 16 and the apparatus thereof are used for executing the method of the embodiment shown in fig. 1 to fig. 11 of the present invention, for convenience of description, only the portion related to the embodiment of the present invention is shown, and details of the specific technology are not disclosed, please refer to the embodiment shown in fig. 1 to fig. 11 of the present invention.
Fig. 12 is a schematic structural diagram of a video playing system according to an embodiment of the present invention. As shown in fig. 12, the video playing system according to the embodiment of the present invention may include: a video playing terminal 1 and a server 2.
The video playing terminal 1 is configured to, when a three-dimensional video playing signal input for a target program is detected, acquire a playing view angle parameter carried by the three-dimensional video playing signal and a program identifier of the target program, and send a video acquisition request carrying the program identifier and the playing view angle parameter to a server;
the server 2 is configured to receive the video acquisition request, acquire the program identifier and the playing view parameter in the video acquisition request, and acquire playing video data based on the program identifier and the playing view parameter;
the server 2 is further configured to perform three-dimensional synthesis processing on the played video data, acquire a first storage address of the played video data after the three-dimensional synthesis processing, and send the first storage address to the video playing terminal;
the video playing terminal 1 is further configured to play the played video data after the three-dimensional synthesis processing based on the first storage address.
Optionally, the server 2 is configured to obtain playing video data based on the program identifier and the playing view parameter, and specifically configured to:
receiving a recorded video data set corresponding to the program identification acquired by a video recording terminal in real time;
and selecting the playing video data corresponding to the playing visual angle parameter from the recorded video data set.
Optionally, the server 2 is configured to select, from the recorded video data set, the played video data corresponding to the play view parameter, and specifically configured to:
acquiring recording visual angle parameters corresponding to each recording video data in the recording video data set to generate a recording visual angle parameter set;
acquiring a first recording visual angle parameter matched with the playing visual angle parameter and a second recording visual angle parameter which is left-right adjacent to the first recording visual angle parameter from the recording visual angle parameter set;
and acquiring target recorded video data corresponding to the first recorded view angle parameter and the second recorded view angle parameter from the recorded video data set, and taking the target recorded video data as playing video data.
Optionally, the server 2 is configured to obtain recording view parameters corresponding to each piece of recorded video data in the recorded video data set, so as to generate a recorded view parameter set, and then:
sending the recording visual angle parameter set to the video playing terminal;
the video playing terminal is further used for receiving and displaying the recording visual angle parameter set.
Optionally, the video playing terminal 1 is further configured to, when detecting a view parameter switching signal input for the recording view parameter set, obtain a view switching parameter carried by the view parameter switching signal, and send a video switching request carrying the view switching parameter to the server;
the server 2 is further configured to receive the video switching request, acquire the view switching parameter in the video switching request, and acquire switched video data based on the view switching parameter;
the server 2 is further configured to perform three-dimensional synthesis processing on the switching video data, acquire a second storage address of the switching video data after the three-dimensional synthesis processing, and send the second storage address to the video playing terminal;
the video playing terminal 1 is further configured to play the switching video data after the three-dimensional synthesis processing based on the second storage address.
Optionally, the server 2 is further configured to, after receiving a recorded video data set corresponding to the program identifier acquired by the video recording terminal in real time, further:
converting the recorded video data set into a code stream data set for storage;
selecting the playing video data corresponding to the playing view angle parameter from the recorded video data set, wherein the selecting step comprises the following steps:
and selecting the playing code stream data corresponding to the playing visual angle parameter from the code stream data set.
In the embodiment of the present invention, when the video playing terminal detects a three-dimensional video playing signal input for a target program, it acquires the playing view angle parameter carried by the signal and the program identifier of the target program, and sends a video acquisition request carrying the program identifier and the playing view angle parameter to the server; the server acquires playing video data based on the program identifier and the playing view angle parameter, performs three-dimensional synthesis processing on the playing video data, and returns a first storage address of the processed playing video data; the terminal then plays the processed playing video data based on the first storage address. The video acquisition request is generated from the user's operation and sent to the server, and the video is played based on the server's processing result, so that the 3D video is played from the view angle selected by the user, which enriches the forms of video playing, enhances the interactivity between the video playing device and the user, and improves the user experience.
Fig. 13 is a schematic structural diagram of a video playing device according to an embodiment of the present invention. As shown in fig. 13, the video playback device 10 according to the embodiment of the present invention may include: an information acquisition unit 101, a request transmission unit 102, and a data playback unit 103.
The information acquisition unit 101 is configured to, when a three-dimensional video playing signal input for a target program is detected, acquire a playing view angle parameter carried by the three-dimensional video playing signal and a program identifier of the target program;
a request sending unit 102, configured to send a video obtaining request carrying the program identifier and the playing view parameter to a server, so that the server obtains playing video data based on the program identifier and the playing view parameter, performs three-dimensional synthesis on the playing video data, and obtains a first storage address of the playing video data after the three-dimensional synthesis;
a data playing unit 103, configured to play the played video data after the three-dimensional synthesis processing based on the first storage address.
Optionally, the information obtaining unit 101 is further configured to obtain a view switching parameter carried by the view parameter switching signal when detecting a view parameter switching signal input for a recording view parameter set sent by the server;
the request sending unit 102 is further configured to send a video switching request carrying the view switching parameter to the server, so that the server obtains switched video data based on the view switching parameter, and obtains a second storage address of the switched video data after three-dimensional synthesis processing after performing three-dimensional synthesis processing on the switched video data;
the data playing unit 103 is further configured to play the switching video data after the three-dimensional synthesis processing based on the second storage address.
In the embodiment of the present invention, when the video playing terminal detects a three-dimensional video playing signal input for a target program, it acquires the playing view angle parameter carried by the signal and the program identifier of the target program, and sends a video acquisition request carrying the program identifier and the playing view angle parameter to the server; after receiving the request, the server acquires playing video data based on the program identifier and the playing view angle parameter, performs three-dimensional synthesis processing on the playing video data, generates a first storage address of the processed playing video data, and provides it to the video playing terminal, so that the terminal plays the processed playing video data based on the first storage address. The terminal generates the request, the server queries the playing video data and performs the 3D synthesis processing based on that request, and the terminal then plays the result, so that the 3D video is played from the view angle selected by the user, which enriches the forms of video playing, enhances the interactivity between the video playing device and the user, and improves the user experience.
Referring to fig. 14, a schematic structural diagram of another video playback device is provided in the embodiment of the present invention. As shown in fig. 14, the video playback device 20 according to the embodiment of the present invention may include: a parameter acquisition unit 201, a data acquisition unit 202, and an address acquisition unit 203.
A parameter obtaining unit 201, configured to receive a video obtaining request sent by a video playing terminal when detecting a three-dimensional video playing signal input for a target program, and obtain a program identifier and a playing view parameter of the target program in the video obtaining request;
a data obtaining unit 202, configured to obtain played video data based on the program identifier and the playing view parameter;
optionally, as shown in fig. 15, the data obtaining unit 202 includes:
the data set receiving subunit 2021 is configured to receive a recorded video data set corresponding to the program identifier, where the recorded video data set is acquired by the video recording terminal in real time;
a data selecting subunit 2022, configured to select, from the recorded video data set, playing video data corresponding to the playing view parameter.
Further, the data selecting subunit 2022 is specifically configured to:
acquiring recording visual angle parameters corresponding to each recording video data in the recording video data set to generate a recording visual angle parameter set;
sending the recording visual angle parameter set to the video playing terminal so that the video playing terminal receives and displays the recording visual angle parameter set;
acquiring a first recording visual angle parameter matched with the playing visual angle parameter and a second recording visual angle parameter which is left-right adjacent to the first recording visual angle parameter from the recording visual angle parameter set;
and acquiring target recorded video data corresponding to the first recorded view angle parameter and the second recorded view angle parameter from the recorded video data set, and taking the target recorded video data as playing video data.
Optionally, as shown in fig. 16, the data obtaining unit 202 further includes:
a data set storage subunit 2023, configured to convert the recorded video data set into a code stream data set for storage;
the data selecting subunit 2022 is specifically configured to select, from the code stream data set, playing code stream data corresponding to the playing view parameter.
An address obtaining unit 203, configured to perform three-dimensional synthesis processing on the played video data, obtain a first storage address of the played video data after the three-dimensional synthesis processing, and send the first storage address to the video playing terminal, so that the video playing terminal plays the played video data after the three-dimensional synthesis processing based on the first storage address.
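The cooperation of the three units above (parameter acquisition, data acquisition, address acquisition) can be sketched as one server-side handler. This is a compact illustration only: the class name, the storage-address scheme, and the `("3d", ...)` stand-in for synthesized data are all hypothetical.

```python
# Compact sketch mapping the parameter acquisition, data acquisition,
# and address acquisition units onto one server-side handler; the
# storage scheme and synthesis stand-in are illustrative.

class PlaybackServer:
    def __init__(self, recorded_sets, storage):
        self.recorded_sets = recorded_sets   # program id -> {view: data}
        self.storage = storage               # address -> synthesized data

    def handle_acquire(self, request):
        # Parameter acquisition unit: read identifier and view parameter.
        program_id = request["program_id"]
        view_param = request["view_param"]
        # Data acquisition unit: select the playing video data.
        playing_data = self.recorded_sets[program_id][view_param]
        # Address acquisition unit: synthesize, store, return the
        # first storage address for the terminal to play from.
        synthesized = ("3d", playing_data)
        address = f"store://{program_id}/{view_param}"
        self.storage[address] = synthesized
        return address

server = PlaybackServer({"prog-1": {90: "video@90"}}, storage={})
addr = server.handle_acquire({"program_id": "prog-1", "view_param": 90})
```

Splitting the handler along these three responsibilities mirrors the unit decomposition of fig. 14, which keeps the request parsing, data selection, and synthesis/storage steps independently replaceable.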
In the embodiment of the present invention, the server receives a video acquisition request sent by the video playing terminal when the terminal detects a three-dimensional video playing signal input for a target program, acquires the program identifier and the playing view angle parameter of the target program from the video acquisition request, acquires playing video data based on the program identifier and the playing view angle parameter, performs three-dimensional synthesis processing on the playing video data, acquires a first storage address of the processed playing video data, and sends the first storage address to the video playing terminal, so that the video playing terminal plays the processed playing video data based on the first storage address. In response to the terminal's request, the server thus queries the playing video data, performs the 3D synthesis processing, and returns the first storage address so that video playing can be completed; the 3D video is therefore played from the view angle selected by the user, which enriches the forms of video playing.
An embodiment of the present invention further provides a computer storage medium, where the computer storage medium may store a plurality of instructions, where the instructions are suitable for being loaded by a processor and executing the method steps in the embodiments shown in fig. 1 to 11, and a specific execution process may refer to specific descriptions of the embodiments shown in fig. 1 to 11, which are not described herein again.
Fig. 17 is a schematic structural diagram of a video playback terminal according to an embodiment of the present invention. As shown in fig. 17, the video playback terminal 1000 may include: at least one processor 1001 (e.g., a CPU), at least one network interface 1004, a user interface 1003, a memory 1005, and at least one communication bus 1002. The communication bus 1002 is used to implement connection communication between these components. The user interface 1003 may include a display (Display) and a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a standard wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory. The memory 1005 may optionally also be at least one storage device located remotely from the processor 1001. As shown in fig. 17, the memory 1005, which is a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and a video playing application program.
In the video playback terminal 1000 shown in fig. 17, the user interface 1003 is mainly used as an interface for providing input for a user, and acquiring data input by the user; the network interface 1004 is mainly used for data communication with the user terminal; and the processor 1001 may be configured to call the video playing application stored in the memory 1005, and specifically perform the following operations:
when a three-dimensional video playing signal input aiming at a target program is detected, acquiring a playing visual angle parameter carried by the three-dimensional video playing signal and a program identifier of the target program;
sending a video acquisition request carrying the program identifier and the playing view angle parameter to a server, so that the server acquires playing video data based on the program identifier and the playing view angle parameter, performs three-dimensional synthesis processing on the playing video data, and acquires a first storage address of the playing video data after the three-dimensional synthesis processing;
and playing the playing video data after the three-dimensional synthesis processing based on the first storage address.
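The three terminal-side operations above can be sketched as a single flow. This is a hedged sketch: `fake_server` is an illustrative stand-in for the server of fig. 18, and the request field names are assumptions, not identifiers from the patent.

```python
# Terminal-side sketch: build the video acquisition request from the
# three-dimensional video playing signal, send it, and return the
# first storage address from which the synthesized video is played.

def fake_server(request):
    # Stand-in for the server: queries playing video data, runs the
    # 3D synthesis, and returns the first storage address.
    return f"store://{request['program_id']}/{request['view_param']}"

def on_play_signal(program_id, view_param, send=fake_server):
    """Assemble the video acquisition request carrying the program
    identifier and playing view angle parameter, and return the
    playable first storage address provided by the server."""
    request = {"program_id": program_id, "view_param": view_param}
    first_address = send(request)
    return first_address

address = on_play_signal("prog-1", 90)
```

In the real terminal, `send` would be a network call to the server over the network interface 1004, and `address` would be handed to the player component rather than returned to the caller.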
In one embodiment, the processor 1001 further performs the following operations:
when detecting a view angle parameter switching signal input for a recording view angle parameter set sent by the server, acquiring a view angle switching parameter carried by the view angle parameter switching signal;
sending a video switching request carrying the view switching parameter to the server so that the server acquires switched video data based on the view switching parameter, and acquiring a second storage address of the switched video data after three-dimensional synthesis processing after the switched video data is subjected to three-dimensional synthesis processing;
and playing the switching video data after the three-dimensional synthesis processing based on the second storage address.
In the embodiment of the invention, when a video playing terminal detects a three-dimensional video playing signal input aiming at a target program, a playing visual angle parameter carried by the three-dimensional video playing signal and a program identifier of the target program are obtained, and a video obtaining request carrying the program identifier and the playing visual angle parameter is sent to a server, so that the server obtains playing video data based on the program identifier and the playing visual angle parameter, obtains a first storage address of the playing video data after three-dimensional synthesis processing after the playing video data is subjected to three-dimensional synthesis processing, and then plays the playing video data after the three-dimensional synthesis processing based on the first storage address. The video acquisition request is generated based on the operation of the user and then sent to the server, and the video is played based on the processing result of the server, so that the 3D video is played based on the visual angle selected by the user, the video playing form is enriched, the interactivity of the video playing equipment and the user is enhanced, and the user experience is improved.
Fig. 18 is a schematic structural diagram of a server according to an embodiment of the present invention. As shown in fig. 18, the server 2000 may include: at least one processor 2001 (e.g., a CPU), at least one network interface 2004, a user interface 2003, a memory 2005, and at least one communication bus 2002. The communication bus 2002 is used to implement connection communication between these components. The user interface 2003 may include a display (Display) and a keyboard (Keyboard); optionally, the user interface 2003 may also include a standard wired interface and a standard wireless interface. The network interface 2004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 2005 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory. The memory 2005 may optionally also be at least one storage device located remotely from the processor 2001. As shown in fig. 18, the memory 2005, which is a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and a video playing application program.
In the server 2000 shown in fig. 18, the user interface 2003 is mainly used as an interface for providing input for a user, and acquiring data input by the user; the network interface 2004 is mainly used for data communication with the user terminal; the processor 2001 may be configured to call the video playing application stored in the memory 2005, and specifically perform the following operations:
receiving a video acquisition request sent by a video playing terminal when a three-dimensional video playing signal input aiming at a target program is detected, and acquiring a program identifier and a playing view angle parameter of the target program in the video acquisition request;
acquiring playing video data based on the program identification and the playing visual angle parameter;
and performing three-dimensional synthesis processing on the played video data, acquiring a first storage address of the played video data after the three-dimensional synthesis processing, and sending the first storage address to the video playing terminal, so that the video playing terminal plays the played video data after the three-dimensional synthesis processing based on the first storage address.
In an embodiment, when the processor 2001 executes obtaining playing video data based on the program identifier and the playing angle parameter, the following steps are specifically executed:
receiving a recorded video data set corresponding to the program identification acquired by a video recording terminal in real time;
and selecting the playing video data corresponding to the playing visual angle parameter from the recorded video data set.
In an embodiment, when selecting the playing video data corresponding to the playing view angle parameter from the recorded video data set, the processor 2001 specifically performs the following steps:
acquiring the recording view angle parameter corresponding to each piece of recorded video data in the recorded video data set to generate a recording view angle parameter set;
acquiring, from the recording view angle parameter set, a first recording view angle parameter matched with the playing view angle parameter and a second recording view angle parameter adjacent to the left and right of the first recording view angle parameter;
and acquiring, from the recorded video data set, target recorded video data corresponding to the first recording view angle parameter and the second recording view angle parameter, and taking the target recorded video data as the playing video data.
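The matched-plus-adjacent selection described in this embodiment might look like the sketch below, assuming the recording view angle parameters are numeric angles and "matched" means closest angle; the function name and that matching rule are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch: pick the recording view angle closest to the
# requested playing view angle, plus its left and right neighbours.

def select_play_views(recorded_views, play_view):
    """Return (matched view, [left/right adjacent views])."""
    views = sorted(recorded_views)
    # first recording view angle parameter: closest to the request
    i = min(range(len(views)), key=lambda k: abs(views[k] - play_view))
    # second recording view angle parameters: adjacent entries, when
    # they exist (an edge view has only one neighbour)
    neighbours = [views[j] for j in (i - 1, i + 1) if 0 <= j < len(views)]
    return views[i], neighbours
```

The recorded video data for the returned angles would then serve as the playing video data for three-dimensional synthesis.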
In an embodiment, after acquiring the recording view angle parameter corresponding to each piece of recorded video data in the recorded video data set to generate the recording view angle parameter set, the processor 2001 further performs the following step:
sending the recording view angle parameter set to the video playing terminal, so that the video playing terminal receives and displays the recording view angle parameter set.
In one embodiment, the processor 2001 further performs the following steps:
receiving a video switching request sent by the video playing terminal when a view parameter switching signal input for the recording view angle parameter set is detected, and acquiring the view switching parameter carried in the video switching request;
acquiring switching video data based on the view switching parameter;
and performing three-dimensional synthesis processing on the switching video data, acquiring a second storage address of the switching video data after the three-dimensional synthesis processing, and sending the second storage address to the video playing terminal so that the video playing terminal plays the switching video data after the three-dimensional synthesis processing based on the second storage address.
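The switching flow just described (receive a switch request, select the switching video data, synthesize it, return a second storage address) can be sketched as follows, under the same illustrative assumptions as before; the function name and address format are hypothetical.

```python
# Hypothetical sketch of view-switch handling on the server side.

def handle_switch_request(recordings, program_id, switch_view):
    # recordings: {(program_id, view_angle): raw recorded video data}
    data = recordings.get((program_id, switch_view))
    if data is None:
        return None, None
    # stand-in for the three-dimensional synthesis processing
    synthesized = f"3D({data})"
    # second storage address sent back to the playing terminal
    second_address = f"/videos/{program_id}/{switch_view}/switched"
    return second_address, synthesized
```

The playing terminal would issue such a request only when the newly selected view angle differs from the one currently playing.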
In one embodiment, the processor 2001 further performs the following steps after receiving the recorded video data set corresponding to the program identifier collected by the video recording terminal in real time:
converting the recorded video data set into a code stream data set for storage;
the selecting of the playing video data corresponding to the playing view angle parameter from the recorded video data set then includes:
and selecting the playing code stream data corresponding to the playing view angle parameter from the code stream data set.
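The code-stream embodiment above (convert the recorded video data set into a code stream data set, then select the stream matching the playing view angle parameter) might look like the following sketch; `encode()` stands in for a real transcoder, and all names are hypothetical.

```python
# Hypothetical sketch: store transcoded streams keyed by recording
# view angle, then select one by the playing view angle parameter.

def encode(raw):
    # placeholder for transcoding raw recorded video into code stream data
    return f"stream({raw})"

def build_stream_set(recorded_set):
    # recorded_set: {view_angle: raw recorded video data}
    return {view: encode(raw) for view, raw in recorded_set.items()}

def select_stream(stream_set, play_view):
    # returns None when no stream exists for the requested view angle
    return stream_set.get(play_view)
```

Storing pre-encoded streams per view angle lets the server answer a playing or switching request with a lookup rather than re-encoding on demand.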
In the embodiment of the invention, the server receives a video acquisition request sent by the video playing terminal when a three-dimensional video playing signal input for a target program is detected, and acquires the program identifier of the target program and the playing view angle parameter from the video acquisition request. The server then acquires playing video data based on the program identifier and the playing view angle parameter, performs three-dimensional synthesis processing on the playing video data, acquires a first storage address of the playing video data after the three-dimensional synthesis processing, and sends the first storage address to the video playing terminal, so that the video playing terminal plays the playing video data after the three-dimensional synthesis processing based on the first storage address. In other words, in response to the request sent by the video playing terminal, the server queries the playing video data, performs the 3D synthesis processing, and sends the resulting first storage address to the video playing terminal to complete video playing, so that the 3D video is played from the view angle selected by the user, which enriches the forms of video playing.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure describes only preferred embodiments of the present invention and is of course not intended to limit the scope of the claims; equivalent changes made according to the claims of the present invention still fall within the scope of the invention.

Claims (12)

1. A video playback method, comprising:
the video playing terminal identifies a received operation, the video playing terminal supporting a three-dimensional playing mode and a two-dimensional playing mode; when the video playing terminal detects a three-dimensional video playing signal input for a target program, the video playing terminal acquires a playing view angle parameter carried by the three-dimensional video playing signal and a program identifier of the target program, and sends a video acquisition request carrying the program identifier and the playing view angle parameter to a server, wherein the playing view angle parameter comprises one view angle parameter selected from the playable recording view angle parameters of the target program displayed by the video playing terminal in a list form, or comprises a default view angle parameter when the user does not select one, and the playable recording view angle parameters of the target program refer to a plurality of recording view angle parameters respectively corresponding to a plurality of video recording terminals of the target program;
the server receives the video acquisition request, acquires the program identifier and the playing view angle parameter from the video acquisition request, acquires a first recording view angle parameter matched with the playing view angle parameter and a second recording view angle parameter adjacent to the left and right of the first recording view angle parameter, acquires target recorded video data corresponding to the first recording view angle parameter and the second recording view angle parameter, and takes the target recorded video data as playing video data, wherein the view angle parameters adjacent to the left and right of each recording view angle parameter are obtained in advance and cached, and the second recording view angle parameter is obtained from the cache;
the server converts the playing video data into parallel data by using a three-dimensional synthesis processing technology, performs three-dimensional synthesis processing on the parallel data, acquires a first storage address of the playing video data after the three-dimensional synthesis processing, and sends the first storage address to the video playing terminal;
the video playing terminal plays the played video data after the three-dimensional synthesis processing based on the first storage address;
when the video playing terminal detects a view angle parameter switching signal, the video playing terminal acquires a view angle switching parameter carried by the view angle parameter switching signal, wherein the view angle switching parameter is one view angle parameter selected by the user from the playable recording view angle parameters of the target program; and if the view angle switching parameter is different from the playing view angle parameter, the video playing terminal generates a video switching request carrying the view angle switching parameter and sends the video switching request to the server;
the server receives the video switching request, and acquires recorded video data corresponding to the view switching parameter and recorded video data corresponding to the recorded view parameters adjacent to the left and right of the view switching parameter as switched video data;
the server carries out three-dimensional synthesis processing on the switching video data, acquires a second storage address of the switching video data after the three-dimensional synthesis processing, and sends the second storage address to the video playing terminal;
and the video playing terminal plays the switching video data after the three-dimensional synthesis processing based on the second storage address.
2. The method of claim 1, further comprising:
the server receives, in real time, a recorded video data set corresponding to the program identifier and collected by the video recording terminal;
and the server selects the playing video data corresponding to the playing view angle parameter from the recorded video data set.
3. The method according to claim 2, wherein the selecting, by the server, of the playing video data corresponding to the playing view angle parameter from the recorded video data set comprises:
the server acquires the recording view angle parameter corresponding to each piece of recorded video data in the recorded video data set to generate a recording view angle parameter set;
the server acquires, from the recording view angle parameter set, a first recording view angle parameter matched with the playing view angle parameter and a second recording view angle parameter adjacent to the left and right of the first recording view angle parameter;
and the server acquires, from the recorded video data set, target recorded video data corresponding to the first recording view angle parameter and the second recording view angle parameter, and takes the target recorded video data as playing video data.
4. The method according to claim 3, wherein after the server acquires the recording view angle parameter corresponding to each piece of recorded video data in the recorded video data set to generate the recording view angle parameter set, the method further comprises:
the server sends the recording view angle parameter set to the video playing terminal;
and the video playing terminal receives and displays the recording view angle parameter set.
5. The method according to claim 2, wherein after the server receives the recorded video data set corresponding to the program identifier collected by the video recording terminal in real time, the method further comprises:
the server converts the recorded video data set into a code stream data set for storage;
and the selecting, by the server, of the playing video data corresponding to the playing view angle parameter from the recorded video data set comprises:
and the server selects the playing code stream data corresponding to the playing view angle parameter from the code stream data set.
6. A video playing system, comprising a video playing terminal and a server, wherein:
the video playing terminal is used for identifying a received operation, the video playing terminal supporting a three-dimensional playing mode and a two-dimensional playing mode; when a three-dimensional video playing signal input for a target program is detected, acquiring a playing view angle parameter carried by the three-dimensional video playing signal and a program identifier of the target program, and sending a video acquisition request carrying the program identifier and the playing view angle parameter to the server, wherein the playing view angle parameter comprises one view angle parameter selected from the playable recording view angle parameters of the target program displayed by the video playing terminal in a list form, or comprises a default view angle parameter when the user does not select one, and the playable recording view angle parameters of the target program refer to a plurality of recording view angle parameters respectively corresponding to a plurality of video recording terminals of the target program;
the server is used for receiving the video acquisition request, acquiring the program identifier and the playing view angle parameter from the video acquisition request, acquiring a first recording view angle parameter matched with the playing view angle parameter and a second recording view angle parameter adjacent to the left and right of the first recording view angle parameter, acquiring target recorded video data corresponding to the first recording view angle parameter and the second recording view angle parameter, and taking the target recorded video data as playing video data, wherein the view angle parameters adjacent to the left and right of each recording view angle parameter are obtained in advance and cached, and the second recording view angle parameter is obtained from the cache;
the server is further configured to convert the played video data to obtain parallel data by using a three-dimensional synthesis processing technology, perform three-dimensional synthesis processing on the parallel data, obtain a first storage address of the played video data after the three-dimensional synthesis processing, and send the first storage address to the video playing terminal;
the video playing terminal is further used for playing the played video data after the three-dimensional synthesis processing based on the first storage address;
the video playing terminal is further configured to, when a view angle parameter switching signal is detected, acquire a view angle switching parameter carried by the view angle parameter switching signal, wherein the view angle switching parameter is one view angle parameter selected by the user from the playable recording view angle parameters of the target program; and if the view angle switching parameter is different from the playing view angle parameter, generate a video switching request carrying the view angle switching parameter and send the video switching request to the server;
the server is further configured to receive the video switching request, and acquire recorded video data corresponding to the view switching parameter and recorded video data corresponding to left and right adjacent recorded view parameters of the view switching parameter as switched video data; performing three-dimensional synthesis processing on the switching video data, acquiring a second storage address of the switching video data after the three-dimensional synthesis processing, and sending the second storage address to the video playing terminal;
and the video playing terminal is also used for playing the switching video data after the three-dimensional synthesis processing based on the second storage address.
7. The system of claim 6, wherein the server is specifically configured to:
receiving, in real time, a recorded video data set corresponding to the program identifier and collected by a video recording terminal;
and selecting the playing video data corresponding to the playing view angle parameter from the recorded video data set.
8. The system according to claim 7, wherein when selecting the playing video data corresponding to the playing view angle parameter from the recorded video data set, the server is specifically configured to:
acquire the recording view angle parameter corresponding to each piece of recorded video data in the recorded video data set to generate a recording view angle parameter set;
acquire, from the recording view angle parameter set, a first recording view angle parameter matched with the playing view angle parameter and a second recording view angle parameter adjacent to the left and right of the first recording view angle parameter;
and acquire, from the recorded video data set, target recorded video data corresponding to the first recording view angle parameter and the second recording view angle parameter, and take the target recorded video data as playing video data.
9. The system of claim 8, wherein after acquiring the recording view angle parameter corresponding to each piece of recorded video data in the recorded video data set to generate the recording view angle parameter set, the server is further configured to:
send the recording view angle parameter set to the video playing terminal;
and the video playing terminal is further configured to receive and display the recording view angle parameter set.
10. The system of claim 7, wherein after receiving the recorded video data set corresponding to the program identifier collected by the video recording terminal in real time, the server is further configured to:
convert the recorded video data set into a code stream data set for storage;
wherein selecting the playing video data corresponding to the playing view angle parameter from the recorded video data set comprises:
selecting the playing code stream data corresponding to the playing view angle parameter from the code stream data set.
11. A server, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the steps of:
receiving a video acquisition request sent by a video playing terminal when a three-dimensional video playing signal input for a target program is detected, and acquiring a program identifier of the target program and a playing view angle parameter from the video acquisition request, wherein the playing view angle parameter comprises one view angle parameter selected from the playable recording view angle parameters of the target program displayed by the video playing terminal in a list form, or comprises a default view angle parameter when the user does not select one, and the playable recording view angle parameters of the target program refer to a plurality of recording view angle parameters respectively corresponding to a plurality of video recording terminals of the target program; the video playing terminal supports a three-dimensional playing mode and a two-dimensional playing mode;
acquiring a first recording view angle parameter matched with the playing view angle parameter and a second recording view angle parameter adjacent to the left and right of the first recording view angle parameter, acquiring target recorded video data corresponding to the first recording view angle parameter and the second recording view angle parameter, and taking the target recorded video data as playing video data, wherein the view angle parameters adjacent to the left and right of each recording view angle parameter are obtained in advance and cached, and the second recording view angle parameter is obtained from the cache; converting the playing video data into parallel data by using a three-dimensional synthesis processing technology, performing three-dimensional synthesis processing on the parallel data, acquiring a first storage address of the playing video data after the three-dimensional synthesis processing, and sending the first storage address to the video playing terminal, so that the video playing terminal plays the playing video data after the three-dimensional synthesis processing based on the first storage address;
receiving a video switching request generated when the video playing terminal detects a view angle parameter switching signal and a view angle switching parameter carried in the view angle parameter switching signal is different from the playing view angle parameter, wherein the video switching request carries the view angle switching parameter, and the view angle switching parameter is one view angle parameter selected by a user from the playable recording view angle parameters of the target program;
acquiring recorded video data corresponding to the view switching parameter and recorded video data corresponding to the recorded view parameters adjacent to the left and right of the view switching parameter as switched video data;
and performing three-dimensional synthesis processing on the switching video data, acquiring a second storage address of the switching video data after the three-dimensional synthesis processing, and sending the second storage address to the video playing terminal, so that the video playing terminal plays the switching video data after the three-dimensional synthesis processing based on the second storage address.
12. A computer storage medium having stored thereon a plurality of instructions adapted to be loaded and executed by a processor to:
identifying a received operation, and when a three-dimensional video playing signal input for a target program is detected, acquiring a playing view angle parameter carried by the three-dimensional video playing signal and a program identifier of the target program, wherein the playing view angle parameter comprises one view angle parameter selected from the playable recording view angle parameters of the target program displayed by the video playing terminal in a list form, or comprises a default view angle parameter when the user does not select one, and the playable recording view angle parameters of the target program refer to a plurality of recording view angle parameters respectively corresponding to a plurality of video recording terminals of the target program;
sending a video acquisition request carrying the program identifier and the playing view angle parameter to a server, so that the server acquires a first recording view angle parameter matched with the playing view angle parameter and a second recording view angle parameter adjacent to the left and right of the first recording view angle parameter, acquires target recorded video data corresponding to the first recording view angle parameter and the second recording view angle parameter, and takes the target recorded video data as playing video data, wherein the view angle parameters adjacent to the left and right of each recording view angle parameter are obtained in advance and cached, and the second recording view angle parameter is obtained from the cache; and so that the server converts the playing video data into parallel data by using a three-dimensional synthesis processing technology, performs three-dimensional synthesis processing on the parallel data, and acquires a first storage address of the playing video data after the three-dimensional synthesis processing;
playing the playing video data after the three-dimensional synthesis processing based on the first storage address;
when a view angle parameter switching signal is detected, acquiring a view angle switching parameter carried by the view angle parameter switching signal, wherein the view angle switching parameter is one view angle parameter selected by the user from the playable recording view angle parameters of the target program; and if the view angle switching parameter is different from the playing view angle parameter, generating a video switching request carrying the view angle switching parameter and sending the video switching request to the server;
after receiving a second storage address sent by the server, playing the switching video data after the three-dimensional synthesis processing based on the second storage address; and
the instructions are adapted to be loaded and executed by a processor to:
receiving a video acquisition request sent by a video playing terminal when a three-dimensional video playing signal input for a target program is detected, and acquiring a program identifier and a playing view angle parameter of the target program in the video acquisition request, wherein the received operation is identified by the video playing terminal; the video playing terminal supports a three-dimensional playing mode and a two-dimensional playing mode;
acquiring a first recording view angle parameter matched with the playing view angle parameter and a second recording view angle parameter adjacent to the left and right of the first recording view angle parameter, acquiring target recorded video data corresponding to the first recording view angle parameter and the second recording view angle parameter, and taking the target recorded video data as playing video data, wherein the view angle parameters adjacent to the left and right of each recording view angle parameter are obtained in advance and cached, and the second recording view angle parameter is obtained from the cache;
converting the playing video data to obtain parallel data by adopting a three-dimensional synthesis processing technology, performing three-dimensional synthesis processing on the parallel data, obtaining a first storage address of the playing video data after the three-dimensional synthesis processing, and sending the first storage address to the video playing terminal so that the video playing terminal plays the playing video data after the three-dimensional synthesis processing based on the first storage address;
receiving the video switching request, and acquiring recorded video data corresponding to the view switching parameter and recorded video data corresponding to the recorded view parameters adjacent to the left and right of the view switching parameter as switched video data;
and performing three-dimensional synthesis processing on the switching video data, acquiring a second storage address of the switching video data after the three-dimensional synthesis processing, and sending the second storage address to the video playing terminal.
CN201810161973.1A 2018-02-26 2018-02-26 Video playing method and device, system, storage medium, terminal and server thereof Active CN110198457B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810161973.1A CN110198457B (en) 2018-02-26 2018-02-26 Video playing method and device, system, storage medium, terminal and server thereof

Publications (2)

Publication Number Publication Date
CN110198457A CN110198457A (en) 2019-09-03
CN110198457B (en) 2022-09-02

Family

ID=67750797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810161973.1A Active CN110198457B (en) 2018-02-26 2018-02-26 Video playing method and device, system, storage medium, terminal and server thereof

Country Status (1)

Country Link
CN (1) CN110198457B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110856010A (en) * 2019-11-27 2020-02-28 北京翔云颐康科技发展有限公司 Video playing method and device, storage medium and electronic equipment
CN113794936B (en) * 2021-09-09 2023-06-13 北京字节跳动网络技术有限公司 Method, device, system, equipment and medium for generating highlight instant

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008193693A (en) * 2002-12-13 2008-08-21 Sharp Corp Image data creation device, and image data reproduction device
CN102157011A (en) * 2010-12-10 2011-08-17 北京大学 Method for carrying out dynamic texture acquisition and virtuality-reality fusion by using mobile shooting equipment
CN107094263A (en) * 2017-03-13 2017-08-25 华为技术有限公司 A kind of video broadcasting method, user terminal and server
CN107333162A (en) * 2017-06-26 2017-11-07 广州华多网络科技有限公司 A kind of method and apparatus for playing live video

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101453662B (en) * 2007-12-03 2012-04-04 华为技术有限公司 Stereo video communication terminal, system and method
CA2948642A1 (en) * 2014-05-29 2015-12-03 Nextvr Inc. Methods and apparatus for delivering content and/or playing back content
CN104602129B (en) * 2015-01-27 2018-03-06 三星电子(中国)研发中心 The player method and system of interactive multi-angle video
CN107197318A (en) * 2017-06-19 2017-09-22 深圳市望尘科技有限公司 A kind of real-time, freedom viewpoint live broadcasting method shot based on multi-cam light field

Similar Documents

Publication Publication Date Title
JP6309749B2 (en) Image data reproducing apparatus and image data generating apparatus
US11330310B2 (en) Encoding device and method, reproduction device and method, and program
US8390674B2 (en) Method and apparatus for reducing fatigue resulting from viewing three-dimensional image display, and method and apparatus for generating data stream of low visual fatigue three-dimensional image
KR101210315B1 (en) Recommended depth value for overlaying a graphics object on three-dimensional video
US11539983B2 (en) Virtual reality video transmission method, client device and server
KR102246305B1 (en) Augmented media service providing method, apparatus thereof, and system thereof
JP2015187797A (en) Image data generation device and image data reproduction device
KR101883018B1 (en) Method and device for providing supplementary content in 3d communication system
US20150304640A1 (en) Managing 3D Edge Effects On Autostereoscopic Displays
KR20120011573A (en) A system, an apparatus and a method for displaying a 3-dimensional image
US20170225077A1 (en) Special video generation system for game play situation
CN110198457B (en) Video playing method and device, system, storage medium, terminal and server thereof
JP2017123503A (en) Video distribution apparatus, video distribution method and computer program
CN114449303A (en) Live broadcast picture generation method and device, storage medium and electronic device
KR101430985B1 (en) System and Method on Providing Multi-Dimensional Content
CN109862385B (en) Live broadcast method and device, computer readable storage medium and terminal equipment
JP6934052B2 (en) Display control device, display control method and program
KR20110060180A (en) Method and apparatus for producing 3d models by interactively selecting interested objects
EP2590419A2 (en) Multi-depth adaptation for video content
KR101085718B1 (en) System and method for offering augmented reality using server-side distributed image processing
CN108683900B (en) Image data processing method and device
CN111726598A (en) Image processing method and device
Tekalp et al. Special Issue on 3-D Media and Displays [Scanning the Issue]
KR101221540B1 (en) Interactive media mapping system and method thereof
Nagao et al. Arena-style immersive live experience (ILE) services and systems: Highly realistic sensations for everyone in the world

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant