CN113938711A - Visual angle switching method and device, user side, server and storage medium
- Publication number: CN113938711A (application number CN202111194139.0A)
- Authority: CN (China)
- Prior art keywords: view, video, target video, visual angle, angle
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
(All classifications fall under H04N21/00, Selective content distribution, e.g. interactive television or video on demand [VOD], within H04N, Pictorial communication, e.g. television.)
- H04N21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2393: Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests, involving handling client requests
- H04N21/437: Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
- H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/47202: End-user interface for requesting content, additional data or services; requesting content on demand, e.g. video on demand
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The embodiment of the application provides a view switching method and device, a user side, a server and a storage medium. The scheme is as follows: sending a first acquisition request for a target video at a first view angle to a server; receiving a first video set of the target video at a plurality of view angles returned by the server, wherein the plurality of view angles include the first view angle; playing the target video at the first view angle in the first video set; and when a view switching instruction for switching the first view angle to a second view angle is received, switching the currently played target video at the first view angle to the target video at the second view angle based on the first video set. Through the technical scheme provided by the embodiment of the application, the time required by the view switching process is shortened, and the probability of the pause (stuttering) phenomenon is reduced.
Description
Technical Field
The present application relates to the field of streaming media technologies, and in particular, to a method and an apparatus for switching a view angle, a client, a server, and a storage medium.
Background
Live video streaming and video shooting often involve switching the viewing angle. For example, during a live broadcast, a user may choose to switch the currently played video from the video captured at viewing angle A to the video captured at viewing angle B, so as to watch the picture from viewing angle B.
At present, since the videos corresponding to different viewing angles belong to different video streams, the video stream corresponding to the switched-to viewing angle needs to be acquired anew during viewing-angle switching. This greatly increases the time required by the switching process and may even cause a pause phenomenon during the switch.
Disclosure of Invention
An object of the embodiments of the present application is to provide a view switching method and apparatus, a user side, a server and a storage medium, so as to shorten the time required by the view switching process and reduce the probability of the pause phenomenon. The specific technical scheme is as follows:
in a first aspect of the present application, a method for switching a view angle is provided, where the method is applied to a user side, and the method includes:
sending a first acquisition request aiming at a target video under a first visual angle to a server;
receiving a first video set of target videos under a plurality of views returned by the server, wherein the plurality of views comprise the first view;
playing a target video under the first visual angle in the first video set;
and when a visual angle switching instruction for switching the first visual angle to the second visual angle is received, switching the currently played target video at the first visual angle to the target video at the second visual angle based on the first video set.
Optionally, a resolution of the target video at the first view in the first video set is higher than a resolution of the target video at a third view, where the third view is any view of the multiple views except the first view.
Optionally, the resolution of the target video at each third view angle in the first video set is in negative correlation with the target distance corresponding to the third view angle, and the target distance corresponding to the third view angle is the distance between the camera at the third view angle and the camera at the first view angle.
Optionally, the method further includes:
when the currently played target video at the first view angle is switched to the target video at a second view angle included in the first video set, sending a second acquisition request for the target video at the second view angle to the server;
receiving a second video set of the target video at a plurality of visual angles returned by the server; the resolution of the target video at the second view angle in the second video set is higher than the resolution of the target video at the first view angle in the first video set;
and switching the currently played target video under the second visual angle into the target video under the second visual angle in the second video set.
Optionally, when receiving a view switching instruction for switching the first view to the second view, the step of switching the currently played target video at the first view to the target video at the second view based on the first video set includes:
when a view switching instruction for switching the first view to a second view is received, if the first video set includes the target video at the second view, the target video at the second view in the first video set is played based on a time point of the currently played target video at the first view.
Optionally, when receiving a view switching instruction for switching the first view to the second view, the step of switching the currently played target video at the first view to the target video at the second view based on the first video set includes:
when a view switching instruction for switching the first view to a second view is received, if the target video under the second view is not included in the first video set, synthesizing the target video under the second view based on the target video under each view in the first video set;
and playing the target video under the second visual angle based on the currently played time point of the target video under the first visual angle.
Optionally, if the target video is a live video, each target video in the first video set is a foreground image group and a foreground depth image group in a live scene;
the step of playing the target video in the first video set under the first view angle includes:
acquiring a preset background image group and a preset background depth image group;
acquiring a foreground image group and a foreground depth image group under a first visual angle in the first video set;
performing image fusion on the preset background image group and the foreground image group under the first visual angle based on the preset background depth image group and the foreground depth image group under the first visual angle to obtain a first fusion video under the first visual angle;
playing the first fused video;
the step of switching the currently played target video at the first view angle to the target video at the second view angle based on the first video set includes:
acquiring the preset background image group and the preset background depth image group;
acquiring a foreground image group and a foreground depth image group under a second visual angle based on the first video set;
performing image fusion on the preset background image group and the foreground image group under the second visual angle based on the preset background depth image group and the foreground depth image group under the second visual angle to obtain a second fusion video under the second visual angle;
and playing the second fusion video.
In a second aspect of the present application, there is also provided a method for switching a view angle, which is applied to a server, and the method includes:
receiving a first acquisition request aiming at a target video under a first visual angle, which is sent by a user side;
acquiring a target video under a plurality of visual angles based on the first acquisition request to obtain a first video set, wherein the plurality of visual angles comprise the first visual angle;
and sending the first video set to the user side, so that the user side plays the target video under the first visual angle in the first video set after receiving the first video set, and switches the currently played target video under the first visual angle to the target video under the second visual angle based on the first video set when receiving a visual angle switching instruction for switching the first visual angle to the second visual angle.
In a third aspect of the present application, there is provided a device for switching a viewing angle, where the device is applied to a user side, and the device includes:
the first sending module is used for sending a first acquisition request aiming at a target video under a first visual angle to a server;
a first receiving module, configured to receive a first video set of target videos under multiple views returned by the server, where the multiple views include the first view;
a playing module, configured to play a target video in the first video set at the first view angle;
and the first switching module is used for switching the currently played target video under the first visual angle to the target video under the second visual angle on the basis of the first video set when a visual angle switching instruction for switching the first visual angle to the second visual angle is received.
In a fourth aspect of the present application, there is provided a device for switching a viewing angle, applied to a server, and the device includes:
the third receiving module is used for receiving a first acquisition request aiming at the target video under the first visual angle, which is sent by the user side;
a first obtaining module, configured to obtain, based on the first obtaining request, a target video at multiple view angles to obtain a first video set, where the multiple view angles include the first view angle;
a third sending module, configured to send the first video set to the user side, so that the user side plays the target video in the first view in the first video set after receiving the first video set, and when receiving a view switching instruction for switching the first view to a second view, switches the currently played target video in the first view to the target video in the second view based on the first video set.
In a fifth aspect of the present application, there is further provided a user end, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the steps of any one of the above view switching methods when executing a program stored in the memory.
In a sixth aspect of the present application, there is also provided a server, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the steps of any one of the above view switching methods when executing a program stored in the memory.
In a seventh aspect of the present application, there is further provided a computer-readable storage medium in which a computer program is stored; when executed by a processor, the computer program implements the steps of any one of the above view switching methods.
In an eighth aspect of the present application, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any one of the above view switching methods.
According to the technical scheme provided by the embodiment of the application, when the client requests the server to acquire the target video at the first visual angle, the server sends the first video set comprising the target videos at the multiple visual angles to the client, namely when the server sends the target video at the first visual angle to the client, the server also sends the target videos at other visual angles to the client, so that when the client receives a visual angle switching instruction for switching the first visual angle into the second visual angle, the client can directly play the target video at the second visual angle according to the received first video set. In the visual angle switching process of the target video, the user side does not need to request the server for acquiring the target video under the second visual angle again, the acquisition time of the target video under the second visual angle is effectively shortened, the time required by the visual angle switching process is shortened, and the probability of the pause phenomenon is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a schematic diagram of a live scene;
fig. 2 is a first flowchart illustrating a method for switching a viewing angle according to an embodiment of the present disclosure;
fig. 3 is a second flowchart illustrating a method for switching a viewing angle according to an embodiment of the present disclosure;
fig. 4 is a third flowchart illustrating a method for switching a viewing angle according to an embodiment of the present disclosure;
fig. 5 is a fourth flowchart illustrating a method for switching a viewing angle according to an embodiment of the present disclosure;
fig. 6 is a fifth flowchart illustrating a method for switching a viewing angle according to an embodiment of the present disclosure;
fig. 7 is a sixth flowchart illustrating a method for switching a viewing angle according to an embodiment of the present disclosure;
fig. 8 is a seventh flowchart illustrating a method for switching a viewing angle according to an embodiment of the present disclosure;
fig. 9 is an eighth flowchart illustrating a method for switching a viewing angle according to an embodiment of the present application;
fig. 10 is a signaling diagram of a view switching process according to an embodiment of the present application;
fig. 11 is a first structural diagram of a viewing angle switching device according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of a second structure of a viewing angle switching apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a user terminal according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
As shown in fig. 1, fig. 1 is a schematic diagram of a certain live scene. In the live scene shown in fig. 1, 7 cameras are arranged around the stage, i.e., the cameras 1 to 7 in fig. 1.
To enable the user to switch the viewing angle of the played video, a large number of cameras need to be deployed in the live scene to shoot the stage. The following takes the process of switching the viewing angle of the video watched by the user as an example.
Now, assume that the video currently viewed by the user is the video captured by the camera 4 in fig. 1. At some point, the user wants to watch the video from the viewing angle of the camera 2. At this time, the user may trigger the user end to send a video acquisition request for acquiring the video stream from the viewing angle of the camera 2 to the server. After receiving the video acquisition request, the server may acquire a video stream captured by the camera 2 and send the acquired video stream to the user side, so that the user can view the video at the viewing angle of the camera 2.
In the above view switching process, the user side needs to re-acquire the video stream corresponding to the switched-to view angle, which increases the time required by the switching process and may cause a pause phenomenon.
In order to solve the problems in the related art, embodiments of the present application provide a method for switching a viewing angle. As shown in fig. 2, fig. 2 is a first flowchart of a method for switching a viewing angle according to an embodiment of the present disclosure. The method is applied to the user side and specifically comprises the following steps.
Step S201, a first acquisition request for a target video at a first view angle is sent to a server.
Step S202, a first video set of target videos under multiple views returned by the server is received, and the multiple views comprise the first view.
Step S203, a target video in the first video set under the first view angle is played.
Step S204, when receiving a view switching instruction for switching the first view to the second view, switching the currently played target video at the first view to the target video at the second view based on the first video set.
Through the method shown in fig. 2, when the client requests the server to acquire the target video at the first view angle, the server sends the first video set including the target videos at the multiple view angles to the client, that is, when the server sends the target video at the first view angle to the client, the server also sends the target videos at other view angles to the client, so that when the client receives a view angle switching instruction for switching the first view angle to the second view angle, the client can directly play the target video at the second view angle according to the received first video set. In the visual angle switching process of the target video, the user side does not need to request the server for acquiring the target video under the second visual angle again, the acquisition time of the target video under the second visual angle is effectively shortened, the time required by the visual angle switching process is shortened, and the probability of the pause phenomenon is reduced.
The embodiments of the present application are illustrated below with specific examples.
For the above step S201, a first acquisition request for the target video in the first view angle is sent to the server.
In this step, the user may trigger the viewing operation of the target video at the first viewing angle by using the user side. At this time, the user side sends a video acquisition request (denoted as a first acquisition request) for the target video at the first view angle to the server.
For ease of understanding, an anchor live-streaming scene is taken as an example. When a user opens the live broadcast room of an anchor on a user side such as a mobile device or a notebook computer, for example by a click operation, the user side sends a first acquisition request for the video stream corresponding to that live broadcast room to the stream-pushing server of the live video.
In an optional embodiment, the first acquisition request may include a video identifier of the target video. Taking the case where the target video is an anchor's live video as an example, the first acquisition request may include the live broadcast room number corresponding to the anchor.
The video identifier of the target video may also be represented by other information, such as the title or name of the video, in addition to the live broadcast room number. Here, the video identifier of the target video is not particularly limited.
In another optional embodiment, the first obtaining request may include a video identifier of the target video and a view identifier of the first view.
The view identifier of the first view can be expressed in various ways. Still taking fig. 1 as an example, the view identifier of the view angle corresponding to the camera 4 can be represented by the included angle defined by the center point of the stage and the center point of the camera lens. Here, the representation of the view identifier of the first view is not particularly limited.
In this embodiment of the application, the first view angle may be a default view angle corresponding to the target video. Still taking the anchor live-streaming scene as an example, the first viewing angle may be the viewing angle of the camera whose shooting direction directly faces the anchor. Here, the first viewing angle is not particularly limited.
In the embodiment of the present application, the user side and the server may also be changed according to different actual application scenarios. Taking the user side as an example, when the target video is a live tv video, the user side may be a tv or a mobile phone. When the target video is a live webcast video, the user side can be a mobile phone or a computer. Here, the user side and the server are not particularly limited.
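For illustration only, the following is a minimal sketch of how a user side might issue such a first acquisition request carrying a video identifier and a view identifier. The field names, the endpoint path and the use of the requests library are assumptions made for the sketch, not part of the application.

```python
import requests  # assumed HTTP transport; the application does not prescribe a protocol


def send_first_acquisition_request(server_url: str, video_id: str, view_id: str) -> dict:
    """Send a first acquisition request for the target video at the first view angle.

    video_id: e.g. a live broadcast room number, title or name of the target video.
    view_id:  an identifier of the first (default) view angle.
    """
    payload = {"video_id": video_id, "view_id": view_id}
    response = requests.post(f"{server_url}/first_acquisition", json=payload, timeout=5)
    response.raise_for_status()
    # Hypothetical response: a first video set mapping each returned view to a stream URL.
    return response.json()
```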
With respect to step S202, a first video set of target videos under multiple views returned by the server is received, where the multiple views include the first view.
In this step, in an actual shooting scene, corresponding cameras may be set at a plurality of different viewing angles, so as to shoot target videos at the plurality of different viewing angles (for convenience of distinction, recorded as target videos at a first number of different viewing angles). And the server stores the shot target videos under a plurality of visual angles. After receiving the first acquisition request, the server acquires target videos at multiple viewing angles (for convenience of distinction, recorded as target videos at a second number of different viewing angles) from target videos stored in the server as a first video set. The server may send the first set of videos to the user side. At this time, the user end receives the first video set sent by the server.
In the embodiment of the application, the server stores target videos of different viewing angles in a plurality of shooting scenes. For convenience of understanding, the following description will be given by taking target videos at multiple viewing angles in a certain shooting scene as an example, and is not intended to be limiting. The target video at the first number of viewing angles may be a plurality of video streams captured by different cameras at different capturing viewing angles.
For ease of understanding, the live scene shown in fig. 1 is still used as an example. The live scene is provided with 7 cameras, namely, the camera 1-the camera 7, and the shooting visual angle of each camera shooting the stage is different. Therefore, when the camera 1-the camera 7 take a picture, 7 video streams at different viewing angles are obtained. The server can acquire video streams shot by the cameras 1 to 7 correspondingly and store the acquired video streams.
In an optional embodiment, when acquiring the first video set, the server may use the target videos at all the viewing angles stored by the server as the target videos included in the first video set. Still taking the above fig. 1 as an example, the server may use the video streams corresponding to the cameras 1 to 7 as the video streams included in the first video set. In this case, the second number is equal to the first number.
In another optional embodiment, when acquiring the first video set, the server may take the target video at each view angle within a preset range centered on the first view angle as the target videos included in the first video set. Still taking the live broadcast scene shown in fig. 1 as an example, if the first view angle is the shooting view angle corresponding to the camera 4, the server may use the video streams of the cameras 2 to 6 as the video streams included in the first video set. In this case, the second number is smaller than the first number.
In yet another optional embodiment, when the server acquires the first video set, while the target video at the first view angle is used as the target video included in the first video set, the server may further use the target video at the remaining other view angles as the target video included in the first video set according to a preset selection rule, for example, an equal interval selection or a random selection mode. In this case, the second number is smaller than the first number.
Still taking the live broadcast scene shown in fig. 1 as an example, if the first view angle is the shooting view angle corresponding to the camera 4, the server may determine the video stream of the camera 4 as the video stream included in the first video set, and for the video streams corresponding to the cameras 1 to 3 and the cameras 5 to 7, the server may select the video streams corresponding to the cameras 1, 3, 5, and 7 as the video streams included in the first video set.
Here, the selection manner of the target video in the plurality of views included in the first video set is not particularly limited.
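Purely for illustration, the three selection strategies just described might look as follows in code. Ordering the views by camera index and the helper name and parameters are assumptions of the sketch.

```python
def select_views(all_views, first_view, strategy="range", radius=2, step=2):
    """Pick which views' target videos go into the first video set.

    all_views:  list of view identifiers ordered by camera position, e.g. [1, 2, ..., 7].
    first_view: the view requested by the user side; it is always included.
    """
    if strategy == "all":        # every stored view; second number equals first number
        return list(all_views)

    if strategy == "range":      # views within a preset range centered on the first view
        i = all_views.index(first_view)
        return all_views[max(0, i - radius): i + radius + 1]

    if strategy == "interval":   # first view plus remaining views taken at equal intervals
        rest = [v for v in all_views if v != first_view]
        return [first_view] + rest[::step]

    raise ValueError(f"unknown strategy: {strategy}")


# Example with the scene of fig. 1 (cameras 1-7) and camera 4 as the first view:
# select_views([1, 2, 3, 4, 5, 6, 7], 4, "range", radius=2) -> [2, 3, 4, 5, 6]
```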
In an optional embodiment, a resolution of the target video at the first view in the first video set is higher than a resolution of the target video at a third view, where the third view is any view of the plurality of views except the first view. I.e. the resolution of the target video at the first view in the first set of videos is highest.
For ease of understanding, take as an example that the resolution of the video captured by the camera is w × h × 3, where w represents the width of each video frame, h represents the height of each video frame, and 3 represents the three Red Green Blue (RGB) color channels.
In the first video set, the resolution of the target video in the first view may be w × h × 3, and the resolution of the target video in each third view may be lower than w × h × 3, such as α · w × α · h × 3, where α is a preset coefficient, and 0 < α < 1.
In an optional embodiment, the resolution of the target video at each third view angle in the first video set is inversely related to the target distance corresponding to the third view angle, and the target distance corresponding to the third view angle is the distance between the camera at the third view angle and the camera at the first view angle. That is, for the target video at each third view angle in the first video set, when the target distance corresponding to the third view angle is smaller, the resolution of the target video at the third view angle is higher; when the target distance corresponding to the third view angle is larger, the resolution of the target video in the third view angle is lower.
For the determination of the resolution of the target video at each view in the first video set, reference may be made to the following description, which is not specifically illustrated here.
In this embodiment of the application, when the resolution of the target video at the first view angle in the first video set is the highest, the total data size corresponding to each target video in the first video set can be effectively reduced, which effectively reduces network resources required by the server to send the first video set to the user side, and improves the sending efficiency of the first video set.
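To make the resolution rule above concrete, here is a minimal sketch in which the requested view keeps the full w × h resolution and each third view is scaled by a factor α that decreases with the distance between its camera and the first view's camera. The linear mapping from distance to α is an illustrative assumption; the application only requires the negative correlation.

```python
import math


def per_view_resolution(cam_positions, first_view, w, h, alpha_min=0.25):
    """Assign a resolution to the target video of every view in the first video set.

    cam_positions: {view_id: (x, y, z)} positions of the cameras in the scene.
    The first view keeps full resolution w x h; each third view is scaled by a factor
    alpha (0 < alpha < 1) that shrinks as its camera gets farther from the first view's camera.
    """
    def dist(a, b):
        return math.dist(cam_positions[a], cam_positions[b])

    d_max = max(dist(v, first_view) for v in cam_positions if v != first_view)
    resolutions = {first_view: (w, h)}
    for v in cam_positions:
        if v == first_view:
            continue
        # one possible negative correlation: alpha falls linearly from ~1 down to alpha_min
        alpha = 1.0 - (1.0 - alpha_min) * dist(v, first_view) / d_max
        resolutions[v] = (int(alpha * w), int(alpha * h))
    return resolutions
```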
In step S203, the target video in the first view angle in the first video set is played.
In this step, after receiving the first video set, the user side may play the target video at the first view angle in the first video set.
In step S204, when a view switching instruction for switching the first view to the second view is received, the currently played target video in the first view is switched to the target video in the second view based on the first video set.
In this step, in the playing process of the target video at the first viewing angle, the user can perform viewing angle switching on the viewing angle of the viewed video at any time. At this time, the user side receives a view switching instruction, where the view switching instruction includes a view identifier of the second view. The view identifier of the second view is used for indicating that the view of the currently played target video is switched to the second view. I.e. switching the above-mentioned first viewing angle to the second viewing angle. After receiving the view switching instruction, the user side can determine that the view corresponding to the currently played video needs to be switched. At this time, the user side may switch the currently played target video to the target video at the second view angle according to the first video set, that is, switch the target video at the first view angle to the target video at the second view angle.
In this embodiment of the application, the second angle of view may be a real shooting angle of view corresponding to the camera, or may also be a shooting angle of view (i.e., a virtual angle of view) corresponding to the virtual camera.
In an optional embodiment, when the second angle of view is a real shooting angle of view corresponding to the camera, the server stores the target video corresponding to the second angle of view. At this time, the first video set may include the target video at the second view angle, or may not include the target video at the second view angle.
In another optional embodiment, when the second angle of view is a shooting angle of view corresponding to the virtual camera, the target video at the second angle of view will not be included in the target videos at the multiple angles of view stored by the server. At this time, the target video at the second view angle is not included in the first video set.
In two cases where the first video set includes the target video at the second view angle and does not include the target video at the second view angle, the above view angle switching process is different, and specific reference may be made to the following description, which is not repeated herein.
Through the above step S204, after receiving the view switching instruction, the user side can switch views in time according to the target videos included in the first video set. Because the view switching process is performed at the user side, the time required for view switching is effectively shortened; meanwhile, system resources of the server are saved and the stability of the server is ensured.
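A minimal user-side sketch of the dispatch logic in step S204 follows: if the first video set already contains the target video at the second view, playback switches to it from the current time point; otherwise a video at the second view is synthesized from the set (a synthesis sketch appears later in this description). The player object and its current_time/play_from methods, as well as the synthesize_fn callable, are illustrative assumptions.

```python
def switch_view(player, first_video_set, second_view, synthesize_fn):
    """Switch the currently played target video to the second view using only local data.

    first_video_set: {view_id: video} received from the server in step S202.
    synthesize_fn:   callable(first_video_set, second_view) -> video, used when the
                     second view is not in the set (see the synthesis sketch below).
    """
    t = player.current_time()              # time point of the currently played video

    if second_view in first_video_set:     # case of fig. 3 (step S304)
        video = first_video_set[second_view]
    else:                                  # case of fig. 4 (steps S404-S405)
        video = synthesize_fn(first_video_set, second_view)

    player.play_from(video, t)             # continuity: resume at the same time point
```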
In an optional embodiment, when the first video set includes a target video at a second view angle, according to the method shown in fig. 2, an embodiment of the present application further provides a view angle switching method, as shown in fig. 3, and fig. 3 is a second flowchart of the view angle switching method provided in the embodiment of the present application. The method comprises the following steps.
Step S301, a first acquisition request for a target video at a first view angle is sent to a server.
Step S302, a first video set of target videos under multiple visual angles returned by the server is received, and the multiple visual angles comprise the first visual angle.
Step S303, a target video at a first view angle in the first video set is played.
The above steps S301 to S303 are the same as the above steps S201 to S203.
Step S304, when receiving a view switching instruction for switching the first view to the second view, if the first video set includes the target video at the second view, playing the target video at the second view in the first video set based on the currently played time point of the target video at the first view.
In this embodiment of the application, each target video in the first video set is obtained by shooting the same shooting scene from different viewpoints by a corresponding camera, and therefore, the time period corresponding to each target video in the first video set is the same. When the first video set includes the target video at the second view angle, the user side may start to play the target video at the second view angle from the same time point according to the time point at which the target video is played at the first view angle at the current time.
In the view switching process, the target video at the second view angle is played from the time point at which the target video at the first view angle was being played. This effectively ensures the continuity of content between the videos before and after the switch, as well as the continuity and fluency of video playback.
In an optional embodiment, when the first video set does not include the target video at the second view angle, according to the method shown in fig. 2, an embodiment of the present application further provides a view angle switching method. As shown in fig. 4, fig. 4 is a third flowchart illustrating a method for switching a viewing angle according to an embodiment of the present application. The method comprises the following steps.
Step S401, sends a first acquisition request for a target video at a first view angle to a server.
Step S402, a first video set of target videos under multiple visual angles returned by the server is received, and the multiple visual angles comprise the first visual angle.
Step S403, playing the target video in the first view angle in the first video set.
The above-described steps S401 to S403 are the same as the above-described steps S201 to S203.
Step S404, when receiving a view switching instruction for switching the first view to the second view, if the first video set does not include the target video at the second view, synthesizing the target video at the second view based on the target video at each view in the first video set.
In this embodiment of the application, when the second view angle is a virtual view angle, the first video set received by the user side does not include the target video at the second view angle. Or, since the server does not take the target video at the second view angle as the target video in the first video set, the first video set received by the user end will not include the target video at the second view angle. And after receiving the visual angle switching instruction, the user side synthesizes the target video under the second visual angle according to the target video under each visual angle included in the first video set.
In an optional embodiment, when synthesizing the target video at the second view angle based on the target video at each view angle in the first video set, the user terminal may select the target video at the view angle of one or more cameras closest to the second view angle, and synthesize the target video at the second view angle.
For ease of understanding, the above description is given by taking the live scene shown in fig. 1 as an example. Now, assume that the camera 1 in fig. 1 is a virtual camera, and the second view angle is a shooting view angle corresponding to the position of the camera 1. The first video set consists of the videos captured by the cameras 2-7.
In order to switch to the video at the second viewing angle, the user side may synthesize the video at the viewing angle corresponding to the virtual camera by using the video captured by the camera 2.
For the sake of understanding, the video synthesis process is described by taking as an example the synthesis of one frame (image A) of the video captured by the camera 2 into one frame (image B) of the video at the viewing angle corresponding to the virtual camera.
The user side can project image A into the image coordinate system corresponding to the shooting viewing angle of the virtual camera, obtaining a projected image that contains hole regions. The user side can then determine the image information (such as RGB values) of each pixel in a hole region from the image information of the pixels surrounding that region, and fill the holes accordingly to obtain image B. By analogy, the user side can project every frame of the video captured by the camera 2, thereby obtaining the video at the viewing angle corresponding to the virtual camera.
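For concreteness, here is a minimal sketch of the projection-and-hole-filling idea described above. It assumes a per-pixel depth map for image A and known intrinsics and extrinsics for camera 2 and the virtual camera; the depth-based warping and the use of OpenCV inpainting to fill the hole regions are illustrative choices, not the algorithm prescribed by the application.

```python
import cv2
import numpy as np


def warp_to_virtual_view(img_a, depth_a, K_src, K_dst, R, t):
    """Project image A into the virtual camera's image plane, then fill the hole regions.

    img_a:   H x W x 3 uint8 frame from camera 2.
    depth_a: H x W per-pixel depth of img_a (assumed available).
    K_src, K_dst: 3x3 intrinsics of the real and virtual cameras.
    R, t:    rotation / translation from the real camera to the virtual camera.
    Occlusion handling (z-buffering) is omitted to keep the sketch short.
    """
    h, w = depth_a.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3 x N homogeneous pixels

    # back-project to 3-D points in the source camera, then move them into the virtual camera
    pts = np.linalg.inv(K_src) @ pix * depth_a.reshape(1, -1)
    pts = R @ pts + t.reshape(3, 1)
    proj = K_dst @ pts
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)

    img_b = np.zeros_like(img_a)
    hole_mask = np.full((h, w), 255, dtype=np.uint8)   # 255 marks unfilled (hole) pixels
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    img_b[v[inside], u[inside]] = img_a.reshape(-1, 3)[inside]
    hole_mask[v[inside], u[inside]] = 0

    # fill every hole region from the image information of its surrounding pixels
    return cv2.inpaint(img_b, hole_mask, 3, cv2.INPAINT_TELEA)
```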
Step S405, based on the time point of the currently played target video at the first viewing angle, playing the target video at the second viewing angle.
Through the above steps S404 and S405, the user side can synthesize the target video at the second view angle from the target videos at the individual view angles in the first video set, thereby completing the view switching process. This reduces the number of cameras that need to be deployed in the actual shooting scene, and therefore the deployment cost, while still enabling view switching.
In this embodiment, the user side synthesizes the target video at the second view angle from the target videos at the view angles already included in the first video set. The user side therefore does not need to re-acquire the target videos at the view angles needed for the synthesis, so it can synthesize the target video at the second view angle in time and complete the view switching of the currently played target video. On the premise that the view switching is completed successfully, the time required for video synthesis is effectively reduced, so the time required by the view switching process is shortened and the probability of the pause phenomenon during switching is effectively reduced.
In addition, even if the view switching process does experience a pause because video synthesis takes a long time (for example, when the target video at the second view angle is synthesized from all the target videos in the first video set, the large total data amount may make synthesis slow and cause a pause during switching), the target videos at the individual view angles still do not need to be re-acquired. The duration of such a pause in the embodiment of the present application is therefore shorter than the duration of the pause caused in the related art, where the target video at the second view angle is synthesized only after re-acquisition.
In an optional embodiment, if the target video is a live video, each target video in the first video set consists of a foreground image group and a foreground depth image group of the live scene.
In an optional embodiment, when the target video is a live video, according to the method shown in fig. 2, an embodiment of the present application further provides a view switching method. As shown in fig. 5, fig. 5 is a fourth flowchart illustrating a method for switching a viewing angle according to an embodiment of the present application. The method comprises the following steps.
Step S501, a first acquisition request for a target video at a first view angle is sent to a server.
Step S502, receiving a first video set of target videos under multiple views returned by the server, where the multiple views include the first view.
The above steps S501 to S502 are the same as the above steps S201 to S202.
Step S503, a preset background map group and a preset background depth map group are obtained.
In an optional embodiment, the preset background image group and the preset background depth image group are multiple frames of video images obtained by shooting in advance and depth images corresponding to each frame of video image.
The description takes the case where the live video is an anchor's live video as an example. The preset background image group may be a set of images captured by the camera of the shooting scene when the anchor is not live (i.e., the anchor is not in the shooting scene). The preset background depth map group is the set of depth images corresponding to each image in the preset background map group.
The shooting scene corresponding to the preset background image group can be a shooting scene during anchor live broadcasting, or a scene corresponding to any one of the pre-selected shooting places. Here, the shooting scene corresponding to the preset background map group is not particularly limited.
In an alternative embodiment, the preset background depth image group may be obtained by shooting the shooting scene by using a depth camera. The depth camera has the same shooting angle of view as the camera.
In another alternative embodiment, the preset background depth image group may be obtained by processing each image in the preset background image group by using a pre-trained high-precision depth estimation model. The high-precision depth estimation model in the related art can be used in the embodiment of the present application, and the high-precision depth estimation model is not particularly limited.
The preset background map group may include a plurality of background images, and the preset background depth map group may include a plurality of background depth images. The number of the background images included in the preset background image group is the same as the number of the background depth images included in the preset background depth image group. Here, the number of background images included in the preset background map group and the number of background depth images included in the preset background depth map group are not particularly limited.
Step S504, a foreground image group and a foreground depth image group at the first viewing angle in the first video set are obtained.
In this step, the target video at each view angle in the first video set consists of a foreground image group and a foreground depth image group at that view angle, and the user side can acquire the foreground image group and the foreground depth image group at the first view angle from the first video set.
For ease of understanding, video a at the time of live broadcast of a certain anchor is taken as an example. The foreground map group may be a set of foreground images in each video frame of the video a, that is, a set of human body images of a host in each video frame of the video a. The foreground depth map set is a set of depth images corresponding to each foreground image.
In this embodiment of the application, when the foreground image group is obtained from the video shot by the camera, the foreground and the background in each video frame may be segmented by using a real-time segmentation model to obtain the foreground image group. For example, the video is a live video during live broadcast of a certain main broadcast, and a human body image in each video frame in the live video can be obtained by utilizing real-time human body segmentation model segmentation, so that a foreground image group in the live video is obtained.
The obtaining manner of the foreground depth map set may refer to the obtaining manner of the preset background depth map set, and is not specifically described herein.
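As an illustration of the above, here is a minimal sketch of building the foreground image group and foreground depth map group for one view from its captured frames. segment_person and estimate_depth stand for a real-time human segmentation model and a depth estimation model (or a depth camera); both are placeholders, not specific libraries.

```python
def build_foreground_groups(frames, segment_person, estimate_depth):
    """Build the foreground image group and foreground depth map group for one view.

    frames:         list of H x W x 3 video frames from one camera.
    segment_person: callable(frame) -> H x W boolean mask of the anchor (placeholder model).
    estimate_depth: callable(frame) -> H x W depth map (placeholder model or depth camera).
    """
    foreground_group, foreground_depth_group = [], []
    for frame in frames:
        mask = segment_person(frame)          # real-time human segmentation
        fg = frame * mask[..., None]          # keep only the anchor's pixels
        depth = estimate_depth(frame) * mask  # depth restricted to the foreground
        foreground_group.append(fg)
        foreground_depth_group.append(depth)
    return foreground_group, foreground_depth_group
```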
In the embodiment of the present application, the step S503 may be executed before/after the step S504, or may be executed simultaneously with the step S504. Here, the execution order of step S503 and step S504 is not particularly limited.
Step S505, based on the preset background depth map group and the foreground depth map group under the first viewing angle, performing image fusion on the preset background map group and the foreground map group under the first viewing angle to obtain a first fusion video under the first viewing angle.
In an optional embodiment, when performing image fusion on the preset background image group and the foreground image group at the first viewing angle, the user side may fuse a target background image from the preset background image group with the foreground image group at the first viewing angle. The viewing angle corresponding to the target background image may be the first viewing angle or any other viewing angle. Here, the selection of the target background image is not particularly limited.
In an optional embodiment, in order to weaken the visual influence of the background on the user, the user side may apply Gaussian blurring to the target background image and then fuse the blurred target background image with the foreground image group at the first viewing angle.
The image fusion techniques in the related art are all applicable to the embodiment of the present application, and the process of the image fusion is not specifically described here.
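Purely as an illustration (the application does not prescribe a particular fusion technique), here is a minimal depth-based fusion sketch: the target background image is Gaussian-blurred, and a foreground pixel replaces the background pixel wherever the foreground depth indicates a valid point that is closer than the background. The compositing rule and the OpenCV/numpy calls are assumptions of the sketch.

```python
import cv2
import numpy as np


def fuse_frame(background, background_depth, foreground, foreground_depth, blur_ksize=21):
    """Fuse one background frame and one foreground frame using their depth maps."""
    # weaken the visual influence of the background on the user
    bg = cv2.GaussianBlur(background, (blur_ksize, blur_ksize), 0)

    # foreground pixels win where the foreground exists and is closer than the background
    fg_valid = foreground_depth > 0
    closer = foreground_depth < background_depth
    mask = (fg_valid & closer)[..., None]

    return np.where(mask, foreground, bg)


def fuse_video(background_group, background_depth_group, foreground_group, foreground_depth_group):
    """Fuse the groups frame by frame to obtain a fused video (list of frames)."""
    return [
        fuse_frame(b, bd, f, fd)
        for b, bd, f, fd in zip(background_group, background_depth_group,
                                foreground_group, foreground_depth_group)
    ]
```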
Step S506, the first fusion video is played.
In this step, the user side may switch the currently played target video at the first view angle to the first merged video, that is, play the first merged video.
The above steps S503 to S506 are refinements of the above step S203.
Step S507, when receiving a view switching instruction for switching the first view to the second view, acquiring a preset background image group and a preset background depth image group.
Step S508, a foreground image group and a foreground depth image group at the second viewing angle are obtained based on the first video set.
In an optional embodiment, when the first video set includes the target video at the second view angle, the user side may obtain the foreground image group and the foreground depth image group at the second view angle from the first video set. The specific acquisition manner may refer to the acquisition manner of the foreground image group and the foreground depth image group at the first viewing angle, and is not specifically described here.
In another optional embodiment, when the first video set does not include the target video at the second view angle, the user end may synthesize the foreground image group and the foreground depth image group at the second view angle according to the foreground image group and the foreground depth image group at each view angle included in the first video set. The specific composition method may refer to the composition method of the target video in the second view angle, and is not specifically described here.
In the embodiment of the present application, the step S507 may be executed before/after the step S508, or may be executed simultaneously with the step S508. Here, the execution order of step S507 and step S508 is not particularly limited.
Step S509, based on the preset background depth map group and the foreground depth map group at the second viewing angle, performing image fusion on the preset background image group and the foreground image group at the second viewing angle to obtain a second fusion video at the second viewing angle.
In the embodiment shown in fig. 5, after receiving the view switching instruction, the user side re-acquires the preset background image group and the preset background depth image group, that is, performs the above step S507. Alternatively, the user side may skip step S507, that is, after receiving the view switching instruction, directly perform step S508. In this case, when performing the image fusion in step S509, the fusion may be performed with the preset background image group and the preset background depth image group acquired in step S503.
Step S510, the second fusion video is played.
The execution process of the above step S507 to step S510 is similar to the execution process of the above step S503 to step S506, and the execution process of the above step S507 to step S510 is not specifically described here.
The above steps S507 to S510 are refinements of the above step S204.
In an alternative embodiment, according to the method shown in fig. 2, an embodiment of the present application further provides a method for switching a viewing angle. As shown in fig. 6, fig. 6 is a fifth flowchart illustrating a method for switching a viewing angle according to an embodiment of the present application. The method adds the following steps, namely step S205-step S207.
Step S205, when the currently played target video at the first view angle is switched to the target video at the second view angle included in the first video set, sends a second acquisition request for the target video at the second view angle to the server.
The second acquisition request includes a video identifier of the target video and a view identifier of the second view.
In this embodiment, the user side may first perform step S204 and then perform step S205, or may perform step S204 and step S205 at the same time. Here, the execution order of step S204 and step S205 is not particularly limited.
Step S206, receiving a second video set of the target video under the multiple visual angles returned by the server; the resolution of the target video at the second view in the second video set is higher than the resolution of the target video at the first view in the first video set.
In an optional embodiment, a resolution of the target video at the second view in the second video set is higher than a resolution of the target video at a fourth view, where the fourth view is any view of the plurality of views except the second view. I.e., the resolution of the target video at the second view in the second video set is highest.
In an optional embodiment, the resolution of the target video at each fourth view angle in the second video set is inversely related to the target distance corresponding to the fourth view angle, and the target distance corresponding to the fourth view angle is a distance between the camera at the fourth view angle and the camera at the second view angle. That is, for the target video at each fourth view angle in the second video set, when the target distance corresponding to the fourth view angle is smaller, the resolution of the target video at the fourth view angle is higher; when the target distance corresponding to the fourth view angle is larger, the resolution of the target video in the fourth view angle is lower.
Step S207, the currently played target video at the second view angle is switched to the target video at the second view angle in the second video set.
In this step, after receiving the second video set, the user side may switch the currently played target video to the target video in the second view angle in the second video set. That is, the target video at the second view angle in the second video set is played.
In step S207, the manner of switching the currently played target video at the second view angle to the target video at the second view angle in the second video set may refer to the manner of switching the target video at the first view angle to the target video at the second view angle in the first video set, which is not described in detail here.
Through the above steps S205 to S207, when the view switching instruction for switching the first view angle to the second view angle is received, the user side directly executes step S204, so that the currently played target video is the target video at the second view angle in the first video set, whose resolution is lower than that of the target video at the first view angle in the first video set. To guarantee the resolution of the target video watched by the user, the user side re-acquires the target video at the second view angle with high resolution and switches to the re-acquired video. In this way the view is switched to the second view angle in time, while the resolution of the currently played target video at the second view angle is effectively guaranteed and the visual effect is improved.
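The two-stage behaviour described above (switch immediately from the locally cached first video set, then upgrade the resolution once the second video set arrives) could be organised on the user side roughly as follows. The player and server interfaces are assumed placeholders, not APIs defined by the patent.

```python
import threading

class ViewSwitchingClient:
    """Illustrative client-side flow for steps S204-S207 (names are assumptions)."""

    def __init__(self, player, server, video_id, first_video_set):
        self.player = player                  # assumed: play(video, start_time), current_time()
        self.server = server                  # assumed: request_video_set(video_id, view_id)
        self.video_id = video_id
        self.first_video_set = first_video_set  # view_id -> target video (mixed resolutions)

    def on_view_switch(self, second_view_id):
        t = self.player.current_time()
        # Step S204: switch at once from the locally cached first video set,
        # so the view change is not delayed by a network round trip.
        self.player.play(self.first_video_set[second_view_id], start_time=t)
        # Steps S205-S207: upgrade the resolution in the background.
        threading.Thread(
            target=self._upgrade_resolution, args=(second_view_id,), daemon=True
        ).start()

    def _upgrade_resolution(self, second_view_id):
        # Steps S205/S206: ask the server for the second video set, in which
        # the second view angle has the highest resolution.
        second_video_set = self.server.request_video_set(self.video_id, second_view_id)
        # Step S207: swap to the high-resolution copy at the current time point.
        t = self.player.current_time()
        self.player.play(second_video_set[second_view_id], start_time=t)
```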
The method shown in fig. 6 is described by taking as an example the case where the first video set includes the target video at the second view angle and the high-resolution target video at the second view angle is re-acquired. In addition, when the first video set does not include the target video at the second view angle, the user side may also re-acquire a high-resolution target video for the second view angle.
For example, the second view is a virtual view, and the target video at the second view is not included in the first video set. The user side may also send the second obtaining request to the server. After receiving the second acquisition request, the server may send, to the user side, a third video set including target videos from multiple views. In the third video set, the resolution of the target video at the real shooting view closest to the second view is the highest.
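Choosing the real shooting view closest to a virtual second view reduces to a nearest-camera search. A sketch under the assumption that the camera centre of every view is known is shown below; the function name and the example data are illustrative.

```python
import numpy as np

def nearest_real_view(virtual_center, real_view_centers):
    """Return the id of the real shooting view whose camera centre is closest
    to the camera centre of the virtual second view.

    virtual_center:    length-3 sequence, position of the virtual camera.
    real_view_centers: dict mapping view_id -> length-3 sequence.
    """
    v = np.asarray(virtual_center, dtype=float)
    return min(
        real_view_centers,
        key=lambda vid: np.linalg.norm(np.asarray(real_view_centers[vid], dtype=float) - v),
    )

# Example: the server can give the highest resolution to this view
# when assembling the third video set.
centers = {"cam_3": [0.5, 0.0, 2.0], "cam_4": [0.0, 0.0, 2.0], "cam_5": [-0.5, 0.0, 2.0]}
best = nearest_real_view([0.2, 0.0, 2.0], centers)   # -> "cam_4"
```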
Based on the same inventive concept, according to the above-mentioned view switching method provided by the embodiment of the present application, the embodiment of the present application further provides a view switching method. As shown in fig. 7, fig. 7 is a sixth flowchart illustrating a method for switching a viewing angle according to an embodiment of the present application. The method is applied to the server and specifically comprises the following steps.
Step S701 is to receive a first acquisition request for a target video at a first viewing angle sent by a user side.
In this step, when the user side sends a first acquisition request for the target video at the first view angle to the server, the server receives the first acquisition request.
Step S702, based on the first obtaining request, obtain a target video under multiple viewing angles to obtain a first video set, where the multiple viewing angles include the first viewing angle.
In the embodiment of the application, target videos shot by cameras in the same shooting scene at different shooting visual angles are stored in the server. That is, the server stores the target video in a plurality of viewing angles. After the server receives the first obtaining request, the server may obtain target videos at multiple viewing angles to obtain a first video set.
Step S703 is to send the first video set to the user side, so that the user side plays the target video at the first view angle in the first video set after receiving the first video set, and switches the currently played target video at the first view angle to the target video at the second view angle based on the first video set when receiving the view angle switching instruction for switching the first view angle to the second view angle.
In this step, after obtaining the first video set, the server may send the first video set to the user side. After receiving the first video set sent by the server, the user side can play the target video at the first view angle in the first video set. When the user side receives a view switching instruction for switching the first view angle to the second view angle, the user side may switch the currently played target video at the first view angle to the target video at the second view angle based on the received first video set.
Through the method shown in fig. 7, when the client requests the server to acquire the target video at the first view angle, the server sends the first video set including the target videos at the multiple view angles to the client, that is, when the server sends the target video at the first view angle to the client, the server also sends the target videos at other view angles to the client, so that when the client receives a view angle switching instruction for switching the first view angle to the second view angle, the client can directly play the target video at the second view angle according to the received first video set. In the visual angle switching process of the target video, the user side does not need to request the server for acquiring the target video under the second visual angle again, the acquisition time of the target video under the second visual angle is effectively shortened, the time required by the visual angle switching process is shortened, and the probability of the pause phenomenon is reduced.
In an optional embodiment, if the target video is a live video, each target video in the first video set is a foreground image group and a foreground depth image group in a live scene.
In this embodiment of the application, when the target video is a live video, the first video set includes only the foreground image group and the foreground depth image group. When the user side receives the first video set and plays a target video in the first video set, the user side can perform image fusion on the preset background image group and the foreground image group of the target video based on the preset background depth image group and the foreground depth image group of the target video, to obtain a fusion video, and play the fusion video. This reduces the data volume of the first video set and improves the transmission efficiency of the first video set, while increasing the selectivity of the played target video.
In an optional embodiment, a resolution of the target video at the first view in the first video set is higher than a resolution of the target video at a third view, where the third view is any view of the plurality of views except the first view.
In an optional embodiment, when the resolution of the target video at the first view angle in the first video set is higher than the resolution of the target video at the third view angle, according to the method shown in fig. 7, an embodiment of the present application further provides a view angle switching method. As shown in fig. 8, fig. 8 is a seventh flowchart illustrating a method for switching a viewing angle according to an embodiment of the present application. The method specifically comprises the following steps.
Step S801, receiving a first acquisition request for a target video at a first viewing angle sent by a user side.
Step S801 is the same as step S701.
Step S802, obtaining, based on the first obtaining request, the target video at the first view angle and the target videos at a plurality of third view angles.
Step S803, adjusting the resolution of the target video at each third viewing angle to obtain an adjusted target video at the third viewing angle, and taking the target video at the first viewing angle and the adjusted target video at the third viewing angle as a first video set.
In an optional embodiment, for the target video at each third viewing angle, the server may adjust the resolution of the target video at the third viewing angle according to the target distance corresponding to the third viewing angle, so as to obtain the adjusted target video at the third viewing angle.
In an optional embodiment, the resolution of the target video at each third view angle in the first video set is inversely related to the target distance corresponding to the third view angle, and the target distance corresponding to the third view angle is the distance between the camera at the third view angle and the camera at the first view angle.
For ease of understanding, the description is given with reference to the live broadcast scene shown in fig. 1, still taking w × h × 3 as the resolution of the video captured by each camera.
Assume that the first view angle is the shooting view angle corresponding to camera 4, and let di denote the distance from camera i to camera 4, that is, the target distance corresponding to the shooting view angle of camera i.
The server may calculate the target distance for each camera in fig. 1 separately, for example: d1 = d7 = 3d, d2 = d6 = 2d, d3 = d5 = d, d4 = 0.
The server can determine the resolution of the target video under the shooting view angle corresponding to each camera according to the target distance corresponding to each camera.
For example, the resolution of the target video at the shooting view angle of camera 4 is α0·w × α0·h × 3 = w × h × 3; the resolution of the target video at the shooting view angles of cameras 3 and 5 is α1·w × α1·h × 3 = α·w × α·h × 3; the resolution of the target video at the shooting view angles of cameras 2 and 6 is α2·w × α2·h × 3; and the resolution of the target video at the shooting view angles of cameras 1 and 7 is α3·w × α3·h × 3 (so that α0 = 1 and α1 = α).
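A sketch of this distance-based resolution planning is given below. The rank-based α^k scaling is an assumption consistent with the α0–α3 example above; the function name and parameters are illustrative.

```python
def plan_resolutions(distances, requested_cam, base_w, base_h, alpha=0.5):
    """Assign a resolution to each camera view based on its distance to the
    camera of the requested (first) view angle.

    distances: dict cam_id -> distance to the requested camera (requested_cam -> 0).
    Returns    dict cam_id -> (width, height); the requested view keeps the full
               base_w x base_h resolution, and each step farther away scales both
               dimensions by alpha (0 < alpha < 1).
    """
    assert distances[requested_cam] == 0, "the requested view has distance 0 to itself"
    # Rank the distinct distances: 0 -> rank 0, next smallest -> rank 1, ...
    ranks = {dist: k for k, dist in enumerate(sorted(set(distances.values())))}
    plan = {}
    for cam, dist in distances.items():
        scale = alpha ** ranks[dist]
        plan[cam] = (max(1, round(base_w * scale)), max(1, round(base_h * scale)))
    return plan

# Distances from the example: d4 = 0, d3 = d5 = d, d2 = d6 = 2d, d1 = d7 = 3d.
d = 1.0
dists = {1: 3 * d, 2: 2 * d, 3: d, 4: 0.0, 5: d, 6: 2 * d, 7: 3 * d}
print(plan_resolutions(dists, requested_cam=4, base_w=1920, base_h=1080))
```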
In this embodiment, the target distance may be calculated according to camera external parameters of two cameras.
The camera external parameters include positions and directions in a world coordinate system, and can be expressed as a camera rotation matrix and a camera displacement matrix.
The camera rotation matrix R_cam can be expressed as a 3 × 3 matrix describing the direction of the camera in the world coordinate system.
The camera displacement matrix t_cam can be expressed as a 3 × 1 vector (t_X, t_Y, t_Z) describing the position of the camera in the world coordinate system.
The camera external parameters of the camera can then be expressed as the external parameter matrix Extrinsics_cam = [R_cam | t_cam].
Here, cam denotes a camera, R denotes the direction in the world coordinate system, t denotes the position in the world coordinate system, and X, Y and Z denote the X-axis, Y-axis and Z-axis directions in the world coordinate system, respectively.
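A sketch of computing the target distance from the external parameters of two cameras is given below. It follows the convention stated above, in which t is directly the camera position in the world coordinate system, and also notes the common world-to-camera alternative, where the camera centre is −Rᵀ·t; all names are illustrative.

```python
import numpy as np

def camera_center(R, t, t_is_world_position=True):
    """Return the camera centre in world coordinates.

    Following the description above, t is taken directly as the camera's
    position in the world coordinate system. If instead the extrinsics follow
    the world-to-camera convention (x_cam = R @ x_world + t), the centre is
    -R.T @ t; pass t_is_world_position=False in that case.
    """
    t = np.asarray(t, dtype=float).reshape(3)
    if t_is_world_position:
        return t
    return -np.asarray(R, dtype=float).T @ t

def target_distance(R_i, t_i, R_j, t_j):
    """Distance between the cameras of two views, i.e. the target distance."""
    return float(np.linalg.norm(camera_center(R_i, t_i) - camera_center(R_j, t_j)))

# Example with two cameras 0.5 m apart along the X axis.
I = np.eye(3)
print(target_distance(I, [0.0, 0.0, 0.0], I, [0.5, 0.0, 0.0]))   # 0.5
```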
The above steps S802 to S803 are refinements of the above step S702.
Through the steps S802 to S803, the resolution of the target video at the first view angle in the first video set is the highest, which effectively reduces the total data amount of the target videos in the first video set, reduces the network resources required for the server to send the first video set to the user side, and improves the sending efficiency of the first video set.
Step S804, sending the first video set to the user side, so that the user side plays the target video at the first view angle in the first video set after receiving the first video set, and switches the currently played target video at the first view angle to the target video at the second view angle based on the first video set when receiving the view angle switching instruction for switching the first view angle to the second view angle.
Step S804 is the same as step S703.
In an alternative embodiment, according to the method shown in fig. 7, an embodiment of the present application further provides a method for switching a viewing angle. As shown in fig. 9, fig. 9 is an eighth flowchart illustrating a method for switching a viewing angle according to an embodiment of the present application. Specifically, the following steps, i.e., step S704 to step S706, are added.
Step S704, a second obtaining request for the target video at the second view angle sent by the user end is received, where the second obtaining request is sent when the user end receives the view angle switching instruction and the first video set includes the target video at the second view angle.
In this step, when the user side receives the view switching instruction and switches the currently played target video at the first view angle to the target video at the second view angle included in the first video set, the user side may send a second acquisition request for the target video at the second view angle to the server, in order to improve the resolution of the played target video at the second view angle and improve the visual effect for the user.
Step S705, acquiring target videos under multiple visual angles based on a second acquisition request to obtain a second video set; the resolution of the target video at the second view in the second video set is higher than the resolution of the target video at the first view in the first video set.
In this step, after receiving the second acquisition request, the server may acquire the target video at the second view angle and the target videos at the multiple fourth view angles, and adjust the resolution of the target video at each fourth view angle to obtain the adjusted target video at the fourth view angle. And the server takes the target video under the second visual angle and the adjusted target video under the fourth visual angle as target videos included in the second video set.
The adjustment method of the resolution of the target video in the fourth view angle can refer to the adjustment method of the resolution of the target video in the third view angle, and is not specifically described here.
Step S706, sending the second video set to the user side, so that the user side switches the currently played target video at the second view angle to the target video at the second view angle in the second video set.
Through the steps S704 to S706, the server sends the target video at the second viewing angle with high resolution to the user side according to the received second acquisition request, so that the target video at the second viewing angle with lower resolution that is currently played can be switched to the target video at the second viewing angle with high resolution, the resolution of the target video that is currently played is improved, and the visual effect is improved.
For ease of understanding, the case where the target video is a live video of a certain anchor is described below with reference to fig. 10. Fig. 10 is a signaling diagram of a view switching process according to an embodiment of the present application.
Step S1001, the anchor terminal obtains an image of a first shooting scene to obtain a background picture group.
The anchor is not included in the first shooting scene. The anchor terminal is an electronic device used when the anchor performs live broadcasting.
Step S1002, the anchor sends the background group to the server.
Step S1003, after receiving the background image group, the server obtains a background depth image corresponding to each background image in the background image group by using a pre-trained high-precision depth estimation model, so as to obtain the background depth image group.
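The patent does not name the depth estimation model used in step S1003, so the sketch below only wires a placeholder model into the per-image loop; the loader returns a dummy function and would be replaced by a real pre-trained monocular depth estimator.

```python
import numpy as np

def load_depth_model():
    """Placeholder for loading the pre-trained high-precision depth estimation model.

    The patent does not name a model; any monocular depth estimator mapping an
    H x W x 3 image to an H x W depth map could be plugged in here.
    """
    return lambda image: np.zeros(image.shape[:2], dtype=np.float32)  # dummy output

def build_background_depth_group(background_group):
    """Step S1003: estimate one background depth map per background image."""
    model = load_depth_model()
    return [model(image) for image in background_group]
```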
Step S1004, the server sends the background image group and the background depth image group to the user side.
In step S1005, the user end stores the received background image group and background depth image group.
The above-described steps S1001 to S1005 are performed when the anchor is not live. The background image group and the background depth image group may also be pre-stored in the user side.
Step S1006, the anchor terminal acquires video streams under multiple viewing angles when the anchor performs live broadcasting, and uses the video streams as target videos under multiple viewing angles.
The target video is a foreground image group and a foreground depth image group in a live broadcast scene.
Step S1007, the anchor terminal sends the acquired target videos under multiple viewing angles to the server.
In step S1008, the server stores the received target videos at the multiple viewing angles.
In step S1009, the user side sends a first acquisition request for the target video under the first view angle to the server.
Step S1010, after receiving the first obtaining request, the server obtains the stored target videos at the multiple viewing angles.
In step S1011, the server adjusts the resolution of the target video at each third viewing angle to obtain an adjusted target video at the third viewing angle, and uses the target video at the first viewing angle and the adjusted target video at the third viewing angle as a first video set.
The third viewing angle is any one of the plurality of viewing angles except the first viewing angle.
The resolution of the target video at each third view in the first video set is inversely related to the target distance corresponding to the third view.
Step S1012, the server sends the first video set to the user terminal.
In step S1013, after receiving the first video set, the user terminal plays the target video in the first video set at the first view angle.
In this step, the user side performs image fusion on the background image group stored therein and the foreground image group under the first view angle included in the first video set based on the background depth image group stored therein and the foreground depth image group under the first view angle included in the first video set, so as to obtain a fusion video under the first view angle; and playing the fused video.
In step S1014, when receiving the view switching instruction for switching the first view to the second view, the user side switches the currently played target video at the first view to the target video at the second view based on the first video set.
In step S1015, when the currently played target video at the first view angle is switched to the target video at the second view angle included in the first video set, the user side sends a second acquisition request for the target video at the second view angle to the server.
In step S1016, the server obtains target videos at multiple viewing angles based on the received second obtaining request, so as to obtain a second video set.
The resolution of the target video at each fourth view in the second video set is negatively correlated with the target distance corresponding to the fourth view, and the target distance corresponding to the fourth view is the distance between the camera at the fourth view and the camera at the second view. The fourth viewing angle is any one of the plurality of viewing angles other than the second viewing angle.
Step S1017, the server sends the second video set to the user side.
In step S1018, after receiving the second video set, the user side switches the currently played target video at the second view angle to the target video at the second view angle in the second video set.
Based on the same inventive concept, according to the above-mentioned viewing angle switching method provided by the embodiment of the present application, the embodiment of the present application further provides a viewing angle switching device. As shown in fig. 11, fig. 11 is a first structural schematic diagram of a viewing angle switching device according to an embodiment of the present application. The device is applied to a user side and specifically comprises the following modules.
A first sending module 1101, configured to send a first obtaining request for a target video at a first view angle to a server;
a first receiving module 1102, configured to receive a first video set of target videos under multiple views returned by a server, where the multiple views include a first view;
a playing module 1103, configured to play a target video in a first view of a first video set;
a first switching module 1104, configured to, when a view switching instruction for switching a first view to a second view is received, switch a currently played target video at the first view to a target video at the second view based on the first video set.
Optionally, the resolution of the target video at the first view in the first video set is higher than the resolution of the target video at a third view, and the third view is any view except the first view in the multiple views.
Optionally, the resolution of the target video at each third view angle in the first video set is negatively correlated with the target distance corresponding to the third view angle, and the target distance corresponding to the third view angle is the distance between the camera at the third view angle and the camera at the first view angle.
Optionally, the viewing angle switching device may further include:
the second sending module is used for sending a second acquisition request aiming at the target video under the second visual angle to the server when the target video under the first visual angle played currently is switched to the target video under the second visual angle included in the first video set;
the second receiving module is used for receiving a second video set of the target video under the plurality of visual angles, returned by the server; the resolution ratio of the target video at the second visual angle in the second video set is higher than that of the target video at the first visual angle in the first video set;
and the second switching module is used for switching the currently played target video under the second visual angle into the target video under the second visual angle in the second video set.
Optionally, the first switching module 1104 may be specifically configured to, when receiving a view switching instruction for switching the first view to the second view, if the first video set includes the target video at the second view, play the target video at the second view in the first video set based on a time point of the currently played target video at the first view.
Optionally, the first switching module 1104 may be specifically configured to, when receiving a view switching instruction for switching a first view to a second view, if the first video set does not include a target video at the second view, synthesize the target video at the second view based on the target video at each view in the first video set;
and playing the target video under the second visual angle based on the time point of the target video under the first visual angle which is played currently.
Optionally, if the target video is a live video, each target video in the first video set is a foreground image group and a foreground depth image group in a live scene;
the playing module 1103 may be specifically configured to obtain a preset background image group and a preset background depth image group; acquiring a foreground image group and a foreground depth image group under a first visual angle in a first video set; performing image fusion on the preset background image group and the foreground image group under the first visual angle based on the preset background depth image group and the foreground depth image group under the first visual angle to obtain a first fusion video under the first visual angle; playing the first fusion video;
the first switching module 1104 may be specifically configured to obtain the preset background image group and the preset background depth image group; acquiring a foreground image group and a foreground depth image group under a second visual angle based on the first video set; performing image fusion on the preset background image group and the foreground image group under the second visual angle based on the preset background depth image group and the foreground depth image group under the second visual angle to obtain a second fusion video under the second visual angle; and playing the second fusion video.
Based on the same inventive concept, according to the above-mentioned viewing angle switching method provided by the embodiment of the present application, the embodiment of the present application further provides a viewing angle switching device. As shown in fig. 12, fig. 12 is a schematic view of a second structure of a viewing angle switching device according to an embodiment of the present application. The device is applied to the server and specifically comprises the following modules.
A third receiving module 1201, configured to receive a first acquisition request for a target video at a first viewing angle, where the first acquisition request is sent by a user side;
a first obtaining module 1202, configured to obtain, based on a first obtaining request, a target video under multiple view angles to obtain a first video set, where the multiple view angles include a first view angle;
the third sending module 1203 is configured to send the first video set to the user side, so that the user side plays the target video in the first view angle in the first video set after receiving the first video set, and when receiving a view switching instruction for switching the first view angle to the second view angle, switches the currently played target video in the first view angle to the target video in the second view angle based on the first video set.
Optionally, the resolution of the target video at the first view angle in the first video set is higher than the resolution of the target video at a third view angle, where the third view angle is any view angle of the multiple view angles except the first view angle;
the first obtaining module 1202 may be specifically configured to obtain, based on the first obtaining request, the target video at the first view angle and the target videos at a plurality of third view angles; and adjusting the resolution of the target video at each third visual angle to obtain an adjusted target video at the third visual angle, and taking the target video at the first visual angle and the adjusted target video at the third visual angle as a first video set.
Optionally, the resolution of the target video at each third view angle in the first video set is negatively correlated with the target distance corresponding to the third view angle, and the target distance corresponding to the third view angle is the distance between the camera at the third view angle and the camera at the first view angle.
Optionally, the viewing angle switching device may further include:
the fourth receiving module is configured to receive a second acquisition request, sent by the user side, for the target video at the second view angle, where the second acquisition request is sent by the user side when the view angle switching instruction is received and the first video set includes the target video at the second view angle;
the second acquisition module is used for acquiring target videos under multiple visual angles based on a second acquisition request to obtain a second video set; the resolution ratio of the target video at the second visual angle in the second video set is higher than that of the target video at the first visual angle in the first video set;
and the fourth sending module is used for sending the second video set to the user side so that the user side switches the currently played target video at the second view angle into the target video at the second view angle in the second video set.
Optionally, if the target video is a live video, each target video in the first video set is a foreground image group and a foreground depth image group in a live scene.
By the device provided by the embodiment of the application, when the client requests the server to acquire the target video at the first visual angle, the server sends the first video set including the target videos at the multiple visual angles to the client, that is, when the server sends the target video at the first visual angle to the client, the server also sends the target videos at other visual angles to the client, so that when the client receives a visual angle switching instruction for switching the first visual angle to the second visual angle, the client can directly play the target video at the second visual angle according to the received first video set. In the visual angle switching process of the target video, the user side does not need to request the server for acquiring the target video under the second visual angle again, the acquisition time of the target video under the second visual angle is effectively shortened, the time required by the visual angle switching process is shortened, and the probability of the pause phenomenon is reduced.
Based on the same inventive concept, according to the above-mentioned method for switching the viewing angle provided by the embodiment of the present application, the embodiment of the present application further provides a user end, as shown in fig. 13, which includes a processor 1301, a communication interface 1302, a memory 1303 and a communication bus 1304, wherein the processor 1301, the communication interface 1302 and the memory 1303 complete mutual communication through the communication bus 1304,
a memory 1303 for storing a computer program;
the processor 1301 is configured to implement the following steps when executing the program stored in the memory 1303:
sending a first acquisition request aiming at a target video under a first visual angle to a server;
receiving a first video set of target videos under a plurality of visual angles returned by a server, wherein the plurality of visual angles comprise a first visual angle;
playing a target video under a first visual angle in a first video set;
when a visual angle switching instruction for switching a first visual angle to a second visual angle is received, switching a currently played target video at the first visual angle to a target video at the second visual angle based on the first video set.
Based on the same inventive concept, according to the method for switching the viewing angle provided by the foregoing embodiments of the present application, the embodiments of the present application further provide a server, as shown in fig. 14, including a processor 1401, a communication interface 1402, a memory 1403 and a communication bus 1404, wherein the processor 1401, the communication interface 1402 and the memory 1403 complete communication with each other through the communication bus 1404,
a memory 1403 for storing a computer program;
the processor 1401, when executing the program stored in the memory 1403, implements the following steps:
receiving a first acquisition request aiming at a target video under a first visual angle, which is sent by a user side;
acquiring a target video under a plurality of visual angles based on a first acquisition request to obtain a first video set, wherein the plurality of visual angles comprise a first visual angle;
and sending the first video set to the user side so that the user side plays the target video under the first visual angle in the first video set after receiving the first video set, and switching the currently played target video under the first visual angle to the target video under the second visual angle based on the first video set when receiving a visual angle switching instruction for switching the first visual angle to the second visual angle.
Through the user side and the server provided by the embodiment of the application, when the user side requests the server to acquire the target video at the first visual angle, the server sends the first video set including the target videos at the multiple visual angles to the user side, namely, the server sends the target video at the first visual angle to the user side and simultaneously sends the target videos at other visual angles to the user side, so that when the user side receives a visual angle switching instruction for switching the first visual angle into the second visual angle, the user side can directly play the target video at the second visual angle according to the received first video set. In the visual angle switching process of the target video, the user side does not need to request the server for acquiring the target video under the second visual angle again, the acquisition time of the target video under the second visual angle is effectively shortened, the time required by the visual angle switching process is shortened, and the probability of the pause phenomenon is reduced.
The communication bus mentioned above for the client/server can be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the user terminal/server and other equipment.
The Memory may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component.
Based on the same inventive concept, according to the above-mentioned view switching method provided in the embodiments of the present application, an embodiment of the present application further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the view switching method in any of the above-mentioned embodiments is implemented.
Based on the same inventive concept, according to the above-mentioned perspective switching method provided in the embodiments of the present application, the embodiments of the present application further provide a computer program product containing instructions, which, when run on a computer, causes the computer to execute any of the perspective switching methods described in the above-mentioned embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for embodiments such as the apparatus, the user side, the server, the computer readable storage medium, and the computer program product, since they are substantially similar to the method embodiments, the description is simple, and the relevant points can be referred to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application are included in the protection scope of the present application.
Claims (13)
1. A method for switching visual angles is applied to a user side, and comprises the following steps:
sending a first acquisition request aiming at a target video under a first visual angle to a server;
receiving a first video set of target videos under a plurality of views returned by the server, wherein the plurality of views comprise the first view;
playing a target video under the first visual angle in the first video set;
and when a visual angle switching instruction for switching the first visual angle to the second visual angle is received, switching the currently played target video at the first visual angle to the target video at the second visual angle based on the first video set.
2. The method of claim 1, wherein a resolution of the target video in the first view in the first set of videos is higher than a resolution of the target video in a third view, the third view being any view of the plurality of views other than the first view.
3. The method of claim 2, wherein the resolution of the target video at each third view in the first video set is inversely related to a target distance corresponding to the third view, and the target distance corresponding to the third view is a distance between the camera at the third view and the camera at the first view.
4. A method according to claim 2 or 3, characterized in that the method further comprises:
when the currently played target video under the first visual angle is switched to the target video under the second visual angle included in the first video set, sending a second acquisition request aiming at the target video under the second visual angle to the server;
receiving a second video set of the target video at a plurality of visual angles returned by the server; the resolution of the target video at the second view angle in the second video set is higher than the resolution of the target video at the first view angle in the first video set;
and switching the currently played target video under the second visual angle into the target video under the second visual angle in the second video set.
5. The method according to claim 1, wherein the step of switching the currently played target video from the first view to the second view based on the first video set when receiving a view switching instruction for switching the first view to the second view comprises:
when a view switching instruction for switching the first view to a second view is received, if the first video set includes the target video at the second view, the target video at the second view in the first video set is played based on a time point of the currently played target video at the first view.
6. The method according to claim 1, wherein the step of switching the currently played target video from the first view to the second view based on the first video set when receiving a view switching instruction for switching the first view to the second view comprises:
when a view switching instruction for switching the first view to a second view is received, if the target video under the second view is not included in the first video set, synthesizing the target video under the second view based on the target video under each view in the first video set;
and playing the target video under the second visual angle based on the currently played time point of the target video under the first visual angle.
7. The method of claim 1, wherein if the target video is a live video, each target video in the first video set is a foreground image group and a foreground depth image group in a live scene;
the step of playing the target video in the first video set under the first view angle includes:
acquiring a preset background image group and a preset background depth image group;
acquiring a foreground image group and a foreground depth image group under a first visual angle in the first video set;
performing image fusion on the preset background image group and the foreground image group under the first visual angle based on the preset background depth image group and the foreground depth image group under the first visual angle to obtain a first fusion video under the first visual angle;
playing the first fused video;
the step of switching the currently played target video at the first view angle to the target video at the second view angle based on the first video set includes:
acquiring the preset background image group and the preset background depth image group;
acquiring a foreground image group and a foreground depth image group under a second visual angle based on the first video set;
performing image fusion on the preset background image group and the foreground image group under the second visual angle based on the preset background depth image group and the foreground depth image group under the second visual angle to obtain a second fusion video under the second visual angle;
and playing the second fusion video.
8. A method for switching a view angle is applied to a server, and comprises the following steps:
receiving a first acquisition request aiming at a target video under a first visual angle, which is sent by a user side;
acquiring a target video under a plurality of visual angles based on the first acquisition request to obtain a first video set, wherein the plurality of visual angles comprise the first visual angle;
and sending the first video set to the user side, so that the user side plays the target video under the first visual angle in the first video set after receiving the first video set, and switches the currently played target video under the first visual angle to the target video under the second visual angle based on the first video set when receiving a visual angle switching instruction for switching the first visual angle to the second visual angle.
9. A visual angle switching device is applied to a user side, and comprises:
the first sending module is used for sending a first acquisition request aiming at a target video under a first visual angle to a server;
a first receiving module, configured to receive a first video set of target videos under multiple views returned by the server, where the multiple views include the first view;
a playing module, configured to play a target video in the first video set at the first view angle;
and the first switching module is used for switching the currently played target video under the first visual angle to the target video under the second visual angle on the basis of the first video set when a visual angle switching instruction for switching the first visual angle to the second visual angle is received.
10. A visual angle switching device is applied to a server, and the device comprises:
the third receiving module is used for receiving a first acquisition request aiming at the target video under the first visual angle, which is sent by the user side;
a first obtaining module, configured to obtain, based on the first obtaining request, a target video at multiple view angles to obtain a first video set, where the multiple view angles include the first view angle;
a third sending module, configured to send the first video set to the user side, so that the user side plays the target video in the first view in the first video set after receiving the first video set, and when receiving a view switching instruction for switching the first view to a second view, switches the currently played target video in the first view to the target video in the second view based on the first video set.
11. A user side is characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 7 when executing a program stored in the memory.
12. A server is characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of claim 8 when executing a program stored in the memory.
13. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method steps of any one of claims 1 to 7 or 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111194139.0A CN113938711A (en) | 2021-10-13 | 2021-10-13 | Visual angle switching method and device, user side, server and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111194139.0A CN113938711A (en) | 2021-10-13 | 2021-10-13 | Visual angle switching method and device, user side, server and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113938711A true CN113938711A (en) | 2022-01-14 |
Family
ID=79278922
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111194139.0A Pending CN113938711A (en) | 2021-10-13 | 2021-10-13 | Visual angle switching method and device, user side, server and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113938711A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109218848A (en) * | 2017-07-06 | 2019-01-15 | 阿里巴巴集团控股有限公司 | View angle switch method, apparatus, equipment and the computer storage medium of video flowing |
CN110351607A (en) * | 2018-04-04 | 2019-10-18 | 优酷网络技术(北京)有限公司 | A kind of method, computer storage medium and the client of panoramic video scene switching |
CN111355967A (en) * | 2020-03-11 | 2020-06-30 | 叠境数字科技(上海)有限公司 | Video live broadcast processing method, system, device and medium based on free viewpoint |
CN111447462A (en) * | 2020-05-20 | 2020-07-24 | 上海科技大学 | Video live broadcast method, system, storage medium and terminal based on visual angle switching |
CN111447461A (en) * | 2020-05-20 | 2020-07-24 | 上海科技大学 | Synchronous switching method, device, equipment and medium for multi-view live video |
CN111698520A (en) * | 2020-06-24 | 2020-09-22 | 北京奇艺世纪科技有限公司 | Multi-view video playing method, device, terminal and storage medium |
CN112423006A (en) * | 2020-11-09 | 2021-02-26 | 珠海格力电器股份有限公司 | Live broadcast scene switching method, device, equipment and medium |
CN113259770A (en) * | 2021-05-11 | 2021-08-13 | 北京奇艺世纪科技有限公司 | Video playing method, device, electronic equipment, medium and product |
CN113256491A (en) * | 2021-05-11 | 2021-08-13 | 北京奇艺世纪科技有限公司 | Free visual angle data processing method, device, equipment and storage medium |
CN113301351A (en) * | 2020-07-03 | 2021-08-24 | 阿里巴巴集团控股有限公司 | Video playing method and device, electronic equipment and computer storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021175055A1 (en) | Video processing method and related device | |
JP6471777B2 (en) | Image processing apparatus, image processing method, and program | |
CN107040794A (en) | Video broadcasting method, server, virtual reality device and panoramic virtual reality play system | |
US20170064174A1 (en) | Image shooting terminal and image shooting method | |
US11539983B2 (en) | Virtual reality video transmission method, client device and server | |
CN104301769B (en) | Method, terminal device and the server of image is presented | |
CN111669518A (en) | Multi-angle free visual angle interaction method and device, medium, terminal and equipment | |
CN107835435B (en) | Event wide-view live broadcasting equipment and associated live broadcasting system and method | |
CN111669567A (en) | Multi-angle free visual angle video data generation method and device, medium and server | |
CN111669561A (en) | Multi-angle free visual angle image data processing method and device, medium and equipment | |
CN111669564A (en) | Image reconstruction method, system, device and computer readable storage medium | |
CN110351606A (en) | Media information processing method, relevant device and computer storage medium | |
US11490066B2 (en) | Image processing apparatus that obtains model data, control method of image processing apparatus, and storage medium | |
CN111667438A (en) | Video reconstruction method, system, device and computer readable storage medium | |
JP2015050572A (en) | Information processing device, program, and information processing method | |
CN109889736B (en) | Image acquisition method, device and equipment based on double cameras and multiple cameras | |
US20220353484A1 (en) | Information processing apparatus, information processing method, and program | |
CN113938711A (en) | Visual angle switching method and device, user side, server and storage medium | |
CN111669569A (en) | Video generation method and device, medium and terminal | |
CN111669568A (en) | Multi-angle free visual angle interaction method and device, medium, terminal and equipment | |
CN111669570A (en) | Multi-angle free visual angle video data processing method and device, medium and equipment | |
CN111669604A (en) | Acquisition equipment setting method and device, terminal, acquisition system and equipment | |
CN114450939B (en) | Apparatus and method for generating and rendering immersive video | |
CN115604528A (en) | Fisheye image compression method, fisheye video stream compression method and panoramic video generation method | |
JP5962692B2 (en) | Terminal device and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20220114 |