CN113301351B - Video playing method and device, electronic equipment and computer storage medium - Google Patents

Video playing method and device, electronic equipment and computer storage medium

Info

Publication number
CN113301351B
Authority
CN
China
Prior art keywords
video data
video
switching instruction
image acquisition
playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010632481.3A
Other languages
Chinese (zh)
Other versions
CN113301351A (en)
Inventor
董玉娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202010632481.3A priority Critical patent/CN113301351B/en
Publication of CN113301351A publication Critical patent/CN113301351A/en
Application granted granted Critical
Publication of CN113301351B publication Critical patent/CN113301351B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/21805 Source of audio or video content, e.g. local disk arrays, enabling multiple viewpoints, e.g. using a plurality of cameras
    • H04N 21/2187 Live feed
    • H04N 21/23424 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N 21/2393 Interfacing the upstream path of the transmission network involving handling client requests
    • H04N 21/262 Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, delaying a video stream transmission, generating play-lists
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/437 Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
    • H04N 21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N 21/44222 Analytics of user selections, e.g. selection of programs or purchase activity

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

An embodiment of the present application provides a video playing method, a video playing device, electronic equipment and a computer storage medium. The video playing method includes: receiving first video data returned in response to a video switching instruction, wherein the first video data is determined from second video data obtained by image acquisition equipment at a plurality of positions capturing images of a target area; and playing the first video data indicated by the video switching instruction. With this scheme, video pictures can be directed automatically according to the user's video switching instructions, so the user can watch a variety of pictures. This avoids the prior-art problem of the user viewing only a single, fixed content, and improves the user experience.

Description

Video playing method and device, electronic equipment and computer storage medium
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a video playing method and device, electronic equipment and a computer storage medium.
Background
With the development of the entertainment industry, more and more users watch videos and live broadcasts, and increasing numbers of people choose to watch games or shows in person for a better experience, for example, to better feel the atmosphere or the sound effects of the venue.
However, most of the content currently available to users is fixed and uniform, which leads to a poor experience. For example, when watching a video or a live broadcast, a user can only passively watch the content pushed by the video server; when watching a performance or a game on site, the user is limited by the seat and can only watch from one fixed position.
Disclosure of Invention
In view of the above, embodiments of the present application provide a video playing scheme to at least partially solve the above problem.
According to a first aspect of an embodiment of the present application, a video playing method is provided, including: receiving first video data returned in response to a video switching instruction, wherein the first video data is determined according to second video data obtained by image acquisition of a target area by image acquisition equipment at a plurality of positions; and playing the first video data indicated by the video switching instruction.
According to a second aspect of the embodiments of the present application, there is provided a video playing method, including: receiving a plurality of second video data, wherein the second video data are a plurality of video data obtained by image acquisition of a target area by image acquisition equipment at a plurality of positions correspondingly; determining a plurality of first video data having different viewing perspectives or different tracking subjects from the plurality of second video data; and according to the received video switching instruction, determining corresponding first video data from the plurality of first video data, and sending the corresponding first video data.
According to a third aspect of the embodiments of the present application, there is provided a video playback apparatus, including: the first video receiving module is used for receiving first video data returned by responding to a video switching instruction, and the first video data is determined according to second video data obtained by image acquisition of a target area by image acquisition equipment at a plurality of positions; and the video playing module is used for playing the first video data indicated by the video switching instruction.
According to a fourth aspect of embodiments of the present application, there is provided a video playback apparatus, including: the second video receiving module is used for receiving a plurality of second video data, wherein the second video data are a plurality of video data which are obtained by image acquisition of a target area by image acquisition equipment at a plurality of positions correspondingly; the processing module is used for determining a plurality of first video data with different viewing visual angles or different tracking subjects according to the plurality of second video data, and determining corresponding first video data from the plurality of first video data according to a received video switching instruction; and the sending module is used for sending the corresponding first video data.
According to a fifth aspect of embodiments of the present application, there is provided an electronic apparatus, including: the input equipment is used for receiving the video switching instruction; the processor is used for receiving first video data returned in response to the video switching instruction, and the first video data is determined according to second video data obtained by image acquisition of a target area by image acquisition equipment at multiple positions; the display device is used for playing the first video data indicated by the video switching instruction.
According to a sixth aspect of embodiments of the present application, there is provided an electronic apparatus, including: the input equipment is used for receiving a plurality of second video data, wherein the second video data are a plurality of video data which are obtained by carrying out image acquisition on a target area by image acquisition equipment at a plurality of positions and correspondingly; a processor, configured to determine, according to the plurality of second video data, a plurality of first video data having different viewing perspectives or different tracking subjects, and configured to determine, according to a received video switching instruction, corresponding first video data from the plurality of first video data; and the output equipment is used for sending the corresponding first video data.
According to a seventh aspect of embodiments of the present application, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the video playback method as described above.
According to the video playing scheme provided by the embodiments of the present application, a user can independently switch the first video data displayed by the client by inputting a video switching instruction to the client, where the first video data is determined from second video data obtained by image acquisition equipment at multiple positions capturing images of a target area.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings described below cover only some of the embodiments of the present application, and those skilled in the art can derive other drawings from them.
Fig. 1 is a schematic structural diagram of a video playing system according to a first embodiment of the present application;
fig. 2A is a schematic flowchart of a video playing method according to a second embodiment of the present application;
FIG. 2B is a schematic view of a scene of a video playing method in the embodiment shown in FIG. 2A;
fig. 3A is a schematic flowchart of a video playing method according to a third embodiment of the present application;
fig. 3B is a schematic diagram of a target area according to a third embodiment of the present application;
fig. 3C is a schematic view of a video playing interface in the third embodiment of the present application;
fig. 3D is a schematic view of another video playing interface according to the third embodiment of the present application;
fig. 4 is a schematic flowchart of a video playing method according to a fourth embodiment of the present application;
fig. 5 is a schematic structural diagram of a video playback device according to a fifth embodiment of the present application;
fig. 6 is a schematic structural diagram of a video playback device according to a sixth embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to a seventh embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an eighth embodiment of the present application.
Detailed Description
In order to help those skilled in the art better understand the technical solutions in the embodiments of the present application, these solutions are described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only a part, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application shall fall within the protection scope of the embodiments of the present application.
The following further describes specific implementations of embodiments of the present application with reference to the drawings of the embodiments of the present application.
Example one
Referring to fig. 1, a schematic structural diagram of a video playing system of the present application is shown. As shown in the figure, the system includes an image acquisition device 101, a server 102 and a client 103.
The image acquisition device 101 is located at multiple positions of the target area, and is configured to acquire multiple second video data corresponding to the target area.
For example, as shown in fig. 1, the image acquisition device may comprise a plurality of image acquisition devices located at different positions, namely image acquisition device 1, image acquisition device 2, image acquisition device 3, image acquisition device 4, and so on. Of course, the image acquisition device may also consist of a single apparatus, with image acquisition performed by different capture units on that one apparatus, which is not limited in this embodiment.
For example, the image capturing devices may be distributed at different angles of the target area, and the distances from the target area may be different, and 360-degree omni-directional image capturing may be performed.
The image capture device may include a single lens camera, a fisheye lens camera, a multi-lens camera, and the like.
The image acquisition device can send the second video data obtained by image acquisition to the server.
Specifically, the image capturing device may transmit the captured second video data to the server in real time by means of wired transmission or wireless transmission.
A server 102, configured to receive a plurality of second video data; and determining a plurality of first video data having different viewing perspectives or different tracked subjects from the plurality of second video data; the server can also be used for receiving a video switching instruction.
In this embodiment, the server may specifically be an electronic device with digital processing capability, such as a server or a device in a director room, which is not limited in this embodiment.
Specifically, the server may process the second video data with big data computing power, and determine a plurality of first video data with different viewing angles or different tracking subjects.
For example, in any embodiment of the present application, determining, according to the plurality of second video data, a plurality of first video data having different viewing angles or different tracked subjects may specifically be: and processing the plurality of second video data by adopting big data computing power to obtain at least one first video data.
For example, in any embodiment of the present application, the determining, according to the received video switching instruction, corresponding first video data from a plurality of first video data, and sending the corresponding first video data includes: and inquiring and sending first video data corresponding to the video switching instruction in real time according to the video switching instruction.
In any embodiment of the present application, the determining, according to the plurality of second video data, a plurality of first video data having different viewing perspectives or different tracking subjects includes at least one of: synthesizing a plurality of second video data acquired at the same position and using the same viewing angle according to a time sequence, and acquiring first video data corresponding to the viewing angle; and synthesizing a plurality of second video data containing the same tracking subject according to the relative position relationship and time sequence between the image acquisition positions, and obtaining first video data corresponding to the tracking subject.
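To make the two synthesis strategies concrete, the following Python sketch shows one possible reading of them; the `Segment` structure and its `position_id`, `timestamp`, `subjects` and `frames` fields are assumptions for illustration, not part of the patented implementation.

```python
from dataclasses import dataclass


@dataclass
class Segment:
    """A chunk of second video data captured by one device (hypothetical layout)."""
    position_id: int   # which capture position produced it
    timestamp: float   # capture time, used for ordering
    subjects: set      # tracking subjects visible in this segment
    frames: list       # placeholder for the encoded frames


def synthesize_by_position(segments, position_id):
    """First video data for one viewing angle: same position, time order."""
    selected = [s for s in segments if s.position_id == position_id]
    selected.sort(key=lambda s: s.timestamp)
    return [f for s in selected for f in s.frames]


def synthesize_by_subject(segments, subject, position_order):
    """First video data for one tracking subject: segments containing the
    subject, ordered by time and then by the relative camera positions."""
    selected = [s for s in segments if subject in s.subjects]
    selected.sort(key=lambda s: (s.timestamp, position_order.index(s.position_id)))
    return [f for s in selected for f in s.frames]
```

In this sketch the relative position relationship between acquisition positions is reduced to a simple ordering list; a real implementation would also re-encode the concatenated frames.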
For example, in any embodiment of the present application, the determining, according to the received video switching instruction, corresponding first video data from a plurality of first video data, and sending the corresponding first video data includes: according to the watching visual angle information or tracking subject information carried in the video switching instruction, determining first video data matched with the watching visual angle information or first video data matched with the tracking subject information from the plurality of first video data; and transmitting the determined first video data.
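A minimal server-side lookup consistent with the paragraph above might look as follows; the layouts of the instruction dictionary and the catalog of prepared first video data are assumptions for the example.

```python
def select_first_video(instruction, catalog):
    """Return the first video data matching the viewing-angle or
    tracking-subject information carried in the switching instruction."""
    if "viewing_angle" in instruction:
        return catalog.get(("angle", instruction["viewing_angle"]))
    if "tracking_subject" in instruction:
        return catalog.get(("subject", instruction["tracking_subject"]))
    return None  # instruction carried neither kind of information
```

The selected first video data would then be sent back to the client that issued the instruction.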
The client 103 is configured to receive a user instruction (e.g., a video switching instruction), and to receive first video data returned in response to the video switching instruction, and play the first video data indicated by the video switching instruction to the user.
In this embodiment, the client may specifically be a device capable of receiving user instructions and playing video data, such as a tablet or a mobile phone. The client may also be a combination of multiple components; for example, it may include VR glasses, an operation panel disposed on a seat, and a processor disposed in the seat, which is not limited in this application. Clients may correspond to users one to one, or several users, for example four users, may share one client, which is not limited in this embodiment.
For example, a control panel and matching VR glasses may be provided on the armrest of each seat. When video data is played to a user through the VR glasses, a video switching instruction may be input on the control panel to control the viewing angle of the first video data displayed by the VR glasses, the specific position of the stage to which the first video data corresponds, the person in the picture of the first video data, and the like.
Specifically, when a sports game is watched, a video switching instruction can be input to the control panel to switch a picture angle, a picture position and the like corresponding to the played first video data; or may play first video data corresponding to a particular athlete. Similarly, video data corresponding to an actor may also be viewed, or the actor's performance may be viewed at different angles or distances.
The user can input a video switching instruction to the client and thereby switch the first video data displayed by the client autonomously. Because the first video data is determined from second video data obtained by image acquisition equipment at multiple positions capturing images of the target area, the video switching instruction effectively switches among the second video data of those positions. In other words, video pictures are directed automatically according to the user's video switching instructions, the user can watch a variety of pictures, the prior-art problem of single viewing content is avoided, and the user experience is improved.
For example, the users in any embodiment of the present application may include one or more users, the corresponding clients may also include one or more clients, and the clients may correspond to the users one to one, or multiple users may share one client, for example, a user watching an event on the spot in two or four adjacent seats may share one client, which is not limited in this embodiment.
For example, in any embodiment of the present application, before receiving the first video data returned in response to the video switching instruction, the client is further configured to: displaying a plurality of candidate video windows in a video playing interface, wherein the video windows are used for displaying video data corresponding to a watching visual angle or displaying video data corresponding to a tracking main body; determining a video switching instruction according to the video window corresponding to the received selection operation, wherein the video switching instruction comprises selected viewing angle information or tracking subject information; and sending the video switching instruction to a server side so that the server side returns the first video data corresponding to the video switching instruction.
For example, in any embodiment of the present application, before receiving the first video data returned in response to the video switching instruction, the client is further configured to: amplifying the selected video window according to the selection operation; the playing the first video data indicated by the video switching instruction includes: and playing the first video data corresponding to the video switching instruction in the enlarged video window.
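The client-side flow described above (select a candidate video window, derive the switching instruction from it, enlarge the window for playback) can be sketched as follows; the window dictionary layout is an assumption for illustration.

```python
def on_window_selected(window):
    """Build a video switching instruction from the selected candidate window
    and mark the window as enlarged for subsequent playback."""
    instruction = {}
    if window.get("viewing_angle") is not None:
        instruction["viewing_angle"] = window["viewing_angle"]
    if window.get("tracking_subject") is not None:
        instruction["tracking_subject"] = window["tracking_subject"]
    window["enlarged"] = True  # the returned first video data plays in this enlarged window
    return instruction         # sent to the server side, which returns the first video data
```

The returned instruction carries exactly the selected viewing-angle or tracking-subject information, mirroring the two instruction contents named in the paragraph above.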
For example, in any embodiment of the present application, the presenting a plurality of candidate video windows in a video playing interface includes: according to the received recommendation information, a plurality of candidate video windows are determined from a plurality of preset video windows, and the candidate video windows are displayed in a video playing interface.
For example, in any embodiment of the present application, the playing the first video data indicated by the video switching instruction includes: determining type information of the first video data according to the video switching instruction, wherein the type information comprises an augmented reality video type or a virtual reality video type; performing corresponding augmented reality processing or virtual reality processing on the first video data according to the determined type information, and obtaining processed first video data; and playing the processed first video data.
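One way to read the type dispatch above, with the augmented-reality and virtual-reality processing steps left as placeholder callables (assumptions for the sketch, since the patent does not specify them):

```python
def prepare_first_video(instruction, video, ar_process, vr_process):
    """Apply augmented- or virtual-reality processing to the first video data
    according to the type information carried in the switching instruction."""
    video_type = instruction.get("type")
    if video_type == "AR":
        return ar_process(video)
    if video_type == "VR":
        return vr_process(video)
    return video  # no type information: play the first video data as-is
```

The processed (or unprocessed) result is then handed to the player.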
For example, in any embodiment of the present application, the receiving first video data returned in response to a video switching instruction includes: and receiving first video data returned in response to the video switching instruction in the video live broadcasting process.
Through the scheme provided by this embodiment, a user can input a video switching instruction to the client and thereby switch the displayed first video data autonomously. Since the first video data is determined from second video data obtained by image acquisition equipment at multiple positions capturing images of the target area, the video switching instruction switches among the second video data of those positions; that is, video pictures are directed automatically according to the user's instructions, the user can watch a variety of pictures, the prior-art problem of single viewing content is avoided, and the user experience is improved.
Example two
Referring to fig. 2A, a schematic flow chart of a video playing method of the present application is shown. The scheme provided in this embodiment is applied to a client and, as shown in the figure, includes:
s201, receiving first video data returned in response to a video switching instruction, wherein the first video data are determined according to second video data obtained by image acquisition of a target area by image acquisition equipment at a plurality of positions.
In this embodiment, the target area may be any area in which image acquisition is performed. For example, in a live-broadcast scene, the target area may be the anchor or the anchor's screen; in a recorded-broadcast scene, it may be the recording site; and when an event is watched live, it may be the playing field.
In this embodiment, the image capturing device may capture an image of the target area in different capturing manners, for example, capture an image at a fixed position or capture an image during a movement. When the same object or the same scene of the target area is subjected to image acquisition in different acquisition modes, the obtained second video data are different.
The first video data may be determined according to the second video data, and the first video data may specifically be video data obtained by combining a plurality of second video data, or video data obtained by cutting or transforming the second video data. For example, video data captured by two image capturing devices may be combined to obtain a first video data; or, the plurality of second video data may be subjected to 3D processing to obtain first video data; alternatively, a plurality of second video data may be cropped or the like so that a certain tracking subject is located in the screen, resulting in the first video data. It is to be understood that the above description is intended to be illustrative, and not restrictive.
In addition, the first video data may include a plurality of pieces, and a viewing angle or a tracking subject of the plurality of pieces of first video data may be different.
For example, in a live-broadcast scene, a plurality of image acquisition devices may capture images of the anchor to obtain a plurality of second video data, and the first video data may be determined from them by, for example: reconstructing the plurality of second video data to obtain first video data corresponding to the complete live scene; or cutting and splicing the plurality of second video data to obtain first video data in which the anchor's face or hands are in the picture. Of course, the foregoing is merely illustrative and is not intended to limit the present application.
For another example, when an event is watched live on site, a plurality of image acquisition devices may capture images of the playing field to obtain a plurality of second video data, and the first video data determined according to the plurality of second video data may be: first video data corresponding to a certain competition area; first video data in which a certain contestant is located in the picture; or first video data corresponding to the whole competition field. Of course, the foregoing is merely illustrative and is not intended to limit the present application.
S202, playing the first video data indicated by the video switching instruction.
The video switching instruction can be received before the first video data is played, or can be received during the playing of the first video data. For example, a person desired to be viewed may be set in advance by a video switching instruction, and then first video data corresponding to the person desired to be viewed may be directly obtained and played; alternatively, the first video data a may be switched to the first video data B according to the received video switching instruction during the playing process of the first video data a.
The following describes an example of the usage scenario of the present application.
Referring to fig. 2B, a stage (i.e., a target area) is shown, around which a plurality of cameras (i.e., image acquisition devices) are arranged at different positions, so that images of the stage are captured from a plurality of positions and a plurality of second video data are obtained. The second video data may be sent to the server so that the first video data is determined from the plurality of second video data; for example, first video data A corresponding to an angle directly facing the stage may be obtained, and first video data B corresponding to a certain actor may also be obtained.
The client can receive the video switching instruction and send the video switching instruction, so that the first video data A or the first video data B returned in response to the video switching instruction and the like are obtained.
And the client plays the first video data indicated by the video switching instruction, for example, plays the first video data A indicated by the video switching instruction.
According to the scheme provided by this embodiment, first video data returned in response to a video switching instruction can be received, where the first video data is determined according to second video data obtained by image acquisition of a target area by image acquisition devices at multiple positions; the first video data indicated by the video switching instruction is then played. In this way, the second video data obtained by the image acquisition devices at multiple positions can be switched through the video switching instruction, that is, automatic video-picture directing is performed according to the user's video switching instruction, so that the pictures the user can watch are varied, the problem in the prior art that the user's viewing content is monotonous is avoided, and user experience is improved.
The video playing method of the present embodiment may be executed by any suitable electronic device with data processing capability, including but not limited to: a server, a mobile terminal (such as a mobile phone, a PAD and the like), a PC and the like.
Example three
Referring to fig. 3, which shows a schematic flow chart of a video playing method of the present application, the scheme provided in this embodiment is applied to a client, and as shown in the figure, the scheme includes:
S301, displaying a plurality of candidate video windows in a video playing interface, wherein each video window is used for displaying video data corresponding to a viewing angle or video data corresponding to a tracking subject.
In this embodiment, the video playing interface may be a display screen of a mobile phone, a PAD, a PC, or the like, and may also be a display interface of VR, AR glasses, or the like, or an interface displayed by a display screen panel on a seat, which is not limited in this embodiment.
The plurality of candidate video windows may be displayed in various ways, for example, in a nine-square grid, in a list, and so on, which is not limited in this embodiment.
By displaying a plurality of candidate video windows on the video playing interface, a user can know the relevant information of the video data corresponding to each video window through the displayed video windows, such as the tracking subject, the viewing angle and the like in the picture corresponding to the video data.
When a video window is displayed, the number of viewers of the video window, its popularity, and the like can also be displayed for the user's reference. Of course, the foregoing is merely illustrative and is not intended to limit the present application.
In the embodiment of the present application, the video data of the viewing perspective may be, for example: video data corresponding to the viewing angle of the stage being faced, video data corresponding to the viewing angle of the stage being overlooked, and the like; the video data corresponding to the tracked subject may be video data corresponding to a certain athlete, actor, or the like. Of course, the above description is merely illustrative and not restrictive of the present application.
In order to save processor resources, a video frame corresponding to a viewing angle or a video frame corresponding to a tracking subject can be displayed in the video window; in order to obtain better display effect, the video data of the viewing angle or the video data of the corresponding tracking subject can be played in the video window. Of course, the foregoing is merely illustrative and is not intended to be a limitation of the present application.
For example, in this embodiment, the presenting a plurality of candidate video windows in a video playing interface includes: according to the received recommendation information, a plurality of candidate video windows are determined from a plurality of preset video windows, and the candidate video windows are displayed in a video playing interface. Therefore, the displayed candidate video windows can be more suitable for the requirements of the user.
The client may include a plurality of pieces of preset video window information, and after receiving the recommendation information, the client may determine a relatively popular viewing angle or tracking subject from the recommendation information. Candidate video windows can thus be determined from the preset video windows according to the recommendation information and displayed, so that the candidate video windows displayed to the user better meet the user's needs.
Further, in this embodiment, the client may establish a video window first, and after receiving the recommendation information, determine viewing angle information or tracking subject information corresponding to the video window according to the recommendation information.
S302, determining a video switching instruction according to the video window corresponding to the received selection operation, wherein the video switching instruction comprises the selected viewing angle information or the tracking subject information.
In this embodiment, after the video windows are displayed in the video playing interface, a user can input a selection operation by clicking the screen or the like, so that the client can determine the corresponding video window according to the received selection operation.
Since the plurality of video windows correspond to the viewing angle information or the tracking subject information, when the video window corresponding to the selection operation of the user is determined, the video switching instruction can be determined directly according to the viewing angle information or the tracking subject information corresponding to the video window.
Specifically, the video switching instruction may directly include viewing perspective information or tracking subject information of the selected video window; or the video switching instruction may directly include the ID value of the video window, so that the viewing angle information or the tracking subject information corresponding to the ID value may be determined according to the ID value of the video window, which is not limited in this embodiment.
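The two instruction shapes just described — metadata embedded directly, or a window ID resolved later — can be sketched as follows; field names such as `window_id` and `view_angle` are hypothetical, not defined by the patent:

```python
# Hypothetical table mapping window IDs to their viewing-angle or
# tracking-subject metadata; in practice this lives on the client or server.
WINDOW_TABLE = {
    "w1": {"view_angle": "front_of_stage"},
    "w2": {"subject": "actor_a"},
}

def build_instruction(window_id, inline=True):
    """Either embed the window's metadata directly, or carry only its ID."""
    if inline:
        return {"type": "switch", **WINDOW_TABLE[window_id]}
    return {"type": "switch", "window_id": window_id}

def resolve(instruction):
    """Recover viewing-angle/subject info, looking up the window ID if needed."""
    if "window_id" in instruction:
        return WINDOW_TABLE[instruction["window_id"]]
    return {k: v for k, v in instruction.items() if k != "type"}
```

Either shape reaches the same metadata; the ID-only form keeps the instruction smaller at the cost of a lookup on the receiving side.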
S303, sending the video switching instruction to a server side so that the server side returns the first video data corresponding to the video switching instruction.
In this embodiment, the server may include a plurality of first video data, and record viewing angle information or tracking subject information corresponding to each first video data. After receiving the video switching instruction, the server may determine first video data corresponding to the video switching instruction and return the first video data to the client.
S304, receiving the first video data returned in response to the video switching instruction.
For example, in this embodiment, before receiving the first video data returned in response to the video switching instruction, the method further includes: enlarging the selected video window according to the selection operation. The playing of the first video data indicated by the video switching instruction then includes: playing the first video data corresponding to the video switching instruction in the enlarged video window, so as to achieve a better playing effect and improve user experience.
In this embodiment, a plurality of candidate video windows are displayed on the video playing interface, after it is determined that a user selects a certain candidate video window, the selected video window may be enlarged, and other video windows except the selected candidate video window may be reduced or slid out of the video playing interface. After receiving the first video data indicated by the video switching instruction, the first video data can be played in an enlarged video window so that the user can watch the first video data selected by the user.
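The enlarge-selected/shrink-others layout rule above can be sketched minimally as follows; window names and pixel sizes are arbitrary assumptions:

```python
def apply_selection(windows, selected, main=(1280, 720), thumb=(160, 90)):
    """Give the selected window the enlarged size; every other candidate
    window gets a thumbnail size (e.g. docked toward the interface edge)."""
    return {w: (main if w == selected else thumb) for w in windows}

layout = apply_selection(["w1", "w2", "w3"], "w2")
```

A variant matching the last sentence of the paragraph would simply drop the unselected windows from the returned layout instead of shrinking them.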
S305, playing the first video data indicated by the video switching instruction.
For example, in this embodiment, step S305 may include: determining type information of the first video data according to the video switching instruction, wherein the type information comprises an augmented reality video type or a virtual reality video type; performing corresponding augmented reality processing or virtual reality processing on the first video data according to the determined type information, and obtaining processed first video data; and playing the processed first video data. Therefore, the type information can be determined according to the video switching instruction, and the virtual reality or augmented reality display is carried out, so that the user experience is improved.
For example, in this embodiment, the type information may be determined directly according to the user's client: if the user's client is a virtual reality device, the default type information is the virtual reality video type; if the user's client is an augmented reality device, the default type information is the augmented reality video type.
If the type information is a virtual reality type, the first video data can be subjected to virtual reality processing and played through virtual reality equipment of the user. For example, when a user watches live broadcasting at home, the first video data is processed in a virtual reality mode and played, so that the user has a feeling of being personally on the scene in the watching process, and the participation degree of the user is improved. In addition, if the user watches through the virtual reality device, the video switching information can be directly generated according to the movement data of the user.
For example, when watching a live broadcast, a user can directly display the live broadcast to the user through the virtual reality device, and can display first video data corresponding to a watching angle according to a video switching instruction of the user, so that the user can feel the effect of directly watching the live broadcast at the watching angle; alternatively, the first video data of the tracking subject may be presented at a video switching instruction of the user, so that the user produces an effect of moving along with the tracking subject.
If the type information is the augmented reality type, the augmented reality video and the scene where the user is located can be combined for display, for example, if the user watches a match, the corresponding video can be directly superimposed on the desktop of the scene where the user is located through the augmented reality device for display. By carrying out augmented reality processing, the content richness of images in the video watched by the user can be improved, and further the user experience is improved.
It should be noted that the virtual reality processing procedure and the augmented reality processing procedure may be executed at a server or by a client, which is not limited in this embodiment. For a specific method for performing virtual reality processing or augmented reality processing, reference may be made to related technologies, which are not described herein again.
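The type-dependent dispatch described in the steps above can be sketched as follows; the processing labels are placeholders for the real VR/AR pipelines, which the text deliberately leaves to related technologies:

```python
def default_type_for_client(client_kind):
    """Default rule from the text: VR devices get the VR video type,
    AR devices get the AR video type."""
    return {"vr_device": "virtual_reality",
            "ar_device": "augmented_reality"}.get(client_kind, "plain")

def process_first_video(video, type_info):
    # Placeholder dispatch; the real processing may run on server or client.
    if type_info == "virtual_reality":
        return (video, "vr_processed")
    if type_info == "augmented_reality":
        return (video, "ar_scene_overlay")
    return (video, None)
```

The dispatch itself is trivial; the open design choice the paragraph notes is *where* it runs (server or client), which this sketch leaves unspecified.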
For example, in this embodiment, the receiving the first video data returned in response to the video switching instruction includes: and receiving first video data returned in response to the video switching instruction in the video live broadcasting process. Therefore, the user can conveniently watch the live broadcast.
The following provides an exemplary description of the solution of the present application through a specific application scenario.
As shown in fig. 3B, a stage (i.e., a target area) is shown, on which a plurality of image capturing devices at different positions are arranged, so that the image capturing is performed on the stage at a plurality of positions, and a plurality of second video data is obtained.
The second video data may be transmitted to the server, thereby determining the first video data from the plurality of second video data. For example, an actor a may be included on the stage, and when moving from one end of the stage to the other end of the stage, first video data for tracking the actor a may be obtained by splicing a plurality of second video data; alternatively, the video data captured by the image capturing device directly facing the middle of the stage may be used as the first video data having the set viewing angle.
When a user watches through the client, as shown in fig. 3C, a nine-square grid composed of video windows is displayed in the video playing interface; each video window corresponds to one piece of first video data, and a thumbnail of the first video data and the like can be displayed in the window. The first video data may include first video data corresponding to respective viewing angles, first video data corresponding to actors A, B, C, etc. (tracking subjects), and first video data corresponding to a stage panorama, and the like.
The user can select the corresponding first video data by triggering the video window of the video playing interface, and the client can generate and send a video switching instruction according to the selection operation of the user so as to receive the first video data responding to the video switching instruction.
While the first video data is being received, that is, after the user inputs a selection operation, the video window selected by the user may be displayed in an enlarged manner, and the other windows may be reduced to the edge of the interface, for example, as shown in fig. 3D. Of course, the other windows may also be destroyed directly and not displayed.
For example, if the user selects the first video data corresponding to the actor a, that is, the first video data generated by using the actor a as the tracking subject, and if the actor a moves from one end of the stage to the other end, the generated first video data is the first video data obtained by splicing segments of the plurality of second video data including the actor a in a time sequence or a position sequence.
Through the scheme provided by the embodiment, the plurality of video windows can be displayed to the user in advance, and the first video data corresponding to the selected video window is played according to the selection of the user, so that the user can operate conveniently, and the user experience is further improved.
The video playing method of the present embodiment may be executed by any suitable electronic device with data processing capability, including but not limited to: server, mobile terminal (such as mobile phone, PAD, etc.), PC, etc.
Example four
Referring to fig. 4, which shows a schematic flow chart of a video playing method of the present application, the scheme provided in this embodiment is applied to a server, and as shown in the figure, the scheme includes:
S401, receiving a plurality of second video data, wherein the second video data are a plurality of video data obtained by image acquisition of a target area by image acquisition equipment at a plurality of positions.
For example, in any embodiment of the present application, the image capturing devices at multiple positions may be: the tracking system comprises image acquisition equipment at different angles relative to a target area, image acquisition equipment at different distances relative to the target area, image acquisition equipment for performing image acquisition along with a tracking subject and the like.
"Different angles relative to the target area" may mean, for example, that the angle between a device's acquisition direction and the central axis of the target area is 30 degrees, 60 degrees, and so on. Of course, the foregoing is merely illustrative and is not intended to limit the present application.
Image acquisition devices at different angles relative to the target area can obtain second video data at different viewing angles, thereby ensuring the completeness of the second video data, for example, ensuring that the second video data covers the whole stage as much as possible.
The image capturing devices at different distances with respect to the target area may be an image capturing device for a long shot, an image capturing device for a panoramic view, an image capturing device for a medium shot, an image capturing device for a close shot, or an image capturing device for close-up, etc. By collecting images according to different distances, second video data under different distances can be obtained, and the content played by the user can be richer.
Image acquisition following the tracked subject may be: the image acquisition device moves synchronously with the tracking subject to perform dynamic tracking, so that hot-spot figures and the like can be filmed throughout, and the obtained second video data better meets the user's needs.
In addition, if the position, angle, and the like of the image acquisition device can be adjusted, the video switching instruction may further include acquisition parameters, such as a viewing angle and a viewing distance; the position of the image acquisition device may then be adjusted according to the acquisition parameters, and the adjusted device captures images, so as to obtain second video data meeting the user's requirements and generate first video data meeting the user's requirements.
S402, determining a plurality of first video data with different viewing angles or different tracking subjects according to the plurality of second video data.
For example, in an implementation of the present application, a plurality of second video data collected at the same position and using the same viewing angle may be combined in a time sequence, and first video data corresponding to the viewing angle may be obtained.
Specifically, the image capturing apparatus may capture and upload a video clip (second video data) of 30s at a time, and thereby, the plurality of video clips uploaded by the image capturing apparatus may be temporally combined to obtain the first video data having the viewing angle.
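Assuming each uploaded clip carries its start time, the time-ordered assembly described above can be sketched as:

```python
def assemble_by_time(clips):
    """clips: (start_seconds, clip_id) pairs uploaded in arbitrary order.
    Returns one first-video sequence in playback order."""
    return [clip_id for _, clip_id in sorted(clips)]

# Three 30-second segments from the same fixed-position device,
# arriving out of order:
uploads = [(30, "clip_b"), (0, "clip_a"), (60, "clip_c")]
first_video = assemble_by_time(uploads)
```

The start-time field is an assumption; any monotonic sequence number from the uploading device would serve the same purpose.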
For example, the first video data with different viewing angles may be specifically first video data obtained by splicing second video data acquired at multiple viewing angles. For example, the second video data acquired by the image acquisition device a and the second video data acquired by the image acquisition device B may be spliced to obtain the first video data corresponding to the larger viewing angle; or, if the character a and the character B are interacting, the second video data including the two characters may be spliced to obtain the first video data, so that the user can see the two interacting characters through a picture corresponding to the first video data.
The second video data acquired by image acquisition devices at different distances relative to the target area may be synthesized based on the acquisition distances to obtain the first video data. For example, depth-of-field synthesis can be performed on second video data acquired at different distances, improving the video definition of the resulting first video data; alternatively, the second video data acquired at different distances can be directly superimposed to generate the first video data, so that one frame of the first video data includes a plurality of superimposed sub-pictures, for example, second video data giving a close-up of a face superimposed on second video data showing the whole body.
For example, in another implementation of the present application, a plurality of second video data including the same tracking subject are combined according to a relative positional relationship between positions of image acquisition and a time sequence, and first video data corresponding to the tracking subject is obtained.
For example, the plurality of first video data of different tracked subjects may be video data obtained by synthesizing video segments including the target subject in a time order and a relative positional relationship between positions of image acquisition. For example, a person a moves from the left side of the stage to the right side of the stage, and six image capturing devices respectively capture images of the person a during the movement process, the second video data of the six image capturing devices may be processed, video segments including the person a are edited, and the video segments are spliced according to the relative position relationship and the time sequence between the image capturing positions, so as to obtain first video data corresponding to the movement process of the person a moving from the left side of the stage to the right side of the stage.
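A sketch of the tracked-subject splice just described: pick every segment containing the subject, then order by time, with camera position breaking ties. The data shapes here are assumptions for illustration:

```python
def track_subject(second_videos, subject):
    """second_videos: per-camera dicts with a stage position and timed segments.
    Returns segment IDs spliced into one tracked first video."""
    picked = []
    for video in second_videos:
        for seg in video["segments"]:
            if subject in seg["subjects"]:
                picked.append((seg["t"], video["position"], seg["id"]))
    picked.sort()  # time order first, then camera position left-to-right
    return [seg_id for _, _, seg_id in picked]

# Person "A" moves across the stage; only the first two cameras capture them:
cameras = [
    {"position": 0, "segments": [{"t": 0, "subjects": {"A"}, "id": "cam0_s0"}]},
    {"position": 1, "segments": [{"t": 1, "subjects": {"A"}, "id": "cam1_s1"}]},
    {"position": 2, "segments": [{"t": 2, "subjects": {"B"}, "id": "cam2_s2"}]},
]
tracked_a = track_subject(cameras, "A")
```

In practice the subject sets would come from a recognition step (face or article recognition, as the next paragraphs note) rather than being hand-labelled.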
For example, in any embodiment of the present application, the video data may be processed by using a big data computing capability to obtain at least one piece of video data.
Specifically, the video data may be subjected to face recognition or article recognition through the big data computing power, and processed according to the recognition result; for example, the parts of the video data including the same person or the same article may be combined into one piece of video data, or video data corresponding to adjacent viewing angles may be combined to obtain video data corresponding to a larger viewing angle. Processing the video data with big data computing power can increase the speed of obtaining the video data and reduce the capability requirements for manual broadcast directing and the like.
In addition, the first video data may be obtained by merging second video data by means of depth map reconstruction. For example, for the second video data acquired by two image acquisition devices, the corresponding depth images may be obtained. A depth image, also called a range image, uses the distance from the image acquisition device to each point of the target area as its pixel values, and can directly reflect the geometric shape of the surfaces in the target area. Image transformation matrices corresponding to the second video data can be determined from the depth images of the different second video data, and the second video images are transformed through these matrices so that the same geometric shapes in the second video images overlap directly, thereby merging the second video images.
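The final transformation step — applying an image transformation matrix so that matching geometry overlaps — can be sketched for a single pixel. The 3x3 matrix below is the identity purely for illustration; a real one would be estimated by registering the two depth images:

```python
def warp_point(H, x, y):
    """Apply a 3x3 image transformation matrix H (nested lists) to pixel
    (x, y), using homogeneous coordinates."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

Warping every pixel of one second video into the other's coordinate frame with such a matrix is what lets the shared geometry "directly overlap" as the paragraph describes.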
S403, according to the received video switching instruction, determining corresponding first video data from the plurality of first video data, and sending the corresponding first video data.
In this embodiment, the first video data may be sent to a client corresponding to the video switching instruction, and played by the client to be displayed to the user.
For example, in any embodiment of the present application, step S403 includes: according to the viewing angle information or tracking subject information carried in the video switching instruction, determining, from the plurality of first video data, first video data matching the viewing angle information or first video data matching the tracking subject information; and transmitting the determined first video data.
In this embodiment, after the client receives a video switching instruction of a user, the video switching instruction may be sent to the processing device, and the processing device may query, according to the received video switching instruction, first video data corresponding to the video switching instruction from the plurality of first video data, specifically, may query, according to viewing angle information or tracking subject information included in the video switching instruction, the matched first video data, and send the queried first video data to the client.
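The server-side query in this step can be sketched as a simple match over the recorded metadata; the catalog contents and field names are invented for illustration:

```python
# Hypothetical catalog: each first video's recorded viewing-angle or
# tracking-subject metadata, plus where its stream can be fetched.
CATALOG = [
    {"view_angle": "front_of_stage", "url": "stream/front"},
    {"subject": "actor_a", "url": "stream/actor_a"},
]

def find_first_video(instruction):
    """Return the stream whose viewing-angle or tracking-subject metadata
    matches the switching instruction; None if nothing matches."""
    for entry in CATALOG:
        for key in ("view_angle", "subject"):
            if key in instruction and entry.get(key) == instruction[key]:
                return entry["url"]
    return None
```

A production server would index this lookup rather than scan linearly, but the matching rule is the same.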
In addition, in this embodiment, the server may perform virtual reality processing or augmented reality processing on the first video data obtained by querying according to the video switching instruction, and then send the first video data to the client, so that the client plays the first video data after the virtual reality processing or augmented reality processing.
The video playing method of the present embodiment may be executed by any suitable electronic device with data processing capability, including but not limited to: server, mobile terminal (such as mobile phone, PAD, etc.), PC, etc.
Example five
Referring to fig. 5, a schematic structural diagram of a video playback apparatus of the present application is shown, which includes: a first video receiving module 501 and a video playing module 502.
A first video receiving module 501, configured to receive first video data returned in response to a video switching instruction, where the first video data is determined according to second video data obtained by image acquisition of a target area by image acquisition devices in multiple positions;
the video playing module 502 is configured to play the first video data indicated by the video switching instruction.
In any embodiment of the present application, the apparatus further includes: a window display module, configured to display a plurality of candidate video windows in a video playing interface, wherein each video window is used for displaying video data corresponding to a viewing angle or video data corresponding to a tracking subject; an instruction determining module, configured to determine a video switching instruction according to the video window corresponding to the received selection operation, wherein the video switching instruction includes selected viewing angle information or tracking subject information; and an instruction sending module, configured to send the video switching instruction to a server so that the server returns the first video data corresponding to the video switching instruction.
In any embodiment of the present application, the apparatus further includes: the window amplification module is used for amplifying the selected video window according to the selection operation; the video playing module 502 is specifically configured to: and playing the first video data corresponding to the video switching instruction in the enlarged video window.
In an embodiment of the present application, the presenting a plurality of candidate video windows in a video playing interface includes: according to the received recommendation information, a plurality of candidate video windows are determined from a plurality of preset video windows, and the candidate video windows are displayed in a video playing interface.
For example, in any embodiment of the present application, the video playing module 502 includes: a type determining module, configured to determine type information of the first video data according to the video switching instruction, where the type information includes an augmented reality video type or a virtual reality video type; the processing module is used for performing corresponding augmented reality processing or virtual reality processing on the first video data according to the determined type information and obtaining processed first video data; and the playing module is used for playing the processed first video data.
For example, in any embodiment of the present application, the first video receiving module 501 is specifically configured to: and receiving first video data returned in response to the video switching instruction in the video live broadcasting process.
The video playing apparatus of this embodiment is used to implement the corresponding video playing method in the foregoing multiple method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again. In addition, the functional implementation of each module in the video playing device of this embodiment can refer to the description of the corresponding part in the foregoing method embodiment, and is not repeated here.
According to the scheme provided by this embodiment, a user can input a video switching instruction to the client, so that the first video data displayed by the client can be switched autonomously. The first video data is determined according to second video data obtained by image acquisition of a target area by image acquisition devices at multiple positions; therefore, the second video data obtained by the devices at multiple positions can be switched by the video switching instruction, that is, automatic video-picture directing is performed according to the user's video switching instruction, the pictures the user can watch are varied, the problem in the prior art that the user's viewing content is monotonous is solved, and user experience is improved.
Example six
Referring to fig. 6, a schematic structural diagram of a video playback apparatus of the present application is shown, which includes: a second video receiving module 601, a processing module 602, and a sending module 603.
A second video receiving module 601, configured to receive a plurality of second video data, where the second video data are a plurality of video data obtained by image acquisition of a target area by image acquisition equipment in multiple positions;
a processing module 602, configured to determine, according to the plurality of second video data, a plurality of first video data with different viewing perspectives or different tracking subjects, and determine, according to a received video switching instruction, corresponding first video data from the plurality of first video data;
a sending module 603, configured to send the corresponding first video data.
For example, in any embodiment of the present application, the processing module 602 is specifically configured to perform at least one of the following: synthesizing, in time order, a plurality of second video data collected at the same position with the same viewing angle to obtain first video data corresponding to that viewing angle; and synthesizing a plurality of second video data containing the same tracking subject, according to the relative positional relationship between the image acquisition positions and in time order, to obtain first video data corresponding to that tracking subject.
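A minimal sketch of the first synthesis operation (concatenating, in time order, second video data captured at the same position with the same viewing angle). The clip tuples and `synthesize_by_angle` are illustrative assumptions, with frames reduced to string labels; the tracking-subject variant would additionally order clips by the relative positions of the cameras.

```python
from collections import defaultdict

# Each second-video-data clip: (position, viewing_angle, subject, start_time, frames)
clips = [
    ("pos1", "front", "anchor", 2, ["f3", "f4"]),
    ("pos1", "front", "anchor", 0, ["f1", "f2"]),
    ("pos2", "side",  "anchor", 1, ["s1", "s2"]),
]

def synthesize_by_angle(clips):
    """Concatenate, in time order, the clips captured at the same position
    with the same viewing angle -> one first video per (position, angle)."""
    groups = defaultdict(list)
    for pos, angle, _subject, start, frames in clips:
        groups[(pos, angle)].append((start, frames))
    return {
        key: [f for _start, frames in sorted(parts) for f in frames]
        for key, parts in groups.items()
    }

print(synthesize_by_angle(clips))
# {('pos1', 'front'): ['f1', 'f2', 'f3', 'f4'], ('pos2', 'side'): ['s1', 's2']}
```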
For example, in any embodiment of the present application, the sending module 603 includes: the matching module is used for determining first video data matched with the viewing angle information or first video data matched with the tracking subject information from the plurality of first video data according to the viewing angle information or the tracking subject information carried in the video switching instruction; and the sending submodule is used for sending the determined first video data.
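The matching-module behavior — selecting, from the candidate first video data, the one whose metadata matches the viewing-angle or tracking-subject information carried in the instruction — could be sketched as follows. The dictionary-based records and `match_first_video` are hypothetical, not an API defined by the patent.

```python
def match_first_video(first_videos, instruction):
    """Matching submodule: return the first video data whose metadata matches
    the viewing-angle or tracking-subject info carried in the instruction."""
    for video in first_videos:
        if instruction.get("viewing_angle") is not None:
            if video["viewing_angle"] == instruction["viewing_angle"]:
                return video
        elif video.get("tracking_subject") == instruction.get("tracking_subject"):
            return video
    return None  # no candidate matched the instruction

first_videos = [
    {"id": "fv-1", "viewing_angle": "front", "tracking_subject": None},
    {"id": "fv-2", "viewing_angle": None, "tracking_subject": "anchor"},
]
print(match_first_video(first_videos, {"viewing_angle": "front"})["id"])    # fv-1
print(match_first_video(first_videos, {"tracking_subject": "anchor"})["id"])  # fv-2
```

The sending submodule would then transmit the selected record's stream to the client.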
The video playing apparatus of this embodiment implements the corresponding video playing method of the foregoing method embodiments and has the beneficial effects of those embodiments, which are not repeated here. For the functional implementation of each module in the video playing apparatus of this embodiment, reference may be made to the description of the corresponding parts of the foregoing method embodiments.
According to the scheme provided by this embodiment, a user can input a video switching instruction to the client and thereby autonomously switch the first video data displayed by the client. Because the first video data is determined according to second video data obtained by image acquisition devices at multiple positions capturing images of the target area, the video switching instruction can switch among the second video data from those multiple positions; that is, video picture directing is performed automatically according to the user's video switching instruction. The pictures available to the user are therefore varied, which solves the prior-art problem of a single viewing content and improves the user experience.
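The claims also recite cropping and splicing the second video data so that the anchor's face or hand stays within the picture. A toy sketch under the assumption that face/hand bounding boxes are already available (e.g., from an upstream detector the patent does not specify); frames are simplified to 2-D lists of pixel values, and `crop_and_splice` is an illustrative name.

```python
def crop_and_splice(frames_with_boxes):
    """Crop each frame around its face/hand bounding box so the subject
    stays in picture, then splice the crops into one first-video sequence.
    Each item: (frame, (row, col, height, width)); boxes assumed precomputed."""
    spliced = []
    for frame, (r, c, h, w) in frames_with_boxes:
        crop = [row[c:c + w] for row in frame[r:r + h]]
        spliced.append(crop)
    return spliced

# An 8x8 toy frame whose pixel value encodes its coordinates (value = col + 10*row).
frame = [[x + 10 * y for x in range(8)] for y in range(8)]
print(crop_and_splice([(frame, (2, 3, 2, 2))]))
# [[[23, 24], [33, 34]]]
```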
Example Seven
Referring to fig. 7, a schematic structural diagram of an electronic device according to a seventh embodiment of the present application is shown, and the specific embodiment of the present application does not limit a specific implementation of the electronic device.
As shown in fig. 7, the electronic device may include: an input device 701, a processor 702, a display device 703.
Wherein:
an input device 701 for receiving a video switching instruction.
The input device 701 may specifically include a physical input device, such as a handle, a button, a mouse, and the like, and may also include a voice input device, which is not limited in this embodiment.
The processor 702 is configured to receive first video data returned in response to the video switching instruction, where the first video data is determined according to second video data obtained by image capture devices at multiple positions capturing images of the target area.
The display device 703 is configured to play the first video data indicated by the video switching instruction.
The electronic device may also include a communication interface 704 for communicating with other electronic devices or servers.
The electronic device may further include a communication bus 705, and the input device 701, the processor 702, the display device 703 and the communication interface 704 may communicate with each other through the communication bus 705.
The processor 702 is configured to execute the program 706, and may specifically execute relevant steps in the above-described video playing method embodiment.
In particular, the program 706 may include program code comprising computer operating instructions.
The processor 702 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present application. The electronic device includes one or more processors, which may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The electronic device may also include a memory 707 for storing the program 706. The memory 707 may include high-speed RAM, and may also include non-volatile memory, such as at least one disk storage.
For specific implementation of each step in the program 706, reference may be made to corresponding steps and corresponding descriptions in units in the foregoing embodiments of the video playing method, which are not described herein again. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described devices and modules may refer to the corresponding process descriptions in the foregoing method embodiments, and are not described herein again.
Example Eight
Referring to fig. 8, a schematic structural diagram of an electronic device according to an eighth embodiment of the present application is shown, and the specific embodiment of the present application does not limit a specific implementation of the electronic device.
As shown in fig. 8, the electronic device may include: an input device 801, a processor 802, an output device 803.
The input device 801 is configured to receive a plurality of second video data, where the second video data are a plurality of video data obtained by image capture devices at multiple positions capturing images of a target area; the input device 801 may also be configured to receive a video switching instruction.
A processor 802, configured to determine, according to the plurality of second video data, a plurality of first video data having different viewing perspectives or different tracking subjects, and, according to the received video switching instruction, determine corresponding first video data from the plurality of first video data.
The processor 802 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present application. The electronic device includes one or more processors, which may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
An output device 803, configured to send the corresponding first video data.
The electronic device may also include a communication interface 804 for communicating with other electronic devices or servers.
The electronic device may further include a communication bus 805, and the input device 801, the processor 802, the output device 803 and the communication interface 804 may communicate with each other through the communication bus 805.
The processor 802 is configured to execute the program 806, and may specifically execute relevant steps in the above-described video playing method embodiment.
In particular, the program 806 may include program code comprising computer operating instructions.
The electronic device may also include a memory 807 for storing the program 806. The memory 807 may include high-speed RAM, and may also include non-volatile memory, such as at least one disk storage.
For specific implementation of each step in the program 806, reference may be made to corresponding steps and corresponding descriptions in units in the foregoing embodiments of the video playing method, which are not described herein again. It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described devices and modules may refer to the corresponding process descriptions in the foregoing method embodiments, and are not described herein again.
It should be noted that, according to the implementation requirement, each component/step described in the embodiment of the present application may be divided into more components/steps, and two or more components/steps or partial operations of the components/steps may also be combined into a new component/step to achieve the purpose of the embodiment of the present application.
The above-described methods according to embodiments of the present application may be implemented in hardware or firmware, or as software or computer code that can be stored in a recording medium such as a CD-ROM, RAM, floppy disk, hard disk, or magneto-optical disk, or as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium and downloaded through a network for storage in a local recording medium. The methods described herein can thus be realized, via such software on a recording medium, using a general-purpose computer, a dedicated processor, or programmable or dedicated hardware such as an ASIC or FPGA. It will be appreciated that the computer, processor, microprocessor controller, or programmable hardware includes memory components (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code that, when accessed and executed by the computer, processor, or hardware, implements the video playing methods described herein. Further, when a general-purpose computer accesses code for implementing the video playing methods shown herein, execution of the code transforms the general-purpose computer into a special-purpose computer for performing those methods.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The above embodiments are only used for illustrating the embodiments of the present application, and not for limiting the embodiments of the present application, and those skilled in the art can make various changes and modifications without departing from the spirit and scope of the embodiments of the present application, so that all equivalent technical solutions also belong to the scope of the embodiments of the present application, and the scope of the patent protection of the embodiments of the present application should be defined by the claims.

Claims (13)

1. A video playback method, comprising:
receiving first video data returned in response to a video switching instruction, wherein the first video data is determined according to second video data obtained by image acquisition devices at multiple positions capturing images of a target area, and the first video data is obtained by cropping and splicing a plurality of the second video data such that a face or a hand of an anchor is positioned within the picture;
and playing the first video data indicated by the video switching instruction.
2. The method of claim 1, wherein prior to receiving the first video data returned in response to the video switching instruction, the method further comprises:
displaying a plurality of candidate video windows in a video playing interface, wherein each video window is used for displaying video data corresponding to a viewing angle or video data corresponding to a tracking subject;
determining a video switching instruction according to the video window corresponding to the received selection operation, wherein the video switching instruction comprises selected viewing angle information or tracking subject information;
and sending the video switching instruction to a server so that the server returns the first video data corresponding to the video switching instruction.
3. The method of claim 2, wherein prior to receiving the first video data returned in response to the video switching instruction, the method further comprises:
enlarging the selected video window according to the selection operation;
the playing the first video data indicated by the video switching instruction comprises:
and playing the first video data corresponding to the video switching instruction in the enlarged video window.
4. The method of claim 2, wherein said presenting a plurality of candidate video windows in a video playback interface comprises:
determining, according to received recommendation information, a plurality of candidate video windows from a plurality of preset video windows, and displaying the candidate video windows in the video playing interface.
5. The method of claim 1, wherein the playing the first video data indicated by the video switching instruction comprises:
determining type information of the first video data according to the video switching instruction, wherein the type information comprises an augmented reality video type or a virtual reality video type;
performing corresponding augmented reality processing or virtual reality processing on the first video data according to the determined type information, and obtaining processed first video data;
and playing the processed first video data.
6. The method of claim 1, wherein the receiving the first video data returned in response to the video switching instruction comprises:
receiving, during live video broadcasting, first video data returned in response to the video switching instruction.
7. A video playback method, comprising:
receiving a plurality of second video data, wherein the second video data are a plurality of video data obtained by image acquisition of a target area by image acquisition equipment at a plurality of positions;
determining a plurality of first video data having different viewing perspectives or different tracked subjects from the plurality of second video data;
determining corresponding first video data from the plurality of first video data according to the received video switching instruction, and sending the corresponding first video data;
the determining, from the plurality of second video data, a plurality of first video data having different viewing perspectives or different tracked subjects comprises: cropping and splicing the plurality of second video data to obtain first video data in which the face or hand of the anchor is positioned within the picture.
8. The method of claim 7, wherein the determining corresponding first video data from the plurality of first video data and transmitting the corresponding first video data according to the received video switching instruction comprises:
determining, according to viewing angle information or tracking subject information carried in the video switching instruction, first video data matched with the viewing angle information or first video data matched with the tracking subject information from the plurality of first video data;
and transmitting the determined first video data.
9. A video playback apparatus comprising:
the first video receiving module is configured to receive first video data returned in response to a video switching instruction, wherein the first video data is determined according to second video data obtained by image acquisition devices at multiple positions capturing images of a target area, and the first video data is obtained by cropping and splicing a plurality of the second video data such that a face or a hand of an anchor is positioned within the picture;
and the video playing module is used for playing the first video data indicated by the video switching instruction.
10. A video playback apparatus comprising:
the second video receiving module is used for receiving a plurality of second video data, wherein the second video data are a plurality of video data obtained by image acquisition of a target area by image acquisition equipment at a plurality of positions correspondingly;
the processing module is used for determining a plurality of first video data with different viewing visual angles or different tracking subjects according to the plurality of second video data, and determining corresponding first video data from the plurality of first video data according to a received video switching instruction;
the sending module is used for sending the corresponding first video data;
the processing module is specifically configured to crop and splice the plurality of second video data to obtain the first video data in which the face or hand of the anchor is positioned within the picture.
11. An electronic device, comprising:
the input equipment is used for receiving the video switching instruction;
the processor is configured to receive first video data returned in response to a video switching instruction, wherein the first video data is determined according to second video data obtained by image acquisition devices at multiple positions capturing images of a target area, and the first video data is obtained by cropping and splicing a plurality of the second video data such that a face or a hand of an anchor is positioned within the picture;
and the display equipment is used for playing the first video data indicated by the video switching instruction.
12. An electronic device, comprising:
the input device is used for receiving a plurality of second video data, wherein the second video data are a plurality of video data obtained by correspondingly acquiring images of a target area by image acquisition equipment at a plurality of positions;
a processor, configured to determine, according to the plurality of second video data, a plurality of first video data having different viewing perspectives or different tracking subjects, and configured to determine, according to a received video switching instruction, corresponding first video data from the plurality of first video data;
the output equipment is used for sending the corresponding first video data;
the processor is specifically configured to crop and splice the plurality of second video data to obtain the first video data in which the face or hand of the anchor is positioned within the picture.
13. A computer storage medium having stored thereon a computer program which, when executed by a processor, implements a video playback method as claimed in any one of claims 1 to 8.
CN202010632481.3A 2020-07-03 2020-07-03 Video playing method and device, electronic equipment and computer storage medium Active CN113301351B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010632481.3A CN113301351B (en) 2020-07-03 2020-07-03 Video playing method and device, electronic equipment and computer storage medium

Publications (2)

Publication Number Publication Date
CN113301351A CN113301351A (en) 2021-08-24
CN113301351B true CN113301351B (en) 2023-02-24

Family

ID=77318161


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115733993A (en) * 2021-08-27 2023-03-03 北京字节跳动网络技术有限公司 Network live broadcast method, device, storage medium and electronic equipment
CN113784149B (en) * 2021-09-10 2023-09-19 咪咕数字传媒有限公司 Method, device and equipment for displaying heat region of video signal
CN114025183B (en) * 2021-10-09 2024-05-14 浙江大华技术股份有限公司 Live broadcast method, device, equipment, system and storage medium
CN113938711A (en) * 2021-10-13 2022-01-14 北京奇艺世纪科技有限公司 Visual angle switching method and device, user side, server and storage medium
CN114189542A (en) * 2021-11-23 2022-03-15 阿里巴巴(中国)有限公司 Interaction control method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106791906A (en) * 2016-12-31 2017-05-31 北京星辰美豆文化传播有限公司 A kind of many people's live network broadcast methods, device and its electronic equipment
CN106937128A (en) * 2015-12-31 2017-07-07 幸福在线(北京)网络技术有限公司 A kind of net cast method, server and system and associated uses
CN107547940A (en) * 2017-09-13 2018-01-05 广州酷狗计算机科技有限公司 Video playback processing method, equipment and computer-readable recording medium
CN108965907A (en) * 2018-07-11 2018-12-07 北京字节跳动网络技术有限公司 For playing the methods, devices and systems of video
CN109889914A (en) * 2019-03-08 2019-06-14 腾讯科技(深圳)有限公司 Video pictures method for pushing, device, computer equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant