CN111683281A - Video playing method and device, electronic equipment and storage medium

Video playing method and device, electronic equipment and storage medium

Info

Publication number
CN111683281A
Authority
CN
China
Prior art keywords
playing
video
scene picture
virtual reality
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010502116.0A
Other languages
Chinese (zh)
Inventor
杨广煜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010502116.0A priority Critical patent/CN111683281A/en
Publication of CN111683281A publication Critical patent/CN111683281A/en
Pending legal-status Critical Current

Classifications

    • H04N21/4312 Generation of visual interfaces for content selection or interaction; content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/43637 Adapting the video stream to a specific local network, e.g. a Bluetooth network, involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/4586 Content update operation triggered locally, e.g. by comparing the version of software modules in a DVB carousel to the version stored locally
    • H04N21/472 End-user interface for requesting content, additional data or services; end-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/8586 Linking data to content by using a URL, e.g. by linking an URL to a video object, by creating a hotspot

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application discloses a video playing method and device, electronic equipment and a storage medium. The method comprises the following steps: playing a video; in the video playing process, obtaining a target scene picture in response to a mode switching instruction, wherein the target scene picture is a virtual reality scene picture corresponding to the current playing content, and the virtual reality scene picture corresponds to a specified area included in the playing content; and playing the target scene picture. By establishing a correspondence between the playing content of a video and the virtual reality scene picture corresponding to the designated area included in that playing content, the electronic equipment can quickly display a scene area in the video through virtual reality technology during playback. This enriches the ways in which the electronic equipment can present scene areas during video playing and improves the user's experience when watching the video.

Description

Video playing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of video technologies, and in particular, to a video playing method and apparatus, an electronic device, and a storage medium.
Background
With the development of video technology, more and more electronic devices are equipped with video players for playing videos. A video may contain scene areas, for example a forest or some buildings. However, the video played by an electronic device is generally produced in a 2D format, so the electronic device cannot present these scene areas well visually, and the user cannot fully experience the scene areas shown in the video content.
Disclosure of Invention
In view of the above, the present application provides a video playing method, an apparatus, an electronic device, and a storage medium to address the above problem.
In one aspect, the present application provides a video playing method, where the method includes: playing a video; in the video playing process, obtaining a target scene picture in response to a mode switching instruction, wherein the target scene picture is a virtual reality scene picture corresponding to the current playing content, and the virtual reality scene picture corresponds to a specified area included in the playing content; and playing the target scene picture.
In another aspect, the present application provides a video playback device, including a video playing unit, a virtual scene obtaining unit, and a virtual scene playing unit. The video playing unit is used for playing a video; the virtual scene obtaining unit is configured to obtain a target scene picture in response to a mode switching instruction during playback of the video, where the target scene picture is a virtual reality scene picture corresponding to the current playing content and the virtual reality scene picture corresponds to a specified area included in the playing content; and the virtual scene playing unit is used for playing the target scene picture.
In another aspect, the present application provides an electronic device comprising a processor and a memory; one or more programs are stored in the memory and configured to be executed by the processor to implement the methods described above.
In another aspect, the present application provides a computer-readable storage medium having program code stored therein, wherein the method described above is performed when the program code is executed by a processor.
According to the video playing method and device, the electronic device, and the storage medium, when the playing content of a video corresponds to a virtual reality scene picture, the virtual reality scene picture corresponding to the current playing content can be obtained as a target scene picture in response to a mode switching instruction during playback, and the target scene picture is then played. By establishing a correspondence between the video playing content and the virtual reality scene picture corresponding to the designated area included in that playing content, playback of the virtual reality scene picture corresponding to the designated area can be triggered through the mode switching instruction during video playing. The electronic equipment can therefore quickly display a scene area in the video through virtual reality technology, which enriches the ways the electronic equipment can present scene areas during video playing, allows the user to better experience the scene area in the playing content visually, and improves the user's experience when watching the video.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic diagram illustrating an application scenario of a video playing method according to an embodiment of the present application;
fig. 2 is a schematic diagram illustrating another application scenario of a video playing method according to an embodiment of the present application;
fig. 3 is a flowchart illustrating a video playing method according to an embodiment of the present application;
FIG. 4 is a diagram illustrating player invocation proposed by an embodiment of the present application;
fig. 5 is a flowchart illustrating a video playing method according to still another embodiment of the present application;
fig. 6 is a schematic diagram illustrating a prompt interface according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram illustrating an icon as a prompt message according to an embodiment of the present application;
fig. 8 is a schematic diagram illustrating a trigger to switch to playing a virtual reality scene according to an embodiment of the present application;
FIG. 9 is a diagram illustrating a first prompt identifier proposed by an embodiment of the present application;
FIG. 10 is a schematic diagram of the interface of FIG. 9 after the first prompt identifier is touched;
FIG. 11 is a diagram illustrating a second prompt identifier proposed by an embodiment of the present application;
FIG. 12 is a schematic interface diagram illustrating the electronic device in the portrait mode in an embodiment of the present application;
fig. 13 is a schematic diagram illustrating a display manner of a plurality of prompt identifiers to be selected when the electronic device is in the portrait mode in the embodiment of the present application;
fig. 14 is a schematic diagram illustrating comparison of a first prompt identifier, a second prompt identifier, and a progress identifier according to an embodiment of the present application;
fig. 15 is a flowchart illustrating a video playing method according to another embodiment of the present application;
fig. 16 is a schematic diagram illustrating selection of a target scene picture according to an embodiment of the present application;
fig. 17 is a flowchart illustrating a video playing method according to another embodiment of the present application;
fig. 18 is a flowchart illustrating a video playing method according to another embodiment of the present application;
fig. 19 is a block diagram illustrating a structure of a video playback apparatus according to an embodiment of the present application;
fig. 20 is a block diagram illustrating a video playback apparatus according to another embodiment of the present application;
fig. 21 is a block diagram illustrating another electronic device for executing a video playback method according to an embodiment of the present application;
fig. 22 illustrates a storage unit for storing or carrying program codes for implementing a video playing method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
With the improvement of the performance of electronic devices and the development of video technology, more electronic devices support video playing. For example, a client for playing video may be installed on an electronic device, and during operation the client may obtain the video locally or from the network and play it. Some videos include scene areas. A scene area can be understood as the location in which the objects in the video are situated. For example, a person or other object in the video may be in a forest or inside a building; in some martial-arts videos, the characters may even be in the sky. The forest, the buildings and the sky can all be understood as scenes included in the video.
However, the inventor found in research that the video played by an electronic device is usually in a 2D format, so the user cannot experience the scene areas included in the video well. To address this problem, the inventor studied how to improve the visual experience of the scene areas in a video by means of virtual reality technology. Virtual Reality (VR) technology, also called smart environment technology, combines computer, electronic information, and simulation techniques; its basic implementation is that a computer simulates a virtual environment so as to give people a sense of environmental immersion. On this basis, the inventor proposes the video playing method and apparatus, the electronic device, and the storage medium provided by the present application. In the method, a correspondence is established between the playing content of a video and a virtual reality scene picture corresponding to a specified region included in that playing content, so that during video playing a mode switching instruction can trigger playback of the virtual reality scene picture corresponding to the specified region, allowing the user to better experience the specified region visually and improving the user's experience when watching the video. The designated area referred to in the embodiments of the present application may be part or all of the aforementioned forest, building, sky, and the like; that is, it may be a partial area of a scene area included in the video playing content or the whole of that scene area.
The following description is provided to an application environment according to an embodiment of the present application.
Referring to fig. 1, the application environment shown in fig. 1 includes a server 100 and an electronic device 200. The electronic device 200 and the server 100 communicate with each other via a network. Optionally, the video playing method proposed in each embodiment of the present application may be executed by the electronic device 200, by the server 100, or jointly by the electronic device 200 and the server 100; in the following embodiments, execution by the electronic device 200 is mainly taken as an example for description.
Optionally, as shown in fig. 2, the application environment according to the embodiment of the present application may further include a virtual reality playing device 300 and a control device 400. The virtual reality playing device 300 and the control device 400 can communicate with the electronic device 200 through Wi-Fi, Bluetooth, infrared, or other communication methods. The control device 400 may transmit a control instruction to the electronic device 200, for example an instruction that triggers the electronic device 200 to switch the played content from a 2D-format video to a virtual reality scene picture. The virtual reality playing device 300 may receive the virtual reality scene picture transmitted by the electronic device 200 and display it.
It should be noted that the server 100 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform. The electronic device 200 may be, but is not limited to, a smart tv, a smart phone, a tablet computer, a notebook computer, a desktop computer, and the like. In addition, the virtual reality playing device 300 may be VR glasses, and the control device 400 may be a remote controller.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 3, fig. 3 is a flowchart illustrating a video playing method according to an embodiment of the present application, where the method includes:
S110: Play the video.
The electronic equipment can display the playing content of the video in the process of playing the video.
S120: in the playing process of the video, a target scene picture is obtained in response to the mode switching instruction, the target scene picture is a virtual reality scene picture corresponding to the current playing content, and the virtual reality scene picture corresponds to a specified area included in the playing content.
As one approach, a video may be composed of multiple video segments. Optionally, each video clip includes several seconds, tens of seconds, or even longer playing content. In this way, the currently played content can be understood as the played content corresponding to the currently played video clip.
As another way, the currently played content may be understood as content corresponding to the current frame included in the video. It should be noted that the video may be composed of multiple frames, in this way, in the process of playing the video, the electronic device may play the video frame by frame, and correspondingly, the picture content of the frame currently being played may be used as the current playing content.
The virtual reality scene picture corresponding to the playing content corresponds to a specified region included in that playing content: the picture content of the virtual reality scene picture can be understood as the content of the specified region rendered in a virtual reality format. For example, if the specified area included in the playing content is a forest, the picture content of the corresponding virtual reality scene picture is that forest in a virtual reality format. For another example, if the specified area included in the playing content is a palace, the picture content of the corresponding virtual reality scene picture is that palace in a virtual reality format.
S130: Play the target scene picture.
In this embodiment, the target scene picture may be played in various ways.
As one way, the target scene picture may be played by the electronic device itself. It should be noted that the video played as described above is in a 2D format while the target scene picture is in a virtual reality format, so the electronic device may call different players for the two. The electronic device can call a system player to play the video, and call a virtual reality player when the target scene picture needs to be played. Illustratively, as shown in fig. 4, the electronic device may invoke different players as needed. Both the system player and the virtual reality player are software modules.
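For illustration only, the player-switching logic described above can be sketched as follows in Kotlin. The SystemPlayer, VrPlayer, and VideoPlaybackController names are hypothetical and are not defined by this application; the sketch merely shows a controller handing playback from a 2D system player to a virtual reality player when a target scene picture is selected.

    // Hypothetical sketch of calling different players as needed (cf. fig. 4).
    interface Player {
        fun play(source: String)
        fun stop()
    }

    class SystemPlayer : Player {   // plays the ordinary 2D video
        override fun play(source: String) = println("2D playback of $source")
        override fun stop() = println("2D playback stopped")
    }

    class VrPlayer : Player {       // renders a virtual reality scene picture
        override fun play(source: String) = println("VR playback of $source")
        override fun stop() = println("VR playback stopped")
    }

    class VideoPlaybackController {
        private val systemPlayer = SystemPlayer()
        private val vrPlayer = VrPlayer()
        private var active: Player = systemPlayer

        fun playVideo(url: String) {
            active.stop()
            active = systemPlayer
            active.play(url)
        }

        // Called when a mode switching instruction yields a target scene picture.
        fun switchToVrScene(sceneUrl: String) {
            active.stop()
            active = vrPlayer
            active.play(sceneUrl)
        }
    }

    fun main() {
        val controller = VideoPlaybackController()
        controller.playVideo("episode1.mp4")         // ordinary 2D playback
        controller.switchToVrScene("forest_vr.mp4")  // target scene picture
    }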
Alternatively, the target scene picture may be transmitted to a virtual reality playing device for playing. In this way, the electronic device may transmit the target scene picture to the virtual reality playing device through its network module in a wireless communication manner; after receiving the target scene picture, the virtual reality playing device plays it, so that the user can experience the specified area included in the currently played content in an immersive manner through the virtual reality playing device.
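A minimal sketch of the hand-off to an external virtual reality playing device might look as follows. The host, port, and plain-text message format are assumptions made purely for illustration, since the application only requires that the transmission use a wireless communication manner.

    import java.net.Socket

    // Hypothetical: push the target scene picture's address to a VR playing
    // device on the local network. "PLAY_VR_SCENE" is an invented message format.
    fun sendSceneToVrDevice(sceneUrl: String, deviceHost: String, devicePort: Int) {
        Socket(deviceHost, devicePort).use { socket ->
            socket.getOutputStream().bufferedWriter().use { writer ->
                writer.write("PLAY_VR_SCENE $sceneUrl\n")
                writer.flush()
            }
        }
    }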
In the video playing method provided by this embodiment, when the playing content of the video corresponds to a virtual reality scene picture and a mode switching instruction is detected during playback, the virtual reality scene picture corresponding to the current playing content is obtained as the target scene picture in response to the instruction, and the target scene picture is played. By establishing a correspondence between the video playing content and the virtual reality scene picture corresponding to the designated area included in that playing content, the mode switching instruction can trigger playback of the virtual reality scene picture corresponding to the designated area during video playing. The electronic equipment can therefore quickly display the scene area in the video through virtual reality technology, which enriches the ways the electronic equipment can present scene areas during video playing, allows the user to better experience the designated area visually, and improves the user's experience when watching the video.
Referring to fig. 5, fig. 5 is a flowchart illustrating a video playing method according to an embodiment of the present application, where the method includes:
S210: Play the video.
As described above, the video may be played by the electronic device, and the electronic device may start playing the video in various ways.
As one approach, the electronic device may display a video selection interface upon startup. Video identifiers corresponding to the videos to be selected are displayed in the video selection interface, and when a video identifier is detected to be selected, the video corresponding to it is played. Optionally, when a video identifier is detected to be selected, the currently displayed video selection interface may be switched to a video playing interface, and the video corresponding to the selected video identifier is played in that interface. Optionally, the video identifier may be a screenshot from the playing content of the video to be selected, or the video name of the video to be selected.
Alternatively, the electronic device may detect after startup whether there is an unfinished playback task. If an unfinished playback task is detected, it can be resumed so as to play the corresponding video. Optionally, when the electronic device detects at startup that there is a video whose playback has not been completed, it determines that an unfinished playback task exists and may display the prompt interface 10 shown in fig. 6; when it detects that the user chooses to continue playing in the prompt interface 10, the unfinished playback task is resumed to play the corresponding video, and when it detects that the user declines to continue playing in the prompt interface, the video selection interface described above is displayed.
In this embodiment, one way of detecting whether there is an unfinished playback task is as follows. The electronic device may maintain a video playing status file that records the videos played historically and the playing status of each of them, the playing status indicating whether playback has been completed. The electronic device can then determine which videos are in the unfinished state by querying the video playing status file. The video playing status file may also store the last playing time of each historically played video; after the electronic device obtains the videos in the unfinished state, the unfinished video whose last playing time is closest to the current time is taken as the video corresponding to the unfinished playback task. Illustratively, suppose the unfinished videos obtained from the video playing status file are video A and video B, the last playing time of video A is 10:10:10, the last playing time of video B is 11:10:10, and the current time is 12:00. The electronic device will then identify video B as the video corresponding to the unfinished playback task and, at startup, remind the user whether to continue playing video B.
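The selection of the video to resume can be sketched as follows; the PlaybackRecord structure and its field names are assumptions, since the application only states that the status file records each video's playing status and last playing time.

    // Pick the unfinished video whose last playing time is closest to now.
    data class PlaybackRecord(
        val videoId: String,
        val finished: Boolean,
        val lastPlayedEpochSecond: Long
    )

    fun findUnfinishedTask(records: List<PlaybackRecord>): PlaybackRecord? =
        records.filter { !it.finished }
            .maxByOrNull { it.lastPlayedEpochSecond }

    fun main() {
        val records = listOf(
            PlaybackRecord("videoA", finished = false, lastPlayedEpochSecond = 36_610L), // 10:10:10
            PlaybackRecord("videoB", finished = false, lastPlayedEpochSecond = 40_210L), // 11:10:10
            PlaybackRecord("videoC", finished = true,  lastPlayedEpochSecond = 42_000L)
        )
        println(findUnfinishedTask(records)?.videoId)   // prints "videoB"
    }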
S220: When the playing content of the video corresponds to a virtual reality scene picture, display prompt information in the interface for playing the video.
It should be noted that the prompt information is used to inform the user that certain designated areas included in the playing content of the currently played video correspond to virtual reality scene pictures, so the user can choose whether to play those virtual reality scene pictures as needed.
In this embodiment, the virtual reality scene picture corresponding to the playing content of the video may be prompted at several different times.
As one mode, when the playing content of the video corresponds to a virtual reality scene picture, the prompt message may be displayed in the interface for playing the video at the moment when the video starts playing. In this way, the user can know earlier which designated areas in the currently played video correspond to the virtual reality scene pictures.
Alternatively, when the currently played content of the video corresponds to a virtual reality scene screen, a prompt message may be displayed in the interface for playing the video.
In one mode, the virtual reality scene picture corresponding to the playing content of the video is associated with the video in advance, so that the playing content and the virtual reality scene picture correspond to each other. The correspondence between the playing content and the corresponding virtual reality scene picture can be established based on the playing time of the playing content, i.e. the playing time corresponding to each frame of the playing content. When a video clip included in the video is used as the playing content, the playing time corresponding to that playing content can be understood as the playing times of the multiple frames included in it. For example, if the specified region of a peach forest appears at 10 minutes 10 seconds of the video, a virtual reality scene picture whose content is the peach forest may be associated with the video at 10 minutes 10 seconds, where 10 minutes 10 seconds is the aforementioned playing time.
In this embodiment, there may be multiple ways to detect whether the played video corresponds to a virtual reality scene picture. As one mode, the correspondence between the playing content and the virtual reality scene picture may be configured in the video stream corresponding to the video. The correspondence includes the scene identification information of the virtual reality scene picture corresponding to the playing content and the specific playing time of the playing content to which the virtual reality scene picture corresponds. The scene identification information may be a URL (Uniform Resource Locator) of the virtual reality scene picture.
It should be noted that, when the video is obtained from the network and played, the network end encapsulates the video based on a specified transport protocol to obtain a video stream. The obtained video stream includes two parts: one part is play control information and the other part includes the playing content of the video. The play control information carries description information about the playing content. The correspondence between the playing content and the virtual reality scene picture can therefore be configured in the play control information, and after the video stream is acquired, whether the playing content corresponds to a virtual reality scene, and which playing content corresponds to it, can be obtained from the play control information of the video stream. Optionally, the HLS (HTTP Live Streaming) protocol may be used as the specified transport protocol; correspondingly, the m3u8 file in the data stream generated based on the HLS protocol serves as the play control information, and the correspondence between the playing content and the virtual reality scene picture is configured in the m3u8 file.
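As a rough illustration, the correspondence might be carried in the m3u8 playlist as a custom tag and read back on the client. The #EXT-X-VR-SCENE tag and its TIME/URI attributes are invented here for illustration; neither the HLS specification nor this application defines such a tag.

    data class VrSceneEntry(val playTimeSeconds: Int, val sceneUrl: String)

    // Parse hypothetical "#EXT-X-VR-SCENE:TIME=...,URI=..." lines from an m3u8 file.
    fun parseVrSceneEntries(m3u8Text: String): List<VrSceneEntry> =
        m3u8Text.lineSequence()
            .filter { it.startsWith("#EXT-X-VR-SCENE:") }
            .map { line ->
                val attrs = line.removePrefix("#EXT-X-VR-SCENE:")
                    .split(",")
                    .associate { attr ->
                        val (key, value) = attr.split("=", limit = 2)
                        key to value.trim('"')
                    }
                VrSceneEntry(attrs.getValue("TIME").toInt(), attrs.getValue("URI"))
            }
            .toList()

    fun main() {
        val playlist = """
            #EXTM3U
            #EXT-X-VR-SCENE:TIME=610,URI="https://example.com/vr/peach_forest.mp4"
            #EXTINF:10,
            segment0.ts
        """.trimIndent()
        // 610 s corresponds to the 10-minute-10-second playing time in the example above.
        println(parseVrSceneEntries(playlist))
    }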
Besides being placed in the play control information, the correspondence between the playing content and the virtual reality scene picture can also be configured in the part of the video stream that stores the playing content. For example, when the HLS protocol is used as the specified transport protocol, the resulting video stream includes two parts, namely the m3u8 file and a TS (transport stream) stream, where the TS stream includes, in addition to the video content, information necessary for identifying and transporting the video stream; for example, the TS stream also has a TS header field and an adaptation field. The correspondence between the playing content and the virtual reality scene picture can thus be configured in the information necessary for identifying and transporting the video stream.
It should be noted that adding a new field so that the video stream carries the correspondence between the playing content and the virtual reality scene picture would introduce additional transmission overhead. As a way to save transmission overhead, the correspondence can be transmitted without adding any new field: it is carried in an existing field, and a separator is added to divide it from the content originally carried in that field, so that the electronic device can extract the correspondence from the existing field. For example, in a video stream encapsulated based on the HLS protocol, suppose the value of the adaptation field is aa and the correspondences to be carried are correspondence 1 and correspondence 2. With the separator "&", the value of the adaptation field may be configured as aa&correspondence 1&correspondence 2, so that the electronic device can parse out both the field's own value and the correspondences between the playing content of the video and the virtual reality scene pictures by means of the separator "&".
It should be noted that correspondence 1 and correspondence 2 are only exemplary. For example, correspondence 1 may include a first playing time and the scene identification information of the virtual reality scene picture corresponding to the first playing time, and correspondence 2 may include a second playing time and the scene identification information of the virtual reality scene picture corresponding to the second playing time.
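As a minimal sketch of this separator approach, the parsing could look as follows; the "time=scene" notation for a correspondence is used purely for illustration.

    // Split an existing field value such as "aa&correspondence1&correspondence2"
    // into the field's own value and the carried correspondences.
    fun splitAdaptationField(raw: String): Pair<String, List<String>> {
        val parts = raw.split("&")
        return parts.first() to parts.drop(1)
    }

    fun main() {
        val (original, correspondences) = splitAdaptationField("aa&t1=xn1.mp4&t2=xn2.mp4")
        println(original)          // aa (the field's own value)
        println(correspondences)   // [t1=xn1.mp4, t2=xn2.mp4]
    }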
As another way, the correspondence between the playback content and the virtual reality scene picture may be configured in a configuration information file other than the video stream.
A configuration information file other than the video stream means that the file is not transmitted to the electronic device along with the video stream, but can be downloaded to the electronic device in advance, before the video stream is acquired from the network end, so that once the video is obtained it can be detected in time whether its playing content corresponds to a virtual reality scene picture. Illustratively, for a television series, each episode corresponds to one video and each video corresponds to one configuration information file. Then, while the video of the first episode is being played, the configuration information files of the later episodes can be obtained in advance, so that when a later episode starts playing the user can be reminded promptly of which playing contents correspond to virtual reality scene pictures.
In the aspect of displaying the prompt information, the prompt information about the virtual reality scene pictures corresponding to all the playing contents of the video can be displayed at the starting time of video playing, or the prompt information can be displayed in the interface for playing the video when the current playing content of the video is detected to correspond to the virtual reality scene picture. It can be understood that, in the case of performing the prompt display when the current playing content of the video is detected to correspond to the virtual reality scene picture, the displayed prompt information prompts that the current playing content corresponds to the virtual reality scene picture.
It should be noted that when the correspondence between the playing content and the virtual reality scene picture is carried in the video stream itself, the correspondence is fixed once the video stream is generated; if a virtual reality scene picture is later added or deleted, the video stream has to be regenerated, which wastes resources. Moreover, some electronic devices cache the played video locally, so the correspondence between the playing content and the virtual reality scene pictures is also cached locally. The network end may meanwhile have updated the virtual reality scene pictures corresponding to the playing content, but because the video was cached offline, the prompt information is still displayed based on the correspondence in the previously cached video stream the next time the video is played, so the user may not experience the latest virtual reality scene pictures in time.
In order to improve the problem that the latest virtual reality scene picture cannot be experienced in time, the electronic device may display prompt information about the virtual reality scene picture by combining the two manners.
As a combined approach, after the network end updates the playing content of the video, the configuration information file stored at the network end can be updated in time. For example, suppose the correspondences stored in the historical configuration information file are that playing time t1 corresponds to xn1.mp4 and playing time t2 corresponds to xn2.mp4. If xn3.mp4 corresponding to playing time t3 is newly added, the historical configuration information file is updated accordingly, so that the correspondences in the updated file include t3 corresponding to xn3.mp4 in addition to t1 corresponding to xn1.mp4 and t2 corresponding to xn2.mp4. Here xn1, xn2 and xn3 can be understood as the scene identification information of the virtual reality scene pictures.
With the network end updating its configuration information file in time as described above, the electronic device can periodically and actively acquire the latest configuration information file from the network end and replace the local historical configuration information file; the network end can also actively push the updated configuration information file to the electronic device when an update is detected. As one mode, when the video played by the electronic device is one that has been cached locally, the locally updated configuration information file can be read during playback, and the prompt information can be displayed based on both the correspondences in the updated configuration information file and the correspondences in the original video stream. For example, even if the correspondences in the locally cached video stream only include t1 corresponding to xn1.mp4 and t2 corresponding to xn2.mp4, as long as the configuration information file containing t3 corresponding to xn3.mp4 has already been acquired locally, the prompt information about xn3.mp4 at playing time t3 can still be displayed.
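A minimal sketch of combining the two sources of correspondences, under the assumption that each correspondence is represented as a playing-time key mapped to scene identification information:

    // Correspondences cached in the video stream plus those from the newer
    // configuration file; entries from the updated file are added or take priority.
    fun mergeCorrespondences(
        fromCachedStream: Map<String, String>,
        fromUpdatedConfig: Map<String, String>
    ): Map<String, String> = fromCachedStream + fromUpdatedConfig

    fun main() {
        val cached = mapOf("t1" to "xn1.mp4", "t2" to "xn2.mp4")
        val updated = mapOf("t1" to "xn1.mp4", "t2" to "xn2.mp4", "t3" to "xn3.mp4")
        println(mergeCorrespondences(cached, updated))
        // {t1=xn1.mp4, t2=xn2.mp4, t3=xn3.mp4}
    }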
In this embodiment, the prompt message may be displayed in various ways.
As one mode, when the current playing content of the video corresponds to a virtual reality scene picture, a prompt icon may be displayed as the prompt information in the interface for playing the video. Illustratively, the playing content shown in fig. 7 includes a designated area of a forest, and that playing content corresponds to a virtual reality scene picture of the forest. An icon 10 whose content is "VR" may be displayed as the prompt information in the interface shown in fig. 7. Optionally, the "VR" icon may be configured with the function of triggering playback of the virtual reality scene picture: when the electronic device detects that the "VR" icon is touched, it triggers playback of the virtual reality scene picture corresponding to the currently played content. For example, fig. 7 currently displays the designated area of a forest, and a virtual forest scene corresponds to the playing time of the playing content shown in fig. 7; when the "VR" icon is detected to be touched, playback of the virtual forest scene is triggered.
As described above, the current playing content may be understood as the picture content corresponding to the current frame. Each frame of the video is displayed only briefly, so if the icon serving as the prompt information is displayed only while the playing content corresponding to the virtual reality scene picture is on screen, the user may not notice in time that a corresponding virtual reality scene picture can be experienced, or may notice it but be unable to switch in time. As one mode, when the electronic device detects that the current playing content corresponds to a virtual reality scene picture, it may pause the video and display the icon as the prompt information while paused.
Optionally, a sub-interface may be displayed in the interface while playback is paused, showing query information asking whether to switch to the virtual reality scene picture. If the user chooses to switch, the electronic device starts playing the virtual reality scene picture corresponding to the current playing content; if the user chooses not to switch, the electronic device resumes playing the video and stops displaying the icon serving as the prompt information. For example, as shown in fig. 8, when it is detected that the currently played content corresponds to a virtual reality scene picture, the electronic device may pause the video and display a play icon 14 that can trigger resuming playback. It also displays an icon 12 whose content is "VR" to prompt the user that the currently played content corresponds to a virtual reality scene picture, together with a sub-interface 13 whose content is "whether to switch to the forest virtual scene". When the user selects the control whose content is "switch", the electronic device starts playing the virtual reality scene picture corresponding to the current playing content; when the user selects the control whose content is "do not switch", the electronic device resumes playing the video and stops displaying the icon 12.
As another way, when it is detected that the playing content of the video corresponds to a virtual reality scene picture, a first prompt identifier is displayed at a target position of the progress bar of the video, where the playing content corresponding to the target position corresponds to the virtual reality scene picture. The progress bar of the video is used for controlling the playing progress; a position on the progress bar represents the playing time of some playing content, and a target position represents the playing time of playing content that corresponds to a virtual reality scene picture. For example, as shown in fig. 9, a progress bar 20 is displayed in the interface for playing the video, and a first prompt identifier 21 and a first prompt identifier 22 are displayed on the progress bar 20; the playing content corresponding to the target positions where the first prompt identifiers 21 and 22 are located corresponds to virtual reality scene pictures.
In this way, optionally, the selected first prompt identifier may be obtained as the first target prompt identifier, and the virtual reality scene picture corresponding to the target position where the first target prompt identifier is located is played. As shown in fig. 10, after detecting that the first prompt identifier 22 is touched, a corresponding prompt box 23 may be displayed, showing introduction information of the virtual reality scene picture corresponding to the target position of the first prompt identifier 22, for example "forest virtual scene", together with a trigger control 24. When the trigger control 24 is detected to be touched, the first prompt identifier 22 is determined to be selected, and the virtual reality scene picture corresponding to its target position may be played.
It should be noted that the same playing content of a video may contain multiple designated areas. For example, when the playing content includes both a forest and the sky and both belong to designated areas, the forest and the sky may each correspond to a virtual reality scene picture, so the playing content corresponds to two virtual reality scene pictures at the same time.
As a mode, when it is detected that the same playing content of the video corresponds to a plurality of virtual reality scene pictures, a second prompt identifier is displayed at the target position corresponding to that playing content; when the second prompt identifier is detected to be selected, a plurality of to-be-selected prompt identifiers are displayed, where the to-be-selected prompt identifiers correspond one to one with the virtual reality scene pictures. The selected to-be-selected prompt identifier is then obtained as the second target prompt identifier, and the virtual reality scene picture corresponding to the target position of the second target prompt identifier is played.
Illustratively, as shown in fig. 11, when it is detected that the same playing content of the video corresponds to a plurality of virtual reality scene pictures, the second prompt identifier 30 is displayed at the target position corresponding to that playing content. Optionally, to help the user distinguish the second prompt identifier from the first prompt identifier, the two may be given different display styles. For example, the width of the second prompt identifier may be configured to be greater than the width of the first prompt identifier (compare the second prompt identifier 30 in fig. 11 with the first prompt identifier 22 in fig. 10). Optionally, besides distinguishing them by shape, the second and first prompt identifiers may be distinguished by configuring different interface colors for them; for example, the first prompt identifier may be light blue and the second prompt identifier dark blue.
When it is detected that the second prompt identifier 30 is touched, the corresponding to-be-selected prompt identifier 31 and to-be-selected prompt identifier 32 may be displayed. After they are displayed, which to-be-selected prompt identifier the user has selected can be determined in the manner described above, and the virtual reality scene picture corresponding to the target position of the selected identifier is played. For example, when the playing content corresponding to the target position corresponds to both a forest virtual scene and a sky virtual scene, if the selected identifier is the to-be-selected prompt identifier 32, the virtual reality scene picture corresponding to the target position of the to-be-selected prompt identifier 32 is the sky virtual scene.
Besides, the to-be-selected prompt identifiers 31 and 32 may be displayed in ways other than that shown in fig. 11. It should be noted that the screen of the electronic device usually has a longer side and a shorter side. When the electronic device is in landscape mode, the screen displays the video playing interface full screen and the progress bar extends along the longer side of the screen (for example, the progress bar in fig. 9 and fig. 10); when the electronic device is in portrait mode, only part of the screen is used as the video playing interface, the rest displays information such as the video name and episode shown in fig. 12, and the progress bar extends along the shorter side of the screen. In the portrait mode shown in fig. 12, if a plurality of to-be-selected prompt identifiers were still displayed in the manner shown in fig. 11, the display would become crowded, making it hard for the user to select the to-be-selected prompt identifier of interest.
As a mode, during the display of the to-be-selected prompt identifiers, the current state of the electronic device may be detected. When the electronic device is in landscape mode, the plurality of to-be-selected prompt identifiers corresponding to the second prompt identifier are displayed along the extending direction of the progress bar; when the electronic device is in portrait mode, they are displayed along the direction perpendicular to the extending direction of the progress bar. For example, as shown in fig. 11, when the electronic device is in the landscape state (playing full screen) and the second prompt identifier is detected to be selected, the to-be-selected prompt identifiers are displayed along the extending direction of the progress bar (the direction of the arrow in fig. 11). As shown in fig. 13, when the electronic device is in the portrait state, the to-be-selected prompt identifiers are displayed along the direction perpendicular to the extending direction of the progress bar (the direction of the arrow in fig. 13). In this way the electronic device can flexibly determine how to display the to-be-selected prompt identifiers according to the size of its screen, which further improves the flexibility of displaying the prompt identifiers.
S230: in the playing process of the video, a target scene picture is obtained in response to the mode switching instruction, the target scene picture is a virtual reality scene picture corresponding to the current playing content, and the virtual reality scene picture corresponds to a specified area included in the playing content.
The mode switching instruction is used for triggering the electronic equipment to acquire the target scene picture to be played next. For example, referring to fig. 8, when the user selects "switch" in fig. 8, the electronic device generates a mode switching instruction and obtains the target scene picture in response to it. For another example, referring to fig. 10 again, after the user touches the trigger control 24, the electronic device generates a mode switching instruction, and the target scene picture acquired in response is the scene picture of the forest virtual scene shown in fig. 10. Similarly, in the cases shown in fig. 10 and fig. 13, generation of the mode switching instruction may also be triggered by the touch controls.
The mode switching instruction may also be generated in ways other than those described above.
Alternatively, in another approach, the electronic device may communicate (e.g., via Bluetooth or infrared) with an external control device (e.g., a remote controller). The control device may be configured with a control key used to trigger the control device to send a mode switching instruction to the electronic device; when the electronic device receives the mode switching instruction sent by the control device, it may acquire the virtual reality scene picture corresponding to the current playing content as the target scene picture. The control key on the control device may be a physical key or a touch key.
Optionally, in another mode, a voice assistant is configured in the electronic device. The control device can send a voice assistant invocation instruction to the electronic device; after receiving it, the electronic device brings the voice assistant to the foreground, and prompt information prompting the user to further issue a voice instruction may be displayed on the screen of the electronic device. The control device can be configured with a voice control key for triggering the voice assistant invocation instruction, so that the user can make the control device send the instruction by pressing or long-pressing the voice control key.
After the electronic device calls the voice assistant, the control device may send further collected user voice to the electronic device, so that the voice assistant in the electronic device can recognize the user voice. If the voice assistant determines that the voice content is content that triggers playing of the virtual reality scene picture, the mode switching instruction is generated. The content that triggers playing of the virtual reality scene picture may be, for example, "I want to experience the virtual reality scene".
In this way, the user can switch to the VR scene through the control device (e.g., a remote controller) or through the voice assistant in the electronic device, and then, by wearing the corresponding virtual reality playing device, enjoy an immersive 360-degree viewing experience with no blind angles.
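The voice-triggered path described above can be sketched as follows in Kotlin; the ModeSwitchDispatcher class and the trigger phrase list are assumptions made only for illustration.

// Maps recognized user speech to a mode switching instruction.
class ModeSwitchDispatcher(private val onModeSwitch: () -> Unit) {
    // Phrases treated as content that triggers playing of the virtual reality scene picture.
    private val triggerPhrases = listOf("i want to experience the virtual reality scene")

    fun onSpeechRecognized(text: String) {
        val normalized = text.trim().lowercase()
        // Generate the mode switching instruction when the recognized content matches a trigger phrase.
        if (triggerPhrases.any { normalized.contains(it) }) {
            onModeSwitch()
        }
    }
}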
It should be noted that not every playing content in the video corresponds to a virtual reality scene picture. As one mode, when receiving the mode switching instruction sent by the control device, the electronic device may, in response to the instruction, detect whether the current playing content corresponds to a virtual reality scene picture. If it does, the virtual reality scene picture corresponding to the current playing content is taken as the target scene picture; if it does not, feedback information indicating that the current playing content has no corresponding virtual reality scene picture may be returned. For example, the feedback may be given in the video playing interface by popping up a sub-interface, in which feedback information such as "the virtual reality scene interface does not exist at present" is displayed.
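A minimal sketch of this branch, assuming a simple lookup from playing content to its virtual reality scene pictures; VrSceneResolver and the map it holds are hypothetical names used only to illustrate the check-and-feedback logic.

class VrSceneResolver(
    // Playing content identifier -> identifiers of its virtual reality scene pictures (may be absent).
    private val scenesByContent: Map<String, List<String>>
) {
    // Returns the scene pictures for the current playing content, or null after showing feedback
    // when the content has no corresponding virtual reality scene picture.
    fun resolve(currentContentId: String, showFeedback: (String) -> Unit): List<String>? {
        val scenes = scenesByContent[currentContentId]
        if (scenes.isNullOrEmpty()) {
            showFeedback("The virtual reality scene interface does not exist at present")
            return null
        }
        return scenes
    }
}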
Furthermore, in the mode in which the mode switching instruction is triggered by the control key, touching the control key during playing of the target scene picture can trigger a play quitting instruction, so that the electronic device cancels the playing of the target scene picture and resumes the playing of the video.
Further, as another mode, the external control device may also operate a menu in the electronic device. In this mode, an option for triggering switching to playing a virtual reality scene may be configured in the menu of the electronic device. After the user operates the control device to select this option, the electronic device may detect whether the current playing content corresponds to a virtual reality scene picture; if it does, the virtual reality scene picture corresponding to the current playing content is taken as the target scene picture, and if it does not, the feedback information may be displayed in the manner described above.
Still further, as yet another mode, the external control device may also support voice input. In this mode, the control device may generate the mode switching instruction when it detects that the user has input voice content such as "I want to experience the virtual reality scene", and, in response to the instruction, detect whether the current playing content corresponds to a virtual reality scene picture. If it does, the virtual reality scene picture corresponding to the current playing content is taken as the target scene picture; if it does not, the feedback information may be displayed in the manner described above.
S240: and playing the target scene picture.
It should be noted that a progress identifier may be configured in the progress bar to represent the current playing progress. In order to help the user distinguish the first prompt identifier, the second prompt identifier, and the progress identifier in this embodiment, they may optionally be configured with different shapes. As shown in fig. 14, the first prompt identifier 21 and the second prompt identifier 30 may both be configured to be square, with the width of the second prompt identifier 30 larger than that of the first prompt identifier 21, and the progress identifier 25 may be configured to be circular.
According to the video playing method provided in this embodiment, during the playing of a video, the playing of the virtual reality scene picture corresponding to the designated area in the playing content can be triggered by the mode switching instruction, so that the user gains a better visual experience of the designated area and a better experience of watching the video. In addition, in this embodiment, by displaying an icon on the interface where the video is played or displaying a prompt identifier at a target position in the progress bar, the user can more conveniently learn which playing contents correspond to virtual reality scene pictures, and can therefore more conveniently select the virtual reality scene picture of interest to play.
Referring to fig. 15, fig. 15 is a flowchart illustrating a video playing method according to an embodiment of the present application, where the method includes:
s310: and playing the video.
S320: in the playing process of the video, a target scene picture is obtained in response to the mode switching instruction, the target scene picture is a virtual reality scene picture corresponding to the current playing content, and the virtual reality scene picture corresponds to a specified area included in the playing content.
S330: and when a plurality of target scene pictures are available, displaying the scene identification information corresponding to the plurality of target scene pictures respectively.
As described in the foregoing, the same playing content of a video may correspond to multiple virtual reality scene pictures, so when the target scene picture is acquired, multiple target scene pictures may be detected. For example, if the designated area in the playing content includes a forest and a sky, the playing content may correspond to two target scene pictures, namely a sky virtual scene and a forest virtual scene. In this case, the scene identification information corresponding to each of the multiple target scene pictures may be displayed in the interface, so that the user can make a further selection.
For example, as shown in fig. 16, when the current playing content corresponds to two target scene pictures, namely a sky virtual scene and a forest virtual scene, a sub-interface 50 may be displayed on the video playing interface, and "sky virtual scene" and "forest virtual scene" may be displayed on the sub-interface 50. It can be understood that "sky virtual scene" and "forest virtual scene" here are the scene identification information of their corresponding virtual reality scene pictures. In addition, prompt information asking the user to select one of the two target scene pictures, the sky virtual scene and the forest virtual scene, is also displayed on the sub-interface 50.
If it is detected that the user touches "forest virtual scene", the forest virtual scene is taken as the selected scene identification information.
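The selection among multiple target scene pictures described above can be sketched as follows in Kotlin; pickTargetSceneName and its callback parameters are illustrative assumptions standing in for the sub-interface 50 interaction.

// Decide which target scene picture to play when the current content matches several.
fun pickTargetSceneName(
    candidateNames: List<String>,
    showChoices: (List<String>) -> Unit,   // e.g. display "sky virtual scene" / "forest virtual scene" on sub-interface 50
    awaitUserChoice: () -> String          // the scene identification information the user touches
): String? {
    if (candidateNames.isEmpty()) return null                     // no corresponding virtual reality scene picture
    if (candidateNames.size == 1) return candidateNames.first()   // single match, no selection needed
    showChoices(candidateNames)
    val chosen = awaitUserChoice()
    return candidateNames.firstOrNull { it == chosen }            // the selected scene becomes the picture to be played
}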
S340: and acquiring a target scene picture corresponding to the selected scene identification information as a target scene picture to be played.
S350: and playing the target scene picture to be played.
According to the video playing method provided in this embodiment, during the playing of a video, the playing of the virtual reality scene picture corresponding to the designated area in the playing content can be triggered by the mode switching instruction, so that the user gains a better visual experience of the designated area and a better experience of watching the video. In addition, in this embodiment, if multiple target scene pictures are detected when the playing of a virtual reality scene picture is triggered, the final target scene picture to be played can be determined through the user's selection.
Referring to fig. 17, fig. 17 is a flowchart illustrating a video playing method according to an embodiment of the present application, where the method includes:
s410: and playing the video.
S420: in the playing process of the video, a target scene picture is obtained in response to the mode switching instruction, the target scene picture is a virtual reality scene picture corresponding to the current playing content, and the virtual reality scene picture corresponds to a specified area included in the playing content.
S430: the video is paused to be played.
In one playing mode, the electronic device transmits the determined target scene picture to an external virtual reality playing device for playing. For example, when the electronic device is a smart television, the smart television may transmit the target scene picture to VR glasses serving as the virtual reality playing device, so that the VR glasses can show the target scene picture to the user. In this mode, the smart television could keep displaying the picture of the video in S410, but while the VR glasses are showing the target scene picture the user does not pay attention to what the smart television is playing. Therefore, to avoid requiring the user to manually seek the video back to the playing time at which the playing of the target scene picture was triggered, the video played by the smart television may be paused before the target scene picture is transmitted to the VR glasses.
S440: and playing the target scene picture.
S450: and canceling the playing of the target scene picture when the play quitting instruction is acquired, and resuming the playing of the video.
The play quitting instruction is triggered in different ways in different playing modes.
In one mode, the target scene picture is played by the electronic device by calling a local virtual reality player. In this mode, the target scene picture may be played directly in the interface in which the video mentioned in S410 is played. Optionally, when the electronic device directly calls the virtual reality player to play the target scene picture, an end control for ending the playing may be configured in the playing interface or the menu bar, and when the electronic device detects that the end control is touched, the play quitting instruction may be generated.
Alternatively, the target scene picture is played by an external virtual reality playing device. In this mode, the play quitting instruction may be triggered by the control device mentioned in the foregoing embodiments. For example, during the playing of the target scene picture, touching the control key may trigger the play quitting instruction, so that the electronic device cancels the playing of the target scene picture and resumes the playing of the video.
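The pause-play-resume behaviour of S430 to S450 can be summarized in the following Kotlin sketch; VideoPlayer, VrPlaybackTarget and VrPlaybackSession are assumed interfaces that stand for the local player and for either the local virtual reality player or an external virtual reality playing device.

interface VideoPlayer { fun pause(); fun resume() }
interface VrPlaybackTarget { fun play(sceneId: String); fun stop() }

class VrPlaybackSession(
    private val video: VideoPlayer,
    private val vrTarget: VrPlaybackTarget
) {
    fun start(targetSceneId: String) {
        video.pause()                 // S430: pause the video so its progress is kept
        vrTarget.play(targetSceneId)  // S440: play the target scene picture
    }

    fun onPlayQuittingInstruction() {
        vrTarget.stop()               // S450: cancel the playing of the target scene picture
        video.resume()                //       and resume the playing of the video
    }
}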
According to the video playing method provided in this embodiment, during the playing of a video, the playing of the virtual reality scene picture corresponding to the designated area in the playing content can be triggered by the mode switching instruction, so that the user gains a better visual experience of the designated area and a better experience of watching the video. In addition, in this embodiment, the originally played video may be paused in response to the mode switching instruction, and when the play quitting instruction is acquired during the playing of the target scene picture, the playing of the target scene picture is cancelled and the playing of the video is resumed. This realizes flexible automatic switching between playing the video and playing the target scene picture without adding many operations that the user needs to perform manually.
Referring to fig. 18, fig. 18 is a flowchart illustrating a video playing method applied in a smart television according to an embodiment of the present application, where the method includes:
s510: and acquiring the film source selected by the user.
In this embodiment, the smart television may provide a large number of film sources for the user to select, where a film source may be understood as a video in the foregoing embodiments. After the electronic device is started, the video selection interface in the foregoing embodiments may be displayed, so that the user can select a film source in that interface.
S520: and detecting whether the film source corresponds to a VR scene.
It should be noted that a VR scene may be understood as a virtual reality scene picture; therefore, in this embodiment, detecting whether a film source corresponds to a VR scene may be understood as detecting, as in the foregoing embodiments, whether the playing content of the video corresponds to a virtual reality scene picture.
S521: and if the film source does not correspond to the VR scene, the VR scene cannot be played in the playing process of the film source.
If it is detected that the playing content of the film source selected by the user has no corresponding virtual reality scene picture, it is determined that the film source does not correspond to a VR scene; conversely, if it is detected that the playing content of the selected film source corresponds to a virtual reality scene picture, it is determined that the film source corresponds to a VR scene. For the manner of detecting whether there is a corresponding virtual reality scene picture, reference may be made to the foregoing embodiments, which is not repeated here.
S522: if the film source corresponds to the VR scene, the VR scene can be played in the playing process of the film source.
S530: the intelligent television is switched to a VR playing scene and starts a Bluetooth pairing mode.
When the film source corresponds to a VR scene, the smart television can switch to the VR playing scene and start Bluetooth pairing with the virtual reality playing device.
S531: the virtual reality playing equipment is connected with the smart television through Bluetooth.
S532: and other virtual reality sensing equipment is connected with the intelligent television Bluetooth.
In this embodiment, some virtual reality sensing devices that enhance the user's haptic experience may also be added. In that case, when the smart television has switched to the VR playing scene, the other virtual reality sensing devices can also be connected to the smart television via Bluetooth.
S540: and switching the film source.
After the Bluetooth connection between the smart television and the virtual reality playing device is established, the electronic device starts to switch from playing the film source selected in S510 to playing the VR scene corresponding to that film source. The specific VR scene to be played can be determined in the manner shown in the foregoing embodiments.
S550: the user wears the virtual reality playing device to experience the VR scene.
S560: the user voice input is switched back to the film source.
It should be noted that, in this embodiment, the smart television may be controlled by the control device, and the control device may recognize the received voice content in order to control the smart television. For example, after the control device receives a voice input by the user indicating that playing of the VR scene should be cancelled, a play quitting instruction may be generated as in the foregoing embodiments, and the smart television resumes playing of the original film source.
S570: the bluetooth connection is ended.
After the smart television resumes playing the original film source, it can disconnect the Bluetooth connections between itself and the virtual reality playing device and between itself and the other virtual reality sensing devices.
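The smart-television flow of S510 to S570 can be outlined with the Kotlin sketch below; BluetoothLink, VrFilmSourceFlow and the device names are placeholders and do not correspond to any real television SDK.

interface BluetoothLink {
    fun connect(deviceName: String): Boolean
    fun disconnect(deviceName: String)
}

class VrFilmSourceFlow(
    private val bluetooth: BluetoothLink,
    private val hasVrScene: (filmSource: String) -> Boolean,
    private val playVrScene: (filmSource: String) -> Unit,
    private val resumeFilmSource: (filmSource: String) -> Unit
) {
    fun onModeSwitch(filmSource: String) {
        if (!hasVrScene(filmSource)) return                    // S521: no VR scene can be played
        if (!bluetooth.connect("vr-playback-device")) return   // S530/S531: pair the virtual reality playing device
        bluetooth.connect("vr-haptic-sensor")                  // S532: optional sensing devices
        playVrScene(filmSource)                                // S540/S550: switch to the VR scene
    }

    fun onVoiceSwitchBack(filmSource: String) {
        resumeFilmSource(filmSource)                           // S560: voice input switches back to the film source
        bluetooth.disconnect("vr-playback-device")             // S570: end the Bluetooth connections
        bluetooth.disconnect("vr-haptic-sensor")
    }
}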
According to the video playing method provided in this embodiment, after the film source selected by the user is acquired, it can be detected whether the selected film source corresponds to a VR scene. If a corresponding VR scene is detected, the smart television can switch to the state of playing the VR scene and establish a Bluetooth connection with the virtual reality playing device, so that, with the virtual reality playing device worn, the user experiences the virtual reality scene picture corresponding to the designated area in the film source through that device, which improves the user experience during the playing of the film source. Moreover, the user can trigger the smart television to return to playing the film source by voice control.
Referring to fig. 19, fig. 19 is a diagram illustrating a video playing apparatus 600 according to an embodiment of the present application, where the apparatus 600 includes: a video playing unit 610, a virtual scene acquisition unit 620, and a virtual scene playing unit 630.
The video playing unit 610 is configured to play a video.
A virtual scene obtaining unit 620, configured to, in the playing process of the video, obtain a target scene picture in response to the mode switching instruction, where the target scene picture is a virtual reality scene picture corresponding to the current playing content, and the virtual reality scene picture corresponds to a specified area included in the playing content.
As one mode, the virtual scene obtaining unit 620 is specifically configured to, in a playing process of the video, obtain a virtual scene identifier corresponding to a currently playing content in the video in response to the mode switching instruction; and acquiring a virtual reality scene picture corresponding to the virtual scene identification as a target scene picture.
A virtual scene playing unit 630, configured to play the target scene picture.
As shown in fig. 20, the apparatus 600 further includes:
and the scene prompt unit 640, configured to display prompt information in an interface for playing the video when the playing content of the video corresponds to a virtual reality scene picture. As one mode, the scene prompt unit 640 is specifically configured to display the prompt information in the interface for playing the video when the current playing content of the video corresponds to a virtual reality scene picture.
As one mode, the scene prompting unit 640 is specifically configured to display a first prompting identifier at a target position of a progress bar of the video when the playing content of the video corresponds to the virtual reality scene picture, where the playing content corresponding to the target position corresponds to the virtual reality scene picture. In this way, the virtual scene playing unit 630 is specifically configured to obtain the selected first prompt identifier as the first target prompt identifier; and playing the virtual reality scene picture corresponding to the target position of the first target prompt identifier.
As a mode, the scene prompting unit 640 is further configured to display a second prompting identifier at a target position corresponding to the same playing content when the same playing content of the video corresponds to multiple virtual reality scene pictures; and when the second prompt identification is selected, displaying a plurality of prompt identifications to be selected, which are in one-to-one correspondence with the plurality of virtual reality scene pictures. Correspondingly, in this way, the virtual scene playing unit 630 is specifically configured to acquire the selected prompt identifier to be selected as the second target prompt identifier, and play the virtual reality scene picture corresponding to the target position where the second target prompt identifier is located.
Alternatively, the scene prompt unit 640 is further configured to, when there are multiple target scene pictures, display scene identification information corresponding to each of the multiple target scene pictures. In this way, the virtual scene playing unit 630 is specifically configured to acquire a target scene picture corresponding to the selected scene identification information as a target scene picture to be played, and play the target scene picture to be played.
The virtual scene obtaining unit 620 is further configured to detect, when the conversion request instruction is obtained, whether the current playing content corresponds to a virtual reality scene picture, and to generate the mode switching instruction if the current playing content corresponds to the virtual reality scene picture. Optionally, the virtual scene obtaining unit 620 is specifically configured to receive the conversion request instruction sent by the control device.
As a manner, the virtual scene playing unit 630 is specifically configured to pause playing the video before playing the target scene picture, and is further specifically configured to cancel playing the target scene picture and resume playing the video when the play quitting instruction is acquired.
Optionally, the virtual scene playing unit 630 is specifically configured to transmit the target scene picture to a virtual reality playing device for playing.
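As a rough Kotlin sketch of how the core units of the apparatus 600 could cooperate; the class names and the map-based lookup are assumptions made only for illustration and are not the disclosed implementation.

class VideoPlayingUnit {
    fun playVideo(videoId: String) { /* decode and render the selected video */ }
}

class VirtualSceneObtainingUnit(
    private val sceneIdByContent: Map<String, String>,  // playing content -> virtual scene identifier
    private val scenePictureById: Map<String, String>   // virtual scene identifier -> scene picture resource
) {
    // Resolve the virtual scene identifier of the current playing content, then the picture it identifies.
    fun obtainTargetScene(currentContentId: String): String? =
        sceneIdByContent[currentContentId]?.let { scenePictureById[it] }
}

class VirtualScenePlayingUnit {
    fun playScene(scenePicture: String) { /* hand off to a local VR player or an external VR playing device */ }
}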
It should be noted that the device embodiment and the method embodiment in the present application correspond to each other, and specific principles in the device embodiment may refer to the contents in the method embodiment, which is not described herein again.
An electronic device provided by the present application will be described below with reference to fig. 21.
Referring to fig. 21, based on the foregoing video playing method, an embodiment of the present application further provides an electronic device 200 including a processor 102 capable of executing the foregoing video playing method. The electronic device 200 also includes a memory 104, a network module 106, and a screen 108. The memory 104 stores programs that can execute the content of the foregoing embodiments, and the processor 102 can execute the programs stored in the memory 104.
The processor 102 may include one or more processing cores. The processor 102 connects the various parts of the electronic device 200 using various interfaces and circuitry, and performs the various functions of the electronic device 200 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 104 and invoking data stored in the memory 104. Optionally, the processor 102 may be implemented in hardware using at least one of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 102 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, and the like; the GPU is used for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 102 and may instead be implemented by a separate communication chip.
The memory 104 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 104 may be used to store instructions, programs, code sets, or instruction sets. The memory 104 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the foregoing method embodiments, and the like. The data storage area may also store data created by the electronic device 200 in use, such as a phonebook, audio and video data, and chat log data.
The network module 106 may be configured to receive and transmit electromagnetic waves, and implement interconversion between the electromagnetic waves and the electrical signals, so as to communicate with a communication network or other devices, for example, the network module 106 may transmit broadcast data, and may also analyze broadcast data transmitted by other devices. The network module 106 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The network module 106 may communicate with various networks, such as the internet, an intranet, a wireless network, or with other devices via a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network.
The screen 108 may display interface content, for example, playing content of a video currently being played may be displayed.
It should be noted that, in order to implement more functions, the electronic device 200 may further include more components, for example, a structured light sensor for acquiring face information, or a camera for acquiring an iris image.
Referring to fig. 22, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable medium 1100 has stored therein program code that can be called by a processor to perform the method described in the above-described method embodiments.
The computer-readable storage medium 1100 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 1100 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 1100 has storage space for program code 1110 for performing any of the method steps described above. The program code can be read from or written into one or more computer program products. The program code 1110 may, for example, be compressed in a suitable form.
In summary, according to the video playing method and apparatus, the electronic device, and the storage medium provided by the present application, when a mode switching instruction is detected during the playing of a video, a virtual reality scene picture corresponding to the current playing content can be acquired as a target scene picture in response to the instruction, and the target scene picture can be played. By establishing the correspondence between the playing content of the video and the virtual reality scene picture corresponding to the designated area included in that playing content, the playing of the virtual reality scene picture corresponding to the designated area can be triggered by the mode switching instruction during the playing of the video. In this way, the electronic device can quickly display a scene area of the video through virtual reality technology during video playing, which enriches the ways in which the electronic device displays scene areas, gives the user a better visual experience of the designated area in the playing content, and improves the user experience of watching the video.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (15)

1. A video playback method, the method comprising:
video playing is carried out;
in the video playing process, responding to a mode switching instruction to obtain a target scene picture, wherein the target scene picture is a virtual reality scene picture corresponding to the current playing content, and the virtual reality scene picture corresponds to a specified area included by the playing content;
and playing the target scene picture.
2. The method according to claim 1, wherein before acquiring the target scene picture in response to the mode switching instruction during the playing of the video, the method further comprises:
and when the playing content of the video corresponds to the virtual reality scene picture, displaying prompt information in an interface for playing the video.
3. The method according to claim 2, wherein when the playing content of the video corresponds to a virtual reality scene picture, displaying a prompt message in an interface for playing the video comprises:
and when the current playing content of the video corresponds to the virtual reality scene picture, displaying prompt information in an interface for playing the video.
4. The method according to claim 2, wherein when the playing content of the video corresponds to a virtual reality scene picture, displaying a prompt message in an interface for playing the video comprises:
when the playing content of the video corresponds to the virtual reality scene picture, displaying a first prompt identifier at a target position of a progress bar of the video, wherein the playing content corresponding to the target position corresponds to the virtual reality scene picture.
5. The method of claim 4, wherein the target location is plural, the method further comprising:
acquiring a selected first prompt identifier as a first target prompt identifier;
and playing the virtual reality scene picture corresponding to the target position of the first target prompt identifier.
6. The method of claim 4, further comprising:
when the same playing content of the video corresponds to a plurality of virtual reality scene pictures, displaying a second prompt identifier at a target position corresponding to the same playing content;
when the second prompt identification is selected, displaying a plurality of prompt identifications to be selected, which are in one-to-one correspondence with the plurality of virtual reality scene pictures;
acquiring a selected prompt identifier to be selected as a second target prompt identifier;
and playing the virtual reality scene picture corresponding to the target position of the second target prompt identifier.
7. The method according to claim 1, wherein said playing said target scene picture further comprises:
when a plurality of target scene pictures are available, displaying scene identification information corresponding to the target scene pictures respectively;
acquiring a target scene picture corresponding to the selected scene identification information as a target scene picture to be played;
the playing the target scene picture includes:
and playing the target scene picture to be played.
8. The method according to claim 1, wherein before the acquiring the target scene picture in response to the mode switching instruction during the playing of the video, further comprises:
when a conversion request instruction is acquired, detecting whether the current playing content corresponds to a virtual reality scene picture;
and if the current playing content corresponds to the virtual reality scene picture, generating the mode switching instruction.
9. The method of claim 8, further comprising:
and receiving the conversion request instruction sent by the control equipment.
10. The method according to claim 1, wherein said playing said target scene picture further comprises: pausing the playing of the video;
after the playing the target scene picture, the method further comprises:
and canceling the playing of the target scene picture when a play quitting instruction is acquired, and resuming the playing of the video.
11. The method according to any one of claims 1 to 10, wherein said acquiring a target scene picture in response to a mode switching instruction during the playing of the video comprises:
in the video playing process, responding to a mode switching instruction to acquire a virtual scene identifier corresponding to the current playing content in the video;
and acquiring a virtual reality scene picture corresponding to the virtual scene identification as a target scene picture.
12. The method of claim 1, wherein the playing the target scene picture comprises:
and transmitting the target scene picture to virtual reality playing equipment for playing.
13. A video playback apparatus, comprising:
the video playing unit is used for playing videos;
a virtual scene obtaining unit, configured to obtain a target scene picture in response to a mode switching instruction in a playing process of the video, where the target scene picture is a virtual reality scene picture corresponding to current playing content, and the virtual reality scene picture corresponds to a specified area included in the playing content;
and the virtual scene playing unit is used for playing the target scene picture.
14. An electronic device comprising a processor and a memory; one or more programs are stored in the memory and configured to be executed by the processor to implement the method of any of claims 1-12.
15. A computer-readable storage medium, having program code stored therein, wherein the program code when executed by a processor performs the method of any of claims 1-12.
CN202010502116.0A 2020-06-04 2020-06-04 Video playing method and device, electronic equipment and storage medium Pending CN111683281A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010502116.0A CN111683281A (en) 2020-06-04 2020-06-04 Video playing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010502116.0A CN111683281A (en) 2020-06-04 2020-06-04 Video playing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111683281A true CN111683281A (en) 2020-09-18

Family

ID=72453411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010502116.0A Pending CN111683281A (en) 2020-06-04 2020-06-04 Video playing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111683281A (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170078654A1 (en) * 2015-09-10 2017-03-16 Yahoo! Inc Methods and systems for generating and providing immersive 3d displays
CN105812768A (en) * 2016-03-18 2016-07-27 深圳市维尚境界显示技术有限公司 Method and system for playing 3D video in VR (Virtual Reality) device
CN106060520A (en) * 2016-04-15 2016-10-26 深圳超多维光电子有限公司 Display mode switching method, display mode switching device, and intelligent terminal
CN107438179A (en) * 2016-05-27 2017-12-05 腾讯科技(北京)有限公司 A kind of information processing method and terminal
US20180150204A1 (en) * 2016-11-30 2018-05-31 Google Inc. Switching of active objects in an augmented and/or virtual reality environment
CN106791779A (en) * 2016-12-13 2017-05-31 深圳市潘多拉虚拟与现实科技有限公司 A kind of video player and image display method, system
CN109557998A (en) * 2017-09-25 2019-04-02 腾讯科技(深圳)有限公司 Information interacting method, device, storage medium and electronic device
CN109561333A (en) * 2017-09-27 2019-04-02 腾讯科技(深圳)有限公司 Video broadcasting method, device, storage medium and computer equipment
US20190188450A1 (en) * 2017-11-06 2019-06-20 Magical Technologies, Llc Systems, Methods and Apparatuses for Deployment of Virtual Objects Based on Content Segment Consumed in a Target Environment
CN109936736A (en) * 2017-12-19 2019-06-25 深圳Tcl新技术有限公司 A kind of method, storage medium and smart television automatically switching 3D mode
KR20190125565A (en) * 2018-04-30 2019-11-07 주식회사 에이비씨스튜디오 Method for providing virtual reality tour and record media recorded program for implement thereof
CN110876035A (en) * 2018-08-31 2020-03-10 杭州海康威视系统技术有限公司 Scene updating method and device based on video and electronic equipment
CN110176077A (en) * 2019-05-23 2019-08-27 北京悉见科技有限公司 The method, apparatus and computer storage medium that augmented reality is taken pictures
CN111064946A (en) * 2019-12-04 2020-04-24 广东康云科技有限公司 Video fusion method, system, device and storage medium based on indoor scene

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112330996A (en) * 2020-11-13 2021-02-05 北京安博盛赢教育科技有限责任公司 Control method, device, medium and electronic equipment for live broadcast teaching
CN114327032A (en) * 2021-02-08 2022-04-12 海信视像科技股份有限公司 Virtual reality equipment and VR (virtual reality) picture display method
CN113325955A (en) * 2021-06-10 2021-08-31 深圳市移卡科技有限公司 Virtual reality scene switching method, virtual reality device and readable storage medium
CN114679608A (en) * 2022-04-11 2022-06-28 武汉博晟安全技术股份有限公司 VR video encryption playing method, server, user side and system
CN114679608B (en) * 2022-04-11 2023-08-25 武汉博晟安全技术股份有限公司 VR video encryption playing method, server, user, system, electronic device and medium
CN115022721A (en) * 2022-05-31 2022-09-06 北京达佳互联信息技术有限公司 Content display method and device, electronic equipment and storage medium
CN115022721B (en) * 2022-05-31 2023-11-21 北京达佳互联信息技术有限公司 Content display method and device, electronic equipment and storage medium
CN115567758A (en) * 2022-09-30 2023-01-03 联想(北京)有限公司 Processing method, processing device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111683281A (en) Video playing method and device, electronic equipment and storage medium
CN106254311B (en) Live broadcast method and device and live broadcast data stream display method and device
CN111277884B (en) Video playing method and device
US20160295269A1 (en) Information pushing method, device and system
CN111491197B (en) Live content display method and device and storage medium
CN111866433B (en) Video source switching method, video source playing method, video source switching device, video source playing device, video source equipment and storage medium
US10397647B2 (en) System and method for delivering interactive trigger events
CN112905289A (en) Application picture display method, device, terminal, screen projection system and medium
CN111182335B (en) Streaming media processing method, device, equipment and computer readable storage medium
CN112995759A (en) Interactive service processing method, system, device, equipment and storage medium
CN114466209A (en) Live broadcast interaction method and device, electronic equipment, storage medium and program product
CN112969093B (en) Interactive service processing method, device, equipment and storage medium
CN113965813B (en) Video playing method, system, equipment and medium in live broadcasting room
CN114025180A (en) Game operation synchronization system, method, device, equipment and storage medium
US20080254829A1 (en) Control Apparatus, Mobile Communications System, and Communications Terminal
CN112791385A (en) Game running method and device, control equipment and server
CN113382295A (en) Remote control method, television and electronic equipment
WO2023011021A1 (en) Live picture display method and apparatus, storage medium, and electronic device
CN111953838B (en) Call dialing method, display device and mobile terminal
JP6219531B2 (en) Television program image frame capture device, television program image frame acquisition device, system and method
CN112181344A (en) Device calling method, device calling apparatus, interaction system, electronic device, and storage medium
CN113556716B (en) Image content sharing method and device and head-mounted display equipment
CN109451361B (en) Code stream definition switching method and device for Android system, terminal and readable medium
CN114594881B (en) Message display method and device
US20230276085A1 (en) Server, information processing system, storage medium, and transmission method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40028096

Country of ref document: HK

RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200918