WO2022088908A1 - 视频播放方法、装置、电子设备及存储介质 (Video playback method and apparatus, electronic device, and storage medium) - Google Patents

Video playback method and apparatus, electronic device, and storage medium

Info

Publication number
WO2022088908A1
WO2022088908A1 (application PCT/CN2021/115208, also referenced as CN2021115208W)
Authority
WO
WIPO (PCT)
Prior art keywords
video
target
image
target video
played
Prior art date
Application number
PCT/CN2021/115208
Other languages
English (en)
French (fr)
Inventor
吕晴阳
Original Assignee
北京字节跳动网络技术有限公司 (Beijing ByteDance Network Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司
Priority to US18/250,505 (published as US20240062479A1)
Publication of WO2022088908A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/434Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
    • H04N21/4341Demultiplexing of audio and video streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks

Definitions

  • the embodiments of the present disclosure relate to the field of computers, and in particular, to a video playback method, apparatus, electronic device, storage medium, computer program product, and computer program.
  • Augmented Reality (AR) technology is a technology that skillfully integrates virtual information with the real world.
  • presenting information by means of augmented reality has become a viable information presentation method.
  • by presetting a 3D modeling image of the virtual character or virtual scene to be presented, when a user uses a terminal to capture a real scene, a captured image of the real scene that includes the 3D modeling image can be obtained at the same time.
  • embodiments of the present disclosure provide a video playback method, apparatus, electronic device, storage medium, computer program product, and computer program.
  • an embodiment of the present disclosure provides a video playback method, including:
  • a live-action captured image is obtained and a target image in it is detected; the display position of the target image in the live-action image is determined; a target video associated with the target image is acquired, and the target video is played at the display position of the target image in the live-action image.
  • an embodiment of the present disclosure provides a video playback device, comprising:
  • a processing module configured to obtain a real-scene shooting image, detect a target image in the real-scene shooting image, and determine a display position of the target image in the real-scene shooting image
  • a playing module is configured to acquire a target video associated with the target image, and play the target video at the display position of the target image in the real-life shot image.
  • embodiments of the present disclosure provide an electronic device, including: at least one processor and a memory;
  • the memory stores computer-executable instructions
  • the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor executes the video playback method described in the first aspect and various possible designs of the first aspect above.
  • embodiments of the present disclosure provide a computer-readable storage medium in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the video playback method described in the first aspect and the various possible designs of the first aspect is implemented.
  • embodiments of the present disclosure provide a computer program product, including a computer program that, when executed by a processor, implements the video playback method described in the first aspect and various possible designs of the first aspect.
  • embodiments of the present disclosure provide a computer program, which, when executed by a processor, is used to implement the video playback method described in the first aspect and various possible designs of the first aspect.
  • Embodiments of the present disclosure provide a video playback method, device, electronic device, storage medium, computer program product, and computer program.
  • the method includes: obtaining a live-action shot image and detecting a target image in the live-action shot image; determining the display position of the target image in the live-action shot image; and acquiring a target video associated with the target image and playing the target video at the display position of the target image in the live-action shot image.
  • on the one hand, the video playback method provided in this embodiment can reduce the presentation cost and preparation period when using augmented reality display technology to present information; on the other hand, it also provides users with more channels for presenting video information, giving users a better interactive and visual experience.
  • FIG. 1 is a schematic diagram of a network architecture on which an embodiment of the disclosure is based;
  • FIG. 2 is a schematic flowchart of a video playback method according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a first interface of a video playback method according to an embodiment of the present disclosure
  • FIG. 4 is a signaling interaction diagram of a video playback method according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of a second interface of a video playback method provided by an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of a third interface of a video playback method provided by an embodiment of the present disclosure.
  • FIG. 7 is a schematic flowchart of another video playback method provided by an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of a fourth interface of a video playback method provided by an embodiment of the present disclosure.
  • FIG. 9 is a structural block diagram of a video playback device provided by an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present disclosure.
  • Augmented Reality (AR) technology is a technology that skillfully integrates virtual information with the real world.
  • when performing augmented reality display, the terminal first shoots the real scene to obtain the current live-action captured image. Augmented reality technology is then used to process the captured image, so that the preset virtual information is superimposed on it, and the superimposed image is presented to the user.
  • the virtual information superimposed on the real shot image is generally a pre-established 3D modeling image of a virtual character and a virtual scene.
  • the construction process of a 3D modeling image is relatively complicated, and the technical and labor costs required are relatively high. As a result, when information is presented using augmented reality display technology, each piece of information takes longer and costs more to present.
  • after research, the inventor found that information presentation is not limited to 3D modeling images and can be replaced by a method with lower technical and labor costs.
  • some existing video data is placed in the real-life shot image, and displayed by means of augmented reality display. In this way, it is possible to reduce the preparation cycle and cost when using the augmented reality display technology to present information.
  • it also provides users with more presentation channels to present video information, so that users can get better interactive and visual experience.
  • FIG. 1 is a schematic diagram of a network architecture on which an embodiment of the disclosure is based.
  • the network architecture shown in FIG. 1 may specifically include a terminal 1 and a server 2 .
  • the terminal 1 may specifically be a user's mobile phone, a smart home device, a tablet computer, a wearable device, or other hardware devices that can be used to capture and display the real scene.
  • a video playback device, in the form of hardware or software, executes the video playback method of the present disclosure on the terminal 1.
  • the video playback device can provide the terminal 1 with an augmented reality display page, and the terminal 1 uses its screen or display components to show the user the augmented reality display page provided by the video playback device.
  • the server 2 may specifically be a server or server cluster set in the cloud, and the server or server cluster may store video data, image data, etc. related to the video playback method provided by the present disclosure.
  • the video playback device may also use the network components of the terminal 1 to interact with the server 2, acquire the image data and video data stored in the server 2, and perform the corresponding processing and display.
  • the architecture shown in FIG. 1 is applicable to the field of information presentation, in other words, it can be used for information presentation in various scenarios.
  • the video playback method provided by the present disclosure can be applied to game scenarios based on augmented reality display.
  • for example, the video playback method provided by the present disclosure can be used to push and present "clue" videos during a "treasure hunt".
  • the video playback method provided by the present disclosure can be applied to an advertising scenario based on augmented reality display.
  • the video playback method provided by the present disclosure can be used to present related videos for these commodities, thereby providing users with more information about a product and improving the user experience.
  • the video playback method provided by the present disclosure can also be used to play video information, so as to present more information about the scene to the user and increase the user's interactive experience.
  • in each of these scenarios, the terminal camera needs to be turned on for real-time shooting so that the information can be presented.
  • FIG. 2 is a schematic flowchart of a video playback method provided by an embodiment of the present disclosure.
  • a video playback method provided by an embodiment of the present disclosure includes:
  • Step 101 Obtain a real-life captured image, and detect a target image in the real-world captured image;
  • Step 102 determining the display position of the target image in the live-action shot image
  • Step 103 Acquire a target video associated with the target image, and play the target video at the display position of the target image in the live-action captured image.
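Steps 101 to 103 can be sketched end to end as follows. Everything in this sketch is illustrative: the frame representation, the `ASSOCIATIONS` table, and the helper names are assumptions standing in for real image recognition, the server lookup, and the AR renderer, none of which the disclosure fixes to a particular implementation.

```python
# Illustrative sketch of Steps 101-103; the detector, association table,
# and "player" below are hypothetical stand-ins, not the patented system.

# Hypothetical association table: target-image ID -> target-video ID.
ASSOCIATIONS = {"image_2": "video_3"}

def detect_target_image(frame):
    """Step 101: return (image_id, display_position) or None.

    `frame` is a dict standing in for a live-action captured image; a real
    client would run 2D image recognition here instead of a lookup.
    """
    for image_id, position in frame.get("recognized", []):
        if image_id in ASSOCIATIONS:
            return image_id, position
    return None

def play_associated_video(frame):
    """Steps 102-103: find the display position, look up the associated
    video, and 'play' it in place (here: return a playback record)."""
    hit = detect_target_image(frame)
    if hit is None:
        return None
    image_id, position = hit
    video_id = ASSOCIATIONS[image_id]
    # A real client would now render the video into this play area.
    return {"video": video_id, "play_area": position}

frame = {"recognized": [("image_2", (40, 60, 200, 150))]}
print(play_associated_video(frame))
```

The record returned at the end corresponds to "play the target video at the display position of the target image".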
  • the execution body of the processing method provided in this embodiment is the aforementioned video playback device, and in some embodiments of the present disclosure, it specifically refers to a client or a display terminal that can be installed or integrated on a terminal.
  • the user can operate the video playback device through the terminal, so that the video playback device can respond to the operation triggered by the user.
  • FIG. 3 is a schematic diagram of a first interface of a video playback method provided by an embodiment of the present disclosure.
  • the video playback device will obtain a live-action captured image, which may be an image obtained by the terminal calling its own capture component to photograph the current environment, or a real-time image of the scene obtained by the video playback device through other means.
  • the video playback device will perform image recognition in the real-life shot image to determine whether there is a target image that can be used for video playback in the live-action shot image.
  • the recognition of the target image in the real-life shot image by the video playback device can be realized by the image recognition technology.
  • the target image may be a two-dimensional plane image, and the corresponding display position may be the position where the two-dimensional plane graphic is located.
  • the target image may also be an image of a three-dimensional object, and the corresponding display position may be a projection position of the three-dimensional object on a two-dimensional plane, and so on.
  • the image recognition technology according to the embodiments of the present disclosure can be implemented based on two-dimensional image recognition; that is, it can recognize preset plane pictures, projection surfaces of three-dimensional objects, and plane pictures with a certain degree of deformation.
  • the embodiments according to the present disclosure can be implemented by using an object recognition technology.
  • the present disclosure does not limit the specific image recognition technology.
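Since the disclosure does not limit the recognition technique, here is only a toy illustration of how a 2D image can be matched despite slight deformation, using an average-hash fingerprint. The pixel lists and tolerance are invented for the example; production systems would use feature-based recognition instead.

```python
# Toy average-hash matching, for illustration only; not the recognition
# technology of the disclosure. Inputs are flat lists of grayscale values
# for a downsampled image (hypothetical data).

def average_hash(pixels):
    # Each bit records whether a pixel is brighter than the mean.
    avg = sum(pixels) / len(pixels)
    return [p > avg for p in pixels]

def hamming(h1, h2):
    # Number of differing bits between two hashes.
    return sum(a != b for a, b in zip(h1, h2))

def is_match(pixels_a, pixels_b, tolerance=2):
    # A small hash distance suggests the same picture despite deformation.
    return hamming(average_hash(pixels_a), average_hash(pixels_b)) <= tolerance

target = [10, 200, 15, 190, 20, 180, 25, 170]
slightly_warped = [12, 198, 14, 192, 22, 178, 27, 169]
print(is_match(target, slightly_warped))  # True for near-identical images
```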
  • the video playback device can then detect the display position of the target image in the live-action captured image.
  • the target image may include the image in the image frame depicted in FIG. 3; correspondingly, the display position may be framed by means of the image frame 301, and the image frame 301 represents the play area of the target video.
  • the display position of the target image may be specifically determined by the image position of the target image, and the image position may include, but is not limited to, the image edge position of the target image, the image vertex position, and the like.
  • the video playback device may determine a target video associated with the target image, and play the target video at the display position where the target image is located in the live-action captured image.
  • the corresponding play area in the live-action shot image may be determined first according to the display position, and then video preprocessing is performed on the target video according to the play area, and the target video is played in the play area.
  • the above-mentioned display position refers to the range occupied by the target image in the live-action shot image, which may include the position of the image's edges and/or the position of its vertices.
  • a corresponding image frame can then be delimited in the live-action shot image according to the image edge positions and/or image vertex positions of the target image, and used as the play area (301 as shown in FIG. 3).
  • three-dimensional space rendering processing may be performed on the video data of the target video according to the spatial characteristics of the play area in the live-action shot image, so that the target video (302 shown in FIG. 3) is played in the play area (301 shown in FIG. 3).
  • the spatial feature represents the spatial position attributes of the play area in the three-dimensional space of the live-action image, such as the spatial coordinates of the play area's vertices, the spatial coordinates of its edges, and the spatial angle between the plane of the play area and the shooting plane of the live-action image.
  • three-dimensional space rendering processing can be performed on the video data of the target video, so that the rendered video picture of the target video can fit the play area in three-dimensional space.
  • the three-dimensional space rendering processing may include pixel coordinate mapping processing, that is, the two-dimensional pixel coordinates of each pixel in the video image of the target video are mapped to the three-dimensional coordinates of the playback area by spatial mapping.
  • the three-dimensional space rendering process may also be implemented in other existing manners, which are not limited in this application.
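The pixel coordinate mapping can be sketched as interpolation between the four detected corners of the play area. The sketch below uses a bilinear map, which is a simplification of the full perspective (homography) mapping the disclosure allows; the corner coordinates are invented for illustration.

```python
def map_video_pixel(u, v, corners):
    """Map normalized video coordinates (u, v), each in [0, 1], onto the
    quadrilateral play area given by `corners` (top-left, top-right,
    bottom-right, bottom-left) in screen space.

    Bilinear, not a true perspective transform -- an approximation used
    only to illustrate the coordinate-mapping step.
    """
    (tlx, tly), (trx, try_), (brx, bry), (blx, bly) = corners
    # Interpolate along the top and bottom edges, then between them.
    top_x, top_y = tlx + (trx - tlx) * u, tly + (try_ - tly) * u
    bot_x, bot_y = blx + (brx - blx) * u, bly + (bry - bly) * u
    return (top_x + (bot_x - top_x) * v, top_y + (bot_y - top_y) * v)

# A tilted play area detected in the captured image (hypothetical corners).
corners = [(100, 50), (300, 80), (290, 240), (90, 200)]
print(map_video_pixel(0.0, 0.0, corners))  # top-left corner -> (100.0, 50.0)
print(map_video_pixel(1.0, 1.0, corners))  # bottom-right -> (290.0, 240.0)
```

For an exact fit, a real renderer would compute a homography from the video's rectangle to the detected quadrilateral instead of this bilinear blend.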
  • the association relationship between the target video and the target image is further established in advance and stored in the aforementioned server.
  • FIG. 4 is a signaling interaction diagram of a video playback method provided by an embodiment of the present disclosure.
  • the association relationship between the target image and the target video is pre-built in the server, and the pre-construction method of the association relationship can be seen in Figure 4:
  • the terminal can obtain images and videos through various channels, and then upload the target images and videos to be associated to the server through an interface (as shown in FIG. 5 ).
  • the acquisition method of the pictures and videos may be acquired by the terminal shooting, or may be downloaded by the terminal through the network, or acquired from other terminals by means of near-field transmission, which is not limited in the present disclosure.
  • the server will associate and store the two to determine the association relationship between the two.
  • a storage list of association relationships can be pre-stored in the server to store the associations between the images and videos uploaded by different terminals. The specific storage method is described in the following embodiments and is not restricted here.
  • after the terminal starts the camera, it executes the video playback method in the aforementioned manner: the terminal shoots the real scene to obtain the corresponding live-action captured image, performs image recognition on the captured image to detect the target image, determines the display position of the target image, and sends the target image to the server.
  • the server will determine the target video corresponding to the target image according to the pre-established association relationship in the storage list, and send the target video to the terminal. Finally, after the terminal receives the target video, the terminal will play the target video at the display position of the target image in the real-life shot image.
  • FIG. 4 shows the case where the terminal that uploads the target image and target video to be associated is the same terminal that uses this solution to play the video. In practical use, they may also be different terminals; this application places no restriction on whether they are the same terminal, and those skilled in the art can decide based on the actual scene.
  • the target video associated with the target image can be played directly, without 3D virtual modeling. Therefore, anyone with information to present can use the following operations to quickly associate a target image with a target video, so that more users can obtain the information the information owner wants to present, while the cost and preparation difficulty are greatly reduced.
  • FIG. 5 is a schematic diagram of a second interface of a video playback method provided by an embodiment of the present disclosure.
  • in response to the uploading operation triggered by the user (information owner), the video playback device determines the target image and target video to be associated.
  • the video playback device will upload the target image and the target video to be associated to the server, so that the server can associate and store the target image and the target video to be associated.
  • identification IDs can be set for images and videos respectively, and the target image and target video to be associated are stored by storing their two identification IDs correspondingly. That is, the terminal can send the identification ID of the target image to the server, so that the server can find the target image with that ID among a large number of pre-stored images and send back the corresponding target video.
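The ID-based association storage amounts to a lookup table on the server. The class and ID formats below are assumptions made for illustration only.

```python
class AssociationStore:
    """Server-side storage list pairing image IDs with video IDs (sketch;
    the class name and ID formats are hypothetical)."""

    def __init__(self):
        self._image_to_video = {}

    def associate(self, image_id, video_id):
        # Store the two identification IDs correspondingly.
        self._image_to_video[image_id] = video_id

    def lookup(self, image_id):
        # The terminal sends only the image's identification ID; the server
        # resolves the associated video (or None if no association exists).
        return self._image_to_video.get(image_id)

store = AssociationStore()
store.associate("img-0001", "vid-0042")
print(store.lookup("img-0001"))  # vid-0042
print(store.lookup("img-9999"))  # None
```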
  • the target image and target video to be associated can also be stored encrypted with a symmetric key.
  • for example, the target image can be processed to obtain a unique key, and this key can be used to encrypt and store the target video associated with the target image. In subsequent use, the target image is processed again to obtain the same unique key, which is then used to find, among the stored videos, the one video it can decrypt; that video is the target video.
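One way to realize this key-based scheme is sketched below. The key derivation (SHA-256 of the image bytes), the repeat-key XOR "cipher", and the `VID0` marker are all placeholders chosen for illustration; the disclosure does not specify the key derivation or cipher, and a real system would use an authenticated symmetric cipher.

```python
import hashlib

def derive_key(image_bytes):
    # The 'unique key' obtained by processing the target image: here simply
    # a SHA-256 digest (an assumption; the disclosure fixes no method).
    return hashlib.sha256(image_bytes).digest()

def xor_cipher(data, key):
    # Placeholder symmetric cipher: repeating-key XOR (encrypt == decrypt).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_video(video_bytes, image_bytes):
    key = derive_key(image_bytes)
    # Prepend a known marker so successful decryption is detectable.
    return xor_cipher(b"VID0" + video_bytes, key)

def try_decrypt(stored_blobs, image_bytes):
    # Re-derive the key from the image, then find the video it decrypts.
    key = derive_key(image_bytes)
    for blob in stored_blobs:
        plain = xor_cipher(blob, key)
        if plain.startswith(b"VID0"):
            return plain[4:]
    return None

image = b"pixels-of-target-image"
blobs = [encrypt_video(b"frame-data", image), b"\x00unrelated-blob"]
print(try_decrypt(blobs, image))  # b'frame-data'
```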
  • after the above configuration of the target image and target video is completed, the user (information receiver) will be able to view on the terminal, through the methods provided in the foregoing embodiments, the target video (video 3) associated with the target image (image 2).
  • in use, after sending the detected target image to the server, the terminal receives the target video associated with that image returned by the server.
  • the terminal can send the image data of the target image to the server, or it can analyze and process the image data to obtain the image identification ID and/or image features of the target image and send those to the server.
  • the server can determine the corresponding target video according to the pre-established association relationship, and send the corresponding target video to the terminal for display.
  • when uploading the target image and target video to be associated, the user can upload at least one group of target image and target video at the same time.
  • different groups of target images and target videos to be associated can be associated and marked with different identifiers, so that the server can associate and store different groups of target images and target videos to be associated.
  • the configuration efficiency of the target image and target video to be associated can be greatly improved.
  • video information presentation for multiple target images can be implemented, which can further increase the user's interactive experience.
  • the user who uploads the target image and the target video to be associated can be the same or different from the user who uses this solution to play the video.
  • the users who upload the target image and target video to be associated can specifically be product promoters or promoters of product videos; the users who use this solution to play videos can specifically be product users, or recipients or viewers of product videos.
  • FIG. 6 is a schematic diagram of a third interface of a video playback method provided by an embodiment of the present disclosure.
  • the target image may include an image of a three-dimensional object in the live-action image.
  • the two-dimensional plane formed by the projection of the surface of the three-dimensional object can be used as the display position of the target image, and the video associated with the target image is displayed at the display position.
  • for example, when the target image is a vase, the video associated with the vase will be played on the projection plane of the vase.
  • the video playback device will continue to track the display position of the target image, so that it can adjust the playback position of the target video as the display position changes and keep the target video playing at the real-time display position of the target image. As shown in FIG. 6, take the vase as the target image: when the shooting angle of the live-action image changes (for example, the angle at which the vase is photographed changes), the display position of the recognized target image changes accordingly. The video playback device then adjusts the position and size of the image frame in real time and displays the target video in the resized image frame (play area).
  • in FIG. 6, the side surface (projection surface) of the vase (three-dimensional object) serving as the target image is first photographed at a vertical angle, and the display position of the target image is identified. The camera is then rotated to adjust the shooting angle; the display position of the side surface (projection surface) of the vase changes accordingly, and the corresponding target video is displayed at the adjusted display position.
  • the user may perform a trigger operation on the target video, and the display interface of the video playback device will switch to display the information associated with the target video. That is, in response to the user's trigger operation on the target video being played, the video playback apparatus determines the information associated with the triggered target video and displays it.
  • the information associated with the target video may be web page information, other image information, program information of other application programs, and the like.
  • for example, when the target image is an image of a product and the target video shows how to use a product of a certain brand, the associated information can be a webpage introduction of that brand's product, or a page for purchasing it in an application's online store.
  • when the target image is an image of a scenic spot and the target video is a promotional video of the scenic spot, the associated information can be more pictures of the scenic spot, the introduction page of the scenic spot on its official website, or ticket purchase information for the attraction.
  • the video playback device also supports breakpoint playback of the target video. That is, when the display position of the target image corresponding to a target video is lost in the live-action captured image (for example, while the target video is at 00':30"), and the display position is then regained within a preset time period, the video playback device can continue playing the target video from its last playback progress (i.e., continue from 00':30").
  • the video playback device when playing the target video, the video playback device will also acquire the playback progress of the target video; and play the target video according to the playback progress.
  • the video playback device may store the playback progress of the target video, such as current playback time information, and the like. Each time the target video is played, the video playback device can first extract the current playback time information of the target video and other playback progress, then determine the start time of the current playback, and finally play the target video from the start time. In this way, the breakpoint playback function for the target video is realized, and the user's audiovisual experience is further improved.
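The breakpoint-playback behavior can be sketched as a small progress tracker. The class, the 10-second resume window, and the timestamp units are illustrative assumptions; the disclosure only requires "a preset time period".

```python
class BreakpointPlayer:
    """Resume a target video from its last progress if the target image's
    display position is regained within a timeout (sketch; names, the
    window length, and units are hypothetical)."""

    def __init__(self, resume_window=10.0):
        self.resume_window = resume_window
        self.progress = 0.0   # seconds of the target video already played
        self.lost_at = None   # timestamp when the display position was lost

    def on_playing(self, seconds_played):
        # Store the current playback progress while the video plays.
        self.progress = seconds_played
        self.lost_at = None

    def on_position_lost(self, now):
        self.lost_at = now

    def on_position_regained(self, now):
        # Within the window: continue from the stored progress (e.g. 00:30);
        # otherwise restart from the beginning.
        if self.lost_at is not None and now - self.lost_at <= self.resume_window:
            return self.progress
        return 0.0

player = BreakpointPlayer(resume_window=10.0)
player.on_playing(30.0)            # played up to 00':30"
player.on_position_lost(now=100.0)
print(player.on_position_regained(now=105.0))  # 30.0 -> resume at 00':30"
```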
  • multiple target images may be recognized in the live-action captured image. That is to say, when the video playback device performs image recognition on a live-action captured image, it can recognize multiple target images in the image and simultaneously play the target videos of those multiple target images.
  • FIG. 7 is a schematic flowchart of another video playback method provided by an embodiment of the present disclosure. As shown in FIG. 7 , the method includes:
  • Step 201 Obtain a live-action captured image, and detect multiple target images in the live-action captured image;
  • Step 202 determining the display position of each target image in the live-action shot image
  • Step 203 according to the acquisition sequence of the target videos associated with the plurality of target images, store the target videos associated with the plurality of target images in a preset video playlist;
  • Step 204 according to the storage order of each target video in the video playlist, determine at least one target video from the video playlist as the target video to be played;
  • Step 205 Play the target video to be played at the display position of the target image associated with the target video to be played.
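Steps 201 to 205 can be sketched as a playlist that caps how many target videos play at once. The cap of two simultaneous videos is an assumption for the example (the disclosure only says "at least one"), and the IDs and play-area labels are invented.

```python
from collections import OrderedDict

def schedule_playback(detected, associations, max_concurrent=2):
    """Sketch of Steps 203-205.

    detected: list of (image_id, play_area) in acquisition order (Step 202).
    associations: image_id -> video_id mapping (hypothetical server data).
    Returns the videos to play now, each paired with the play area of its
    associated target image (Step 205).
    """
    playlist = OrderedDict()  # Step 203: store videos in acquisition order
    for image_id, play_area in detected:
        video_id = associations.get(image_id)
        if video_id is not None:
            playlist[video_id] = play_area
    # Step 204: take videos in storage order, up to the concurrency cap.
    to_play = list(playlist.items())[:max_concurrent]
    return [{"video": v, "play_area": a} for v, a in to_play]

associations = {"A": "video_1", "B": "video_2", "C": "video_3"}
detected = [("A", "area_801"), ("B", "area_802"), ("C", "area_803")]
print(schedule_playback(detected, associations))
```

With the cap of two, video_1 and video_2 play first; video_3 remains queued in the playlist until a slot frees up.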
  • the execution body of the processing method provided in this embodiment is the aforementioned video playback device, and in some embodiments of the present disclosure, it specifically refers to a client or a display terminal that can be installed or integrated on a terminal.
  • the user can operate the video playback device through the terminal, so that the video playback device can respond to the operation triggered by the user.
  • a plurality of target images will be included in the live-action captured image; the video playback device will determine the display position of each target image in the live-action captured image, and the target video associated with each target image will be played at its corresponding display position.
  • the number of target videos may be large, and playing many target videos at the same time may cause the terminal to freeze. Therefore, in order to obtain a better video playback effect and bring a better visual experience to the user, a video playlist may be set in the video playback device to limit the number of target videos played simultaneously in the real-scene captured image.
  • FIG. 8 is a schematic diagram of a fourth interface of a video playback method provided by an embodiment of the present disclosure.
  • the video playback device obtains a real-scene captured image, performs image recognition on the multiple target images in it, and frames the display position of each target image in turn with an image frame, obtaining several image frames (playback areas).
  • the playback area corresponding to the target image A is 801
  • the playback area corresponding to the target image B is 802
  • the playback area corresponding to the target image C is 803 .
  • the video playback device will send each target image to the server, so that the server can determine the target video corresponding to each target image according to the preset association relationship, and return each target video to the video playback device.
  • After receiving the target videos, the video playback device stores them in a video playlist.
  • The video playback device then selects one or more target videos as the target videos to be played, according to the order in which they are stored in the playlist, and plays each one in the playback area of its associated target image: for example, target video 1 associated with target image A is played in playback area 801, target video 2 associated with target image B is played in playback area 802, and target video 3 associated with target image C is played in playback area 803.
  • the video playlist stores all the target videos obtained from the server from the moment when the real-life shooting images are obtained.
  • a cleanup period may also be set for the video playlist, and the target videos stored in the list are cleaned up according to that period.
  • the video playlist also stores the acquisition time of the target video.
  • the target videos are stored in the video playlist in reverse order of their acquisition times. In other words, the more recently a target video was acquired, the closer it is stored to the top of the video playlist, and the earlier it is played.
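A small sketch of the reverse-chronological playlist described above, together with the optional cleanup period; the timestamps and entry structure are illustrative assumptions:

```python
# Illustrative sketch: newly acquired videos are inserted so that the most
# recently acquired video sits at the head of the playlist.
def insert_by_acquisition_time(playlist, video_id, acquired_at):
    playlist.append((acquired_at, video_id))
    # Reverse order of acquisition time: newest first.
    playlist.sort(key=lambda entry: entry[0], reverse=True)
    return playlist

def cleanup(playlist, now, max_age):
    # Optional cleanup period: drop entries older than max_age seconds.
    return [(t, v) for t, v in playlist if now - t <= max_age]
```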
  • a manner of playing audio and video separately may be adopted. For example, while the video pictures of multiple target videos to be played are shown simultaneously, only the audio data of a single target video is played.
  • when playing the target videos, each target video to be played can first be decoded to obtain its audio data and video data; the video data of every target video to be played is played, while only the audio data of the most recently acquired target video stored in the video playlist is played.
  • the audio data and the video data will be synchronously processed by using the audio and video synchronization technology to ensure the audio and video synchronization during playback.
  • the user thus hears the sound of only one target video at a time while watching the pictures of multiple target videos, so the audiovisual experience of the video information is not affected.
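This decode-then-split behavior can be sketched as follows; `decode` is an assumed placeholder for a real demuxer/decoder, and playlist entries follow the newest-first ordering described above:

```python
# Illustrative sketch: every selected video's picture is rendered, but only
# the most recently acquired video contributes audio.
def play_with_single_audio(playlist, decode):
    # playlist entries are (acquired_at, video_id), newest first.
    outputs = []
    newest_id = playlist[0][1] if playlist else None
    for acquired_at, video_id in playlist:
        audio, video = decode(video_id)
        # Video frames are always shown; audio only for the newest entry.
        outputs.append((video_id, video, audio if video_id == newest_id else None))
    return outputs
```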
  • the video playback method provided by the embodiments of the present disclosure includes: obtaining a real-scene captured image; detecting a target image in the real-scene captured image; determining a display position of the target image in the real-scene captured image; and obtaining a target video associated with the target image and playing it at that display position.
  • the video playback method provided in this embodiment can reduce the preparation period and cost of presenting information with augmented reality display technology, giving users a better interactive and visual experience.
  • FIG. 9 is a structural block diagram of a video playback apparatus provided by an embodiment of the present disclosure.
  • the video playback device includes: an acquisition module 10 , a processing module 20 and a playback module 30 .
  • the acquisition module 10 is configured to acquire a real-scene captured image.
  • the acquisition module may include an image acquisition device configured by the video playback device itself, and acquire the real-scene shot image by capturing the real-scene image in real time.
  • the acquisition module may also acquire the real-life captured images stored or captured from the server or the video playback device itself, which is not limited in the present disclosure.
  • the processing module 20 is configured to detect a target image in the live-action shot image, and determine a display position of the target image in the live-action shot image.
  • the playing module 30 is configured to acquire a target video associated with the target image, and play the target video at the display position of the target image in the real-life shot image.
  • the video playback device further includes a first interaction module, configured to determine triggered information associated with the target video in response to the user's triggering operation on the target video being played, so that the playing module 30 can display the information.
  • the video playback device further includes: a second interaction module
  • the second interaction module is configured to receive at least one group of a target image and a target video to be associated uploaded by the user, and upload them to the server, so that the server associates and stores the target image and target video to be associated.
  • the playback module 30, when acquiring the target video associated with the target image, is specifically configured to: send the target image to the server, and receive the target video associated with the target image returned by the server.
  • the real-scene captured image includes multiple target images. When obtaining the target videos associated with the target images and playing them at the display positions of the target images in the real-scene captured image, the playback module 30 is specifically configured to: store the target videos associated with the multiple target images in a preset video playlist according to the order in which those target videos were acquired; determine at least one target video from the playlist as a target video to be played, according to the storage order of the target videos in the playlist; and play each target video to be played at the display position of its associated target image.
  • the video playlist also stores the acquisition time of each target video. When storing the target videos associated with the multiple target images in the preset video playlist according to their acquisition order, the playback module 30 is specifically configured to: store the target videos in the playlist in reverse order of their acquisition times.
  • the playback module 30, when playing the target videos to be played, is specifically configured to: decode each target video to be played to obtain its audio data and video data; play the video data of each target video to be played; and play the audio data of the most recently acquired target video stored in the video playlist.
  • when playing the target video at the display position of the target image in the real-scene captured image, the playback module 30 is specifically configured to: determine the corresponding playback area in the real-scene captured image according to the display position; perform video preprocessing on the target video according to the playback area; and play the target video in the playback area.
  • when performing video preprocessing on the target video according to the playback area and playing the target video in the playback area, the playback module 30 is specifically configured to: perform three-dimensional space rendering processing on the video data of the target video according to the spatial features of the playback area in the real-scene captured image, so as to play the target video in the playback area.
  • the display positions include: image edge positions, or/and image vertex positions.
  • the playing module 30 when playing the target video, is specifically configured to: acquire the playing progress of the target video; and play the target video according to the playing progress.
  • the video playback device provided by the embodiments of the present disclosure is configured to: obtain a real-scene captured image; detect a target image in the real-scene captured image; determine a display position of the target image in the real-scene captured image; and obtain a target video associated with the target image and play it at that display position.
  • the video playback device provided in this embodiment can reduce the preparation period and cost of presenting information with augmented reality display technology.
  • the electronic device provided in this embodiment can be used to implement the technical solutions of the foregoing method embodiments, and the implementation principles and technical effects thereof are similar, and details are not described herein again in this embodiment.
  • the electronic device 900 may be a terminal device or a media library.
  • the terminal equipment may include, but is not limited to, such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (Personal Digital Assistant, referred to as PDA), tablet computers (Portable Android Device, referred to as PAD), portable multimedia players (Portable Media Player, PMP for short), in-vehicle terminals (such as in-vehicle navigation terminals), mobile terminals such as wearable electronic devices, and stationary terminals such as digital TVs, desktop computers, smart home devices, and the like.
  • the electronic device shown in FIG. 10 is only an embodiment, and should not impose any limitation on the function and scope of use of the embodiment of the present disclosure
  • the electronic device 900 may include a processor 901 for executing the video playback method (such as a central processing unit or a graphics processor), which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage device 908 into a random access memory (RAM) 903. The RAM 903 also stores various programs and data necessary for the operation of the electronic device 900.
  • the processor 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904.
  • An input/output (I/O) interface 905 is also connected to bus 904 .
  • the following may be connected to the I/O interface 905: an input device 906 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, or gyroscope; an output device 907 including, for example, a liquid crystal display (LCD), speaker, or vibrator; a storage device 908 including, for example, magnetic tape or a hard disk; and a communication device 909.
  • the communication means 909 may allow the electronic device 900 to communicate wirelessly or by wire with other devices to exchange data. While FIG. 10 shows an electronic device 900 having various means, it should be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
  • an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts of the embodiments of the present disclosure.
  • the computer program may be downloaded and installed from the network via the communication device 909, or from the storage device 908, or from the ROM 902.
  • when the computer program is executed by the processor 901, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
  • Embodiments of the present disclosure also include a computer program, which, when executed by a processor, is configured to perform the above-mentioned functions defined in the methods of the embodiments of the present disclosure.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium can be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), flash memory, optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport a program for use by or in connection with the instruction execution system, apparatus, or device .
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, electrical wire, optical fiber cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
  • the aforementioned computer-readable medium carries one or more programs, and when the aforementioned one or more programs are executed by the electronic device, causes the electronic device to execute the methods shown in the foregoing embodiments.
  • Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or media library.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in the flowcharts or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations can be implemented in dedicated hardware-based systems that perform the specified functions or operations , or can be implemented in a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure may be implemented in a software manner, and may also be implemented in a hardware manner.
  • the name of the unit does not constitute a limitation of the unit itself under certain circumstances, for example, the first obtaining unit may also be described as "a unit that obtains at least two Internet Protocol addresses".
  • exemplary types of hardware logic components include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, devices, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM), flash memory, optical fiber, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • a video playback method includes:
  • a target video associated with the target image is acquired, and the target video is played at the display position of the target image in the live-action image.
  • the method further includes: in response to the user's triggering operation on the target video being played, determining triggered information associated with the target video; and displaying the information.
  • the method further includes: receiving at least one group of a target image and a target video to be associated uploaded by the user, and uploading them to the server, so that the server associates and stores the target image and target video to be associated.
  • the acquiring of the target video associated with the target image includes: sending the target image to the server, and receiving the target video associated with the target image returned by the server.
  • the real-scene captured image includes multiple target images, and the obtaining of the target videos associated with the target images and the playing of the target videos at the display positions of the target images includes: storing the target videos associated with the multiple target images in a preset video playlist according to the order in which those target videos were acquired; determining at least one target video from the playlist as a target video to be played, according to the storage order of the target videos in the playlist; and playing each target video to be played at the display position of its associated target image.
  • the video playlist also stores the acquisition time of each target video, and the storing of the target videos associated with the multiple target images in the preset video playlist according to their acquisition order includes: storing the target videos in the playlist in reverse order of their acquisition times.
  • the playing of the target video to be played includes: decoding each target video to be played to obtain its audio data and video data; playing the video data of each target video to be played; and playing the audio data of the most recently acquired target video stored in the video playlist.
  • the playing of the target video at the display position of the target image in the real-scene captured image includes: determining the corresponding playback area in the real-scene captured image according to the display position; performing video preprocessing on the target video according to the playback area; and playing the target video in the playback area.
  • the performing of video preprocessing on the target video according to the playback area and the playing of the target video in the playback area includes: performing three-dimensional space rendering processing on the video data of the target video according to the spatial features of the playback area in the real-scene captured image, so as to play the target video in the playback area.
  • the display positions include: image edge positions, or/and image vertex positions.
  • the playing of the target video includes: acquiring the playback progress of the target video, and playing the target video according to the playback progress.
  • a video playback device includes: an acquisition module, a processing module, and a playback module;
  • an acquisition module, configured to acquire a real-scene captured image;
  • a processing module configured to detect a target image in the live-action shot image, and determine a display position of the target image in the live-action shot image
  • a playing module is configured to acquire a target video associated with the target image, and play the target video at the display position of the target image in the real-life shot image.
  • the video playback device further includes: a first interaction module
  • the first interaction module is configured to determine triggered information associated with the target video in response to the user's triggering operation on the target video being played, so that the information can be displayed by the playing module.
  • the video playback device further includes: a second interaction module
  • the second interaction module is configured to receive at least one group of a target image and a target video to be associated uploaded by the user, and upload them to the server, so that the server associates and stores the target image and target video to be associated.
  • the playback module when acquiring the target video associated with the target image, is specifically configured to: send the target image to the server, and receive the target video associated with the target image returned by the server.
  • the real-scene captured image includes multiple target images. When obtaining the target videos associated with the target images and playing them at the display positions of the target images in the real-scene captured image, the playback module is specifically configured to: store the target videos associated with the multiple target images in a preset video playlist according to the order in which those target videos were acquired; determine at least one target video from the playlist as a target video to be played, according to the storage order of the target videos in the playlist; and play each target video to be played at the display position of its associated target image.
  • the video playlist also stores the acquisition time of each target video. When storing the target videos associated with the multiple target images in the preset video playlist according to their acquisition order, the playback module is specifically configured to: store the target videos in the playlist in reverse order of their acquisition times.
  • the playing module, when playing the target videos to be played, is specifically configured to: decode each target video to be played to obtain its audio data and video data; play the video data of each target video to be played; and play the audio data of the most recently acquired target video stored in the video playlist.
  • when playing the target video at the display position of the target image in the real-scene captured image, the playing module is specifically configured to: determine the corresponding playback area in the real-scene captured image according to the display position; perform video preprocessing on the target video according to the playback area; and play the target video in the playback area.
  • when performing video preprocessing on the target video according to the playback area and playing the target video in the playback area, the playback module is specifically configured to: perform three-dimensional space rendering processing on the video data of the target video according to the spatial features of the playback area in the real-scene captured image, so as to play the target video in the playback area.
  • the display positions include: image edge positions, or/and image vertex positions.
  • the playback module when playing the target video, is specifically configured to: acquire the playback progress of the target video; and play the target video according to the playback progress.
  • an electronic device includes: at least one processor and a memory;
  • the memory stores computer-executable instructions
  • the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor executes the video playback method as described in any preceding item.
  • a computer-readable storage medium stores computer-executable instructions, and when a processor executes the computer-executable instructions, the The video playback method described in any preceding item.
  • a computer program product includes a computer program that, when executed by a processor, implements the video playback method described in any preceding item.
  • a computer program when executed by a processor, is used to implement the video playback method according to any one of the preceding items.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Television Signal Processing For Recording (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The video playback method, apparatus, electronic device, storage medium, computer program product, and computer program provided by the embodiments of the present disclosure obtain a real-scene captured image, detect a target image and determine its display position in the real-scene captured image, obtain a target video associated with the target image, and play the target video at that display position in the real-scene captured image. This can reduce the preparation cycle and cost of presenting information with augmented reality display technology; on the other hand, it also provides users with more channels for presenting video information, giving users a better interactive and visual experience.

Description

Video playback method, apparatus, electronic device, and storage medium
This application claims priority to Chinese Patent Application No. 202011173352.9, entitled "Video playback method, apparatus, electronic device, and storage medium" and filed on October 28, 2020, the content of which is incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the field of computers, and in particular to a video playback method, apparatus, electronic device, storage medium, computer program product, and computer program.
Background
Augmented Reality (AR) technology is a technology that skillfully fuses virtual information with the real world.
Presenting information by means of augmented reality has become a possible form of information presentation. In the prior art, a three-dimensional modeling image of a virtual character or virtual scene to be presented is constructed in advance, so that when a user shoots a real scene with a terminal, a real-scene captured image including the three-dimensional modeling image can be obtained synchronously.
However, such an information presentation method requires a large number of three-dimensional modeling images to be constructed in advance, which is costly and has a long construction cycle, and is not conducive to rapid information presentation.
Summary
In view of the above problems, embodiments of the present disclosure provide a video playback method, apparatus, electronic device, storage medium, computer program product, and computer program.
In a first aspect, an embodiment of the present disclosure provides a video playback method, including:
obtaining a real-scene captured image;
detecting a target image in the real-scene captured image;
determining a display position of the target image in the real-scene captured image;
obtaining a target video associated with the target image, and playing the target video at the display position of the target image in the real-scene captured image.
In a second aspect, an embodiment of the present disclosure provides a video playback apparatus, including:
a processing module, configured to obtain a real-scene captured image, detect a target image in the real-scene captured image, and determine a display position of the target image in the real-scene captured image;
a playback module, configured to obtain a target video associated with the target image, and play the target video at the display position of the target image in the real-scene captured image.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor and a memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the video playback method described in the first aspect and its various possible designs.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the video playback method described in the first aspect and its various possible designs.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product, including a computer program that, when executed by a processor, implements the video playback method described in the first aspect and its various possible designs.
In a sixth aspect, an embodiment of the present disclosure provides a computer program that, when executed by a processor, implements the video playback method described in the first aspect and its various possible designs.
Embodiments of the present disclosure provide a video playback method, apparatus, electronic device, storage medium, computer program product, and computer program. The method includes: obtaining a real-scene captured image and detecting a target image in it; determining a display position of the target image in the real-scene captured image; obtaining a target video associated with the target image; and playing the target video at the display position of the target image in the real-scene captured image. The video playback method provided in this embodiment can reduce the presentation cost and preparation cycle of presenting information with augmented reality display technology; on the other hand, it also provides users with more channels for presenting video information, giving users a better interactive and visual experience.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present disclosure or the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are some embodiments of the present disclosure, and other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
FIG. 1 is a schematic diagram of a network architecture on which embodiments of the present disclosure are based;
FIG. 2 is a schematic flowchart of a video playback method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a first interface of a video playback method provided by an embodiment of the present disclosure;
FIG. 4 is a signaling interaction diagram of a video playback method provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a second interface of a video playback method provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a third interface of a video playback method provided by an embodiment of the present disclosure;
FIG. 7 is a schematic flowchart of another video playback method provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a fourth interface of a video playback method provided by an embodiment of the present disclosure;
FIG. 9 is a structural block diagram of a video playback apparatus provided by an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some, not all, of the embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
Augmented Reality (AR) technology is a technology that skillfully fuses virtual information with the real world.
When performing augmented reality display, the terminal first shoots the real scene to obtain the current real-scene captured image. Then, augmented reality technology is used to process the real-scene captured image, so that preset virtual information is superimposed on the real-scene captured image, and the superimposed picture is presented to the user.
Generally, the virtual information superimposed on the real-scene captured image is a pre-built three-dimensional modeling image of a virtual character or virtual scene. However, the construction process of three-dimensional modeling images is relatively complex, and the technical and labor costs of construction are high. As a result, when augmented reality display technology is used for information presentation, each piece of information requires a long cycle and high cost to present.
In view of this problem, the inventor creatively found through research that the form of information presentation is not limited to three-dimensional modeling images, and can be replaced by a form with lower technical and labor costs. The method according to the embodiments of the present disclosure places some existing video data into the real-scene captured image and displays it by means of augmented reality display. In this way, the preparation cycle and cost of presenting information with augmented reality display technology can be reduced; on the other hand, users are provided with more channels for presenting video information, giving users a better interactive and visual experience.
Referring to FIG. 1, FIG. 1 is a schematic diagram of a network architecture on which embodiments of the present disclosure are based. The network architecture shown in FIG. 1 may specifically include a terminal 1 and a server 2.
The terminal 1 may specifically be a hardware device that can shoot a real scene and display the captured scene, such as a user's mobile phone, a smart home device, a tablet computer, or a wearable device. A video playback device may be integrated or installed in the terminal 1; the video playback device is hardware or software for executing the video playback method of the present disclosure. The video playback device can provide the terminal 1 with a display page for augmented reality display, and the terminal 1 uses its screen or display component to show the user the augmented reality display page provided by the video playback device.
The server 2 may specifically be a server or server cluster set in the cloud, in which video data, image data, and the like related to the video playback method provided by the present disclosure may be stored.
Specifically, when executing the video playback method provided by the present disclosure, the video playback device may also interact with the server 2 through the network component of the terminal 1, obtain the image data and video data stored in the server 2, and perform corresponding processing and display.
The architecture shown in FIG. 1 is applicable to the field of information presentation; in other words, it can be used for information presentation in various scenarios.
For example, the video playback method provided by the present disclosure can be applied to game scenarios based on augmented reality display. For instance, in some "treasure hunt" games based on augmented reality display technology, the video playback method provided by the present disclosure can be used to push and present "clue" videos during the "treasure hunt".
For example, the video playback method provided by the present disclosure can be applied to advertising scenarios based on augmented reality display. For some goods or products, the method can present videos related to them, thereby providing the user with more information about the goods and improving the user experience.
For another example, in "cloud commentary" scenarios based on augmented reality display technology, such as landmark buildings and museums, videos associated with the images of the "landmarks", "buildings", "museum collections", or "historical pictures" that require "cloud commentary" can be played through the video playback method provided by the present disclosure, thereby providing better services to users.
In addition, in some daily-life scenarios that use the shooting function, the video playback method provided by the present disclosure can also be used to play video information, so as to present more information about the scene to the user and increase the user's interactive experience.
As a further example, in scenarios such as "scan-to-pay" or "taking pictures" that require turning on the terminal camera for real-scene shooting, the video playback method provided by the present disclosure can be executed together, so as to complete information presentation in various daily-life scenarios.
The video playback method provided by the present disclosure will be further described below.
FIG. 2 is a schematic flowchart of a video playback method provided by an embodiment of the present disclosure. Referring to FIG. 2, the video playback method provided by the embodiment of the present disclosure includes:
Step 101: Obtain a real-scene captured image, and detect a target image in the real-scene captured image;
Step 102: Determine a display position of the target image in the real-scene captured image;
Step 103: Obtain a target video associated with the target image, and play the target video at the display position of the target image in the real-scene captured image.
It should be noted that the method provided in this embodiment is executed by the aforementioned video playback device, which in some embodiments of the present disclosure specifically refers to a client or display terminal that can be installed or integrated on a terminal. The user can operate the video playback device through the terminal, so that the video playback device can respond to the operations triggered by the user.
FIG. 3 is a schematic diagram of a first interface of a video playback method provided by an embodiment of the present disclosure.
First, as shown in FIG. 3, the video playback device obtains a real-scene captured image, which may be an image obtained by the terminal calling its own shooting component to shoot the current environment, or a real-time image of the real scene obtained by the video playback device through other channels.
随后，视频播放装置将在实景拍摄图像中进行图像识别，以确定该实景拍摄图像中是否存在可用于进行视频播放的目标图像。可以理解的是，视频播放装置对于实景拍摄图像中的目标图像的识别可通过图像识别技术实现。在一个实施例中，目标图像可以为二维的平面图像，对应的显示位置可以为该二维平面图像所在的位置。在一个实施例中，目标图像也可以为三维物体的图像，对应的显示位置可以为该三维物体在二维平面上进行投影的投影位置等等。根据本公开实施例的图像识别技术可基于二维图像识别技术实现，即，利用图像识别技术可对预设的平面图片、三维物体的投影表面，以及产生一定形变的平面图片进行图像识别。此外，当目标图像包括三维物体的图像时，根据本公开的实施例也可基于物体识别技术实现。本公开不对具体的图像识别技术进行限定。
通过对实景拍摄图像进行图像识别，视频播放装置可在实景拍摄图像中检测出目标图像在实景拍摄图像中的显示位置。如图3所示的，目标图像可以包括如图3中所示的图像框中的图像，相应地，显示位置可利用图像框301的方式进行框出，该图像框301用于表示目标视频的播放区域。其中，目标图像的显示位置具体可通过目标图像的图像位置来确定，而图像位置可包括但不限于，目标图像的图像边缘位置，以及图像顶点位置等等。
之后,视频播放装置可以确定与所述目标图像相关联的目标视频,并在所述实景拍摄图像中目标图像所在的显示位置处播放所述目标视频。
在上述过程中，为了能够在图像的显示位置处播放视频，还需要根据图像的显示位置对目标视频进行基于三维空间的渲染处理，以使目标视频的视频画面能够贴合显示位置，并进行播放。
具体实现上,可先根据显示位置,确定所述实景拍摄图像中相应的播放区域,然后,根据播放区域对所述目标视频进行视频预处理,并在所述播放区域中播放所述目标视频。
可以理解的是,上述的显示位置是指目标图像在实景拍摄图像中的图像所占据的范围,其可包括图像边缘位置,或/和,图像顶点位置。
因此,在确定播放区域时,则可根据目标图像的图像边缘位置,或/和,图像顶点位置,在实景拍摄图像中划分出相应的图像框,以作为播放区域(如图3所示的301)。
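作为对上述“根据图像顶点位置划分播放区域”的一个示意，下面给出一个基于假设的Python片段（其中的函数名与字段名均为本示例自拟，并非本公开限定的实现）：根据识别出的目标图像各顶点坐标，计算其在实景拍摄图像中的外接矩形，作为播放区域。

```python
# 假设示例：根据目标图像的图像顶点位置，在实景拍摄图像中
# 划分出图像框（外接矩形）作为播放区域。字段名为本示例自拟。

def make_play_region(vertices):
    """vertices: 目标图像各顶点在实景拍摄图像中的 (x, y) 坐标列表。"""
    xs = [p[0] for p in vertices]
    ys = [p[1] for p in vertices]
    return {"left": min(xs), "top": min(ys),
            "width": max(xs) - min(xs), "height": max(ys) - min(ys)}

# 假设识别出目标图像的四个顶点（略有透视形变）
region = make_play_region([(120, 80), (360, 90), (355, 300), (118, 295)])
print(region)  # 播放区域的外接矩形
```

实际实现中，播放区域也可直接保留为四边形（四个顶点），以便后续做透视贴合，此处仅以矩形示意。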
然后,可根据所述播放区域在所述实景拍摄图像中的空间特征,对所述目标视频的视频数据进行三维空间渲染处理,以在所述播放区域(如图3所示的301)中播放所述目标视频(如图3所示的302)。
其中，空间特征用于表示播放区域在实景拍摄图像的三维空间上的空间位置属性，如播放区域顶点的空间位置坐标、播放区域边缘的空间位置坐标、播放区域所在平面与实景拍摄图像的拍摄面之间的空间角度信息等等。
结合播放区域的空间特征,可对于目标视频的视频数据进行三维空间渲染处理,以使得渲染后的目标视频的视频画面能够与播放区域在三维空间贴合。其中,可选的,三维空间渲染处理可包括像素的坐标映射处理,即将目标视频的视频画面中的各像素的二维像素坐标,采用空间映射的方式映射至播放区域的三维坐标上。当然,该三维空间渲染处理还可采用现有的其他方式实现,本申请对此不进行限制。
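上述“像素的坐标映射处理”可以用一个简化的双线性插值来示意：将视频画面中以比例表示的二维UV坐标，映射到播放区域四个角点所张成的三维空间位置上。以下Python片段为本示例自拟的示意实现，并非本公开限定的渲染方式：

```python
# 假设示例：将视频画面像素的二维UV坐标（取值范围 [0, 1]），
# 通过双线性插值映射到播放区域四个三维角点之间的空间位置上。

def map_uv_to_region(u, v, corners):
    """corners: 按 [左上, 右上, 右下, 左下] 排列的四个三维角点坐标。"""
    p00, p10, p11, p01 = corners  # 左上、右上、右下、左下
    top = [p00[i] + (p10[i] - p00[i]) * u for i in range(3)]     # 上边沿插值
    bottom = [p01[i] + (p11[i] - p01[i]) * u for i in range(3)]  # 下边沿插值
    return tuple(top[i] + (bottom[i] - top[i]) * v for i in range(3))

# 假设播放区域所在平面在三维空间中有一定倾斜
corners = [(0, 0, 0), (2, 0, 0), (2, 1, 1), (0, 1, 1)]
print(map_uv_to_region(0.0, 0.0, corners))  # 视频画面左上角贴合到左上角点
print(map_uv_to_region(1.0, 1.0, corners))  # 视频画面右下角贴合到右下角点
```

真实的三维空间渲染通常还需结合相机内外参与透视投影，由图形管线（如OpenGL纹理贴图）完成，此处仅示意坐标映射的基本思想。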
最终,实现所述实景拍摄图像中的显示位置处播放所述目标视频的效果。
为了便于快速对目标视频和目标图像之间的关联关系进行部署，也便于快速获取与目标图像相关联的目标视频，在可选实施方式中，目标视频与目标图像之间的关联关系可预先建立并存储在前述的服务器中。
图4为本公开实施例提供的一种视频播放方法的信令交互图。其中,目标图像和目标视频之间的关联关系是预先构建在服务器中的,该关联关系的预建方式可参见图4:
首先,终端可通过各种渠道获得图像和视频,然后通过界面(如图5所示界面)将想要关联的目标图像和目标视频上传至服务器。其中,对于该图片和视频的获取方式,可为终端拍摄获取的,也可为终端通过网络下载,或通过近场传输的方式从其他终端获取,本公开对此不进行限制。
随后,服务器在接收待关联的目标图像和目标视频之后,将对其二者进行关联存储,以确定其二者的关联关系。具体来说,服务器中可预存有关联关系的存储列表,以用于存储不同终端上传的各待关联的图像和视频之间的关联关系,其具体存储方式将在下面实施例进行描述,本处对此不进行限制。
再后,当终端启动摄像头之后,将按照前述的方式执行视频播放方法,即终端将对实景进行拍摄,获得相应的实景拍摄图像,然后,对实景拍摄图像进行图像识别,以得到实景拍摄图像中的目标图像;再后,确定目标图像的显示位置并将该目标图像发送至服务器。
此时,服务器会根据存储列表中预先建立的关联关系,确定与该目标图像对应的目标视频,并将该目标视频发送至终端。最后,在终端接收到目标视频后,终端将会在实景拍摄图像中的目标图像的显示位置处对目标视频进行播放。
需要说明的是，尽管图4示出的上传待关联目标图像和目标视频的终端与使用本方案进行视频播放的终端为同一终端，但在其他示例或实际使用中，该上传待关联目标图像和目标视频的终端与使用本方案进行视频播放的终端也可为不同终端。即，本申请不对上传待关联目标图像和目标视频的终端与使用本方案进行视频播放的终端是否为同一终端进行任何限制，本领域技术人员可结合实际场景自行确定。
特别来说,由于在本公开提供的技术方案中,可直接对与目标图像相关联的目标视频进行播放,而不必再进行三维虚拟建模的建模处理,因此,对于需要呈现信息的信息所有者来说,其可通过如下操作实现对于目标图像和目标视频的快速关联,以使更多的用户获取到信息所有者想要呈现的信息,其成本和制备难度大大降低。
具体来说,图5为本公开实施例提供的一种视频播放方法的第二界面示意图。如图5所示,终端中存储有若干图像和视频的素材;用户(信息所有者)可通过操作,选中希望关联的目标图像(如图5所示的图像2)和目标视频(如图5所示的视频3)。也就是说,首先,视频播放装置将响应用户(信息所有者)触发的上传操作,确定待关联的目标图像和目标视频。然后,视频播放装置会将待关联的目标图像和目标视频上传至服务器,以供所述服务器对待关联的目标图像和目标视频进行关联存储。
其中,关联存储的方式可为多种,例如,可为图像以及视频分别设置标识ID,通过将两个标识ID对应存储的方式将待关联的目标图像和目标视频进行存储。也就是说,终端将目标图像发送至服务器的具体实现方式,可为终端将目标图像的标识ID发送至服务器,以供服务器根据目标图像的标识ID从预存的大量图像中找到相应标识ID的目标图像,并发送相应的目标视频。
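上述“通过标识ID对应存储”的关联方式可以用一个极简的服务器侧示意来说明。以下Python片段中的数据结构与函数名均为本示例自拟，仅示意“终端上报图像标识ID，服务器据关联关系返回目标视频”的流程：

```python
# 假设示例：服务器侧以“图像标识ID -> 视频标识ID”的对应关系
# 维护关联存储列表，并据此查找目标视频。

association_table = {}   # image_id -> video_id 的关联存储列表
video_store = {}         # video_id -> 视频数据（此处以字符串代替视频流）

def store_association(image_id, video_id, video_data):
    """对待关联的目标图像和目标视频进行关联存储。"""
    video_store[video_id] = video_data
    association_table[image_id] = video_id

def find_target_video(image_id):
    """终端上报图像标识ID后，服务器据此查找并返回目标视频。"""
    video_id = association_table.get(image_id)
    return video_store.get(video_id)

store_association("img-2", "vid-3", "video-3-data")
print(find_target_video("img-2"))  # video-3-data
```

实际部署中，关联关系通常落在数据库表中，查找时还可能以图像特征而非标识ID作为键，此处仅示意最简单的对应存储。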
当然，在其他的关联存储方式中，还可采用对待关联的目标图像和目标视频进行对称密钥加解密的方式进行存储，如，在存储时，可对目标图像进行处理，以得到唯一密钥，并利用该密钥将与目标图像关联的目标视频进行加密存储；在后续使用时，可对目标图像再次进行处理，以得到前述的唯一密钥，利用该唯一密钥能够从存储的若干视频中找到其能够解密的视频，该视频将为目标视频。
在完成上述对目标图像和目标视频的配置后,用户(信息接收者)将可通过前述实施例所提供的方法,在终端上观看到与目标图像(图像2)相关联的目标视频(视频3)。
在使用时,终端将检测得到的目标图像发送至服务器后,将接收服务器返回的与所述目标图像相关联的目标视频。其中,特别说明的是,对于终端来说,其可将目标图像的图像数据发送给服务器,也可对目标图像的图像数据进行分析处理,以获得目标图像的图像标识ID和/或目标图像的图像特征,并将其发送给服务器。而无论是图像数据还是图像标识ID还是图像特征,服务器均可根据预先建立的关联关系,确定相应的目标视频,并将相应的目标视频发送给终端进行显示。
需要说明的是,在接收用户上传的待关联目标图像和目标视频时,用户可同时上传至少一组的待关联目标图像和目标视频。其中,不同组的待关联目标图像和目标视频可采用不同的标识进行关联标记,以便服务器对不同组的待关联目标图像和目标视频进行关联存储。这样的方式,能够大大提高对于待关联目标图像和目标视频的配置效率。通过上传多组关联目标图像和目标视频,可以实现针对多个目标图像的视频信息呈现,能够进一步增加用户的交互体验。
可以理解的是，上传待关联目标图像和目标视频的用户与使用本方案进行视频播放的用户可以相同也可以不同。以前述实际应用场景为例，在通过本公开提供的视频播放方法对一些商品或者产品进行相关视频呈现的场景中，上传待关联目标图像和目标视频的用户具体可为产品推广者，或，商品视频的推广者；而使用本方案进行视频播放的用户具体可为产品使用者，商品视频接收者或观看者等。
此外,图6为本公开实施例提供的一种视频播放方法的第三界面示意图。在其他可选实施例中,目标图像可以包括实景拍摄图像中的三维物体的图像。通过识别该三维物体的图像,可以将该三维物体的表面投影形成的二维平面作为该目标图像的显示位置,并在该显示位置处显示与目标图像关联的视频。如图6所示,以图6中的花瓶为目标图像为例进行说明,与该花瓶相关联的视频将在该花瓶的投影平面处进行播放。
此外,在一个实施例中,在对目标视频进行持续播放的过程中,视频播放装置将持续对目标图像的显示位置进行追踪,以便能够根据目标图像的显示位置的变化相应地调整目标视频的播放位置,从而在目标图像的实时的显示位置进行目标视频播放。因此,如图6所示的,以目标图像为图6中的花瓶为例进行说明。当实景拍摄图像的拍摄角度发生变化时(例如,拍摄花瓶的角度发生变化),其识别出的目标图像的显示位置也将相应发生变化,此时,视频播放装置将实时调整图像框的位置和尺寸,并将目标视频显示在调整后的图像框(播放区域)中。如图6所示方案中,在起始阶段,采用垂直角度拍摄目标图像花瓶(三维物体)的侧表面(投影表面),此时,可识别该目标图像花瓶的显示位置,并在该显示位置处显示相应的目标视频,随后,旋转相机,以对该拍摄角度进行调整,此时,可识别出该目标图像花瓶(三维物体)的侧表面(投影表面)的显示位置发生一定调整,并在该调整后的显示位置处显示相应的目标视频。
此外,在其他可选实施方式中,为了向用户提供更多的信息,用户可对目标视频进行触发操作,视频播放装置的展示界面将进行切换,以使与目标视频相关联的信息被展示。即,视频播放装置将响应用户对播放的目标视频的触发操作,确定被触发的与目标视频相关联的信息,对所述信息进行展示。
其中,与目标视频相关联的信息可为网页信息,也可为其他图像信息、其他应用程序的程序信息等等。例如,目标图像为某产品的图像,目标视频展示了某品牌产品的产品使用方法,则该相关联的信息则可为该品牌产品的网页介绍,或该品牌产品在某应用网上商城中的购买链接等;又例如,目标图像为某景点的图像,目标视频为某景点的宣传视频,该相关联的信息则可为该景点的更多景点图片,或景点在官方网站上的网页介绍,还可以是该景点的门票购买信息等等。
此外,在其他可选实施方式中,该视频播放装置还支持对于目标视频的断点播放,即当实景拍摄图像中丢失某一目标视频对应的目标图像的显示位置(例如,此时目标视频播放到00’:30”),随后又在预设时间段内找回该目标图像的显示位置时,视频播放装置可从该目标视频上一次播放进度(即00’:30”),继续对该目标视频进行播放(从00’:30”继续播放)。
也就是说,视频播放装置在播放所述目标视频时,还将获取所述目标视频的播放进度;根据所述播放进度播放所述目标视频。
可选实施方式中,视频播放装置中可存储有目标视频的播放进度,如当前播放时刻信息等。而在每一次播放目标视频时,视频播放装置可先提取目标视频的当前播放时刻信息等播放进度,然后,确定本次播放的起始时刻,最后从该起始时刻进行目标视频的播放。通过这样的方式,实现对于目标视频的断点播放功能,进一步提高用户的视听体验。
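上述断点播放逻辑可以用如下Python片段示意（类名、字段名与“找回时限”的具体取值均为本示例自拟）：当目标图像的显示位置丢失时记录当前播放时刻，在预设时间段内找回则从该进度继续播放，超出时限则从头播放。

```python
# 假设示例：目标视频的断点播放。显示位置丢失时记录播放进度，
# 预设时间段（resume_window 秒）内找回则续播，否则从头播放。

class PlaybackSession:
    def __init__(self, video_id, resume_window=5.0):
        self.video_id = video_id
        self.progress = 0.0          # 当前播放时刻（秒）
        self.lost_at = None          # 显示位置丢失的时刻
        self.resume_window = resume_window

    def on_target_lost(self, now):
        self.lost_at = now           # 丢失显示位置：暂停并记录

    def on_target_found(self, now):
        """返回本次播放的起始时刻。"""
        if self.lost_at is not None and now - self.lost_at <= self.resume_window:
            start = self.progress    # 预设时间内找回：断点续播
        else:
            start = 0.0              # 超时：从头播放
        self.lost_at = None
        return start

session = PlaybackSession("vid-3")
session.progress = 30.0              # 已播放到 00':30"
session.on_target_lost(now=100.0)
print(session.on_target_found(now=103.0))  # 30.0，从断点继续播放
```

“超时即从头播放”是本示例为完整性补充的假设策略，本公开并未限定超出预设时间段后的处理方式。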
在上述各实施方式的基础上,本公开提供的视频播放方法中对于实景拍摄图像进行识别获得的目标图像的数量可为多个。也就是说,视频播放装置在对于实景拍摄图像进行图像识别时,可识别得到图像中的多个目标图像,并对于该多个目标图像的目标视频同时进行播放。
图7为本公开实施例提供的另一种视频播放方法的流程示意图,如图7所示的,该方法包括:
步骤201、获得实景拍摄图像;
步骤202、确定每个目标图像在所述实景拍摄图像中的显示位置;
步骤203、按照与所述多个目标图像相关联的目标视频的获取顺序,将与所述多个目标图像相关联的目标视频存储在预设的视频播放列表中;
步骤204、根据所述视频播放列表中各目标视频的存储顺序,从所述视频播放列表中确定至少一个目标视频作为待播放的目标视频;
步骤205、在与所述待播放的目标视频相关联的目标图像的显示位置处,播放所述待播放的目标视频。
需要说明的是,本实施例提供的处理方法的执行主体为前述的视频播放装置,在本公开的一些实施例中,其具体指代可安装或集成在终端上的客户端或展示端。用户可通过终端,对视频播放装置进行操作,以使视频播放装置可对用户触发的操作进行响应。
与前述实施例不同的是，在本实施例中，实景拍摄图像中将包括有多个目标图像，视频播放装置将分别确定各目标图像在实景拍摄图像中的显示位置，并将每个目标图像相关联的目标视频在其相应的显示位置上进行播放。其中，针对每一目标视频在其相应的显示位置上进行播放的播放方式可参见前述方式，本实施方式对此不再进行赘述。
在上述过程中，目标视频的数量可能较多，若同时播放较多数量的目标视频可能会造成终端的卡顿。因此，为了能够得到更好的视频播放效果，也为了能给用户带来更好的视觉体验，视频播放装置中还可设置一视频播放列表，以用于设置同时在实景拍摄图像中进行播放的目标视频的数量。
具体来说,图8为本公开实施例提供的一种视频播放方法的第四界面示意图。如图8所示的,首先,视频播放装置获得一实景拍摄图像,然后对实景拍摄图像中的多个目标图像进行图像识别,并利用多个图像框将各个目标图像在实景拍摄图像中的显示位置依次进行框出,得到若干图像框(播放区域)。如图8所示,目标图像A对应的播放区域为801,目标图像B对应的播放区域为802,目标图像C对应的播放区域为803。
然后,视频播放装置会分别将各目标图像发送至服务器,以供服务器根据预设的关联关系确定每个目标图像对应的目标视频,并将各目标视频返回至视频播放装置中。
当视频播放装置接收到这些目标视频后,会将这些目标视频存储在视频播放列表中。在播放目标视频时,视频播放装置将按照各目标视频存入视频播放列表的存储顺序,从中选择一个或多个目标视频以作为待播放的目标视频,并将每个目标视频在与其相关联的目标图像的播放区域中进行播放,例如,与目标图像A相关联的目标视频1在对应的播放区域801中进行播放,与目标图像B相关联的目标视频2在对应的播放区域802中进行播放,与目标图像C相关联的目标视频3在对应的播放区域803中进行播放。
需要说明的是,视频播放列表中存储有从获得实景拍摄图像的那一时刻起从服务器获得的全部目标视频,视频播放列表还可确定列表的清理周期,并按照清理周期对列表内存储的目标视频进行清理。
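按清理周期对视频播放列表进行清理的做法可以用如下Python片段示意（函数名、参数与“以获取时间为清理依据”的具体策略均为本示例自拟）：

```python
# 假设示例：按清理周期对视频播放列表内存储的目标视频进行清理，
# 仅保留清理周期（keep_period 秒）内获取的条目。

def purge_playlist(playlist, now, keep_period):
    """playlist: (获取时间, 视频ID) 的列表；返回清理后的列表。"""
    return [(t, vid) for t, vid in playlist if now - t <= keep_period]

playlist = [(100.0, "vid-1"), (160.0, "vid-2"), (175.0, "vid-3")]
print(purge_playlist(playlist, now=180.0, keep_period=30.0))
# [(160.0, 'vid-2'), (175.0, 'vid-3')]
```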
此外,视频播放列表中还存储有目标视频的获取时间,在各目标视频存储至视频播放列表时,按照各目标视频的获取时间的倒序,将所述各目标视频存储在所述视频播放列表中。换句话说,越是靠近当前时刻获取的目标视频,越是存储在视频播放列表的顶部,并优先被播放。
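“按获取时间的倒序存储、越新越优先播放”的列表维护方式可示意如下（数据结构与函数名为本示例自拟）：

```python
# 假设示例：视频播放列表按各目标视频获取时间的倒序存储，
# 越是靠近当前时刻获取的目标视频越靠近列表顶部、越优先被播放。

playlist = []  # 每项为 (获取时间, 视频ID)

def add_to_playlist(fetch_time, video_id):
    playlist.append((fetch_time, video_id))
    playlist.sort(key=lambda item: item[0], reverse=True)  # 获取时间倒序

add_to_playlist(10.0, "vid-1")
add_to_playlist(12.5, "vid-2")
add_to_playlist(11.0, "vid-3")
print([vid for _, vid in playlist])  # ['vid-2', 'vid-3', 'vid-1']
```

实际实现中也可在插入时直接定位插入点（如二分插入）以避免整表排序，此处以排序示意语义。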
在其他可选实施方式中，如果多个目标视频中都包括音频，则在播放多个目标视频画面的同时播放全部目标视频的音频，将使得用户的视听体验和信息获取体验较差。为了避免该问题，在本实施方式中，可采用音频、视频分别播放的方式，例如，在同时播放多个待播放的目标视频的视频画面的同时，仅播放其中唯一一个正在播放的目标视频的音频数据。
具体来说,在播放目标视频时,可先对各待播放的目标视频进行解码处理,获得各待播放的目标视频的音频数据和视频数据;播放各待播放的目标视频的视频数据;以及播放最近获取并存储在所述视频播放列表中的待播放的目标视频的音频数据。
其中,对于当前播放的音频数据对应的目标视频来说,其音频数据和视频数据将采用音视频同步技术进行同步处理,以保证播放时的音视频同步。通过这样的方式,能够使得用户在同一时刻内,仅接收到来自同一目标视频的声音,而观看到多个目标视频的画面,在能浏览到更多视频信息的情况下,保证视频信息的视听体验不受影响。
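上述“解码后同时播放全部视频画面、仅播放最新目标视频音频”的处理可示意如下（`decode`、`play_all` 等接口均为本示例自拟，实际解码通常由系统播放器或FFmpeg等组件完成）：

```python
# 假设示例：对各待播放目标视频进行“解码”得到音频数据与视频数据，
# 播放全部视频画面，但仅播放最近存入播放列表的目标视频的音频。

def decode(video_id):
    # 示意性解码：拆分出音频数据与视频数据
    return {"audio": video_id + "-audio", "video": video_id + "-frames"}

def play_all(playlist):
    """playlist 按获取时间倒序排列，首项即最近获取的目标视频。"""
    decoded = [decode(vid) for _, vid in playlist]
    video_tracks = [d["video"] for d in decoded]   # 全部视频画面同时播放
    audio_track = decoded[0]["audio"]              # 仅播放最新视频的音频
    return video_tracks, audio_track

playlist = [(12.5, "vid-2"), (11.0, "vid-3"), (10.0, "vid-1")]
videos, audio = play_all(playlist)
print(audio)  # vid-2-audio
```

被选中播放音频的那一路，其音频数据与视频数据仍需做音视频同步处理，本示例不再展开。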
本公开实施例提供的视频播放方法包括:获得实景拍摄图像;检测所述实景拍摄图像中的目标图像;确定所述目标图像在所述实景拍摄图像中的显示位置;获取与所述目标图像相关联的目标视频,并在所述目标图像在所述实景拍摄图像中的显示位置处播放所述目标视频。本实施例提供的视频播放方法,能够降低在利用增强现实显示技术对信息进行呈现时的制备周期以及成本,另一方面,也为用户提供了更多的呈现渠道以进行视频信息的播放,使用户得到更好的交互体验和视觉体验。
对应于上文实施例的视频播放方法,图9为本公开实施例提供的视频播放装置的结构框图。为了便于说明,仅示出了与本公开实施例相关的部分。参照图9,所述视频播放装置包括:获取模块10、处理模块20和播放模块30。
获取模块10，用于获得实景拍摄图像。在一个实施例中，获取模块可以包括视频播放装置自身配置的图像获取装置，通过实时捕获实景图像获取实景拍摄图像。此外，获取模块还可以从服务器或者视频播放装置自身获取其存储或者实时拍摄的实景拍摄图像，本公开不对此进行限制。
处理模块20,用于检测所述实景拍摄图像中的目标图像,确定所述目标图像在所述实景拍摄图像中的显示位置。
播放模块30,用于获取与所述目标图像相关联的目标视频,并在所述目标图像在实景拍摄图像中的显示位置处播放所述目标视频。
可选实施例中，该视频播放装置还包括：第一交互模块；该第一交互模块用于响应用户对播放的目标视频的触发操作，确定被触发的与目标视频相关联的信息；以供所述播放模块30对所述信息进行展示。
可选实施例中,该视频播放装置还包括:第二交互模块;
所述第二交互模块用于接收用户上传的待关联的至少一组目标图像和目标视频,将所述待关联的至少一组目标图像和目标视频上传至服务器,以由所述服务器对所述待关联的目标图像和目标视频进行关联存储。
可选实施例中,播放模块30在获取与所述目标图像相关联的目标视频时,具体用于:将所述目标图像发送至服务器,接收服务器返回的与所述目标图像相关联的目标视频。
可选实施例中,实景拍摄图像中包括多个目标图像;所述播放模块30在获得与所述目标图像相关联的目标视频,并在所述目标图像在所述实景拍摄图像中的显示位置处播放所述目标视频时,具体用于:按照与所述多个目标图像相关联的目标视频的获取顺序,将与所述多个目标图像相关联的目标视频存储在预设的视频播放列表中;根据所述视频播放列表中各目标视频的存储顺序,从所述视频播放列表中确定至少一个目标视频作为待播放的目标视频;在与所述待播放的目标视频相关联的目标图像的显示位置处,播放所述待播放的目标视频。
可选实施例中,视频播放列表中还存储有目标视频的获取时间;播放模块30在按照与所述多个目标图像相关联的目标视频的获取顺序,将与所述多个目标图像相关联的目标视频存储在预设的视频播放列表中时,具体用于:按照各目标视频的获取时间的倒序,将所述各目标视频存储在所述视频播放列表中。
可选实施例中,所述播放模块30在播放所述待播放的目标视频时,具体用于:对各待播放的目标视频进行解码处理,获得各待播放的目标视频的音频数据和视频数据;播放各待播放的目标视频的视频数据;以及,播放最近获取并存储在所述视频播放列表中的待播放的目标视频的音频数据。
可选实施例中,所述播放模块30在所述目标图像在所述实景拍摄图像中的显示位置处播放所述目标视频时,具体用于:根据所述显示位置,确定所述实景拍摄图像中相应的播放区域;根据播放区域对所述目标视频进行视频预处理,并在所述播放区域中播放所述目标视频。
可选实施例中,所述播放模块30在根据播放区域对所述目标视频进行视频预处理,并在所述播放区域中播放所述目标视频时,具体用于:根据所述播放区域在所述实景拍摄图像中的空间特征,对所述目标视频的视频数据进行三维空间渲染处理,以在所述播放区域中播放所述目标视频。
可选实施例中,所述显示位置包括:图像边缘位置,或/和,图像顶点位置。
可选实施例中,所述播放模块30在播放所述目标视频时,具体用于:获取所述目标视频的播放进度;根据所述播放进度播放所述目标视频。
本公开实施例提供的视频播放装置用于执行以下方法：获得实景拍摄图像；检测所述实景拍摄图像中的目标图像；确定所述目标图像在所述实景拍摄图像中的显示位置；获取与所述目标图像相关联的目标视频，并在所述目标图像在所述实景拍摄图像中的显示位置处播放所述目标视频。本实施例提供的视频播放装置，能够降低在利用增强现实显示技术对信息进行呈现时的制备周期以及成本，另一方面，也为用户提供了更多的呈现渠道以进行视频信息的呈现，使用户得到更好的交互体验和视觉体验。
本实施例提供的电子设备,可用于执行上述方法实施例的技术方案,其实现原理和技术效果类似,本实施例此处不再赘述。
参考图10，其示出了适于用来实现本公开实施例的电子设备900的结构示意图，该电子设备900可以为终端设备或服务器。其中，终端设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、个人数字助理（Personal Digital Assistant，简称PDA）、平板电脑（Portable Android Device，简称PAD）、便携式多媒体播放器（Portable Media Player，简称PMP）、车载终端（例如车载导航终端）、可穿戴电子设备等等的移动终端以及诸如数字TV、台式计算机、智能家居设备等等的固定终端。图10示出的电子设备仅仅是一个实施例，不应对本公开实施例的功能和使用范围带来任何限制。
如图10所示，电子设备900可以包括用于执行视频播放方法的处理器901（例如中央处理器、图形处理器等），其可以根据存储在只读存储器（Read Only Memory，简称ROM）902中的程序或者从存储装置908加载到随机访问存储器（Random Access Memory，简称RAM）903中的程序而执行各种适当的动作和处理。在RAM 903中，还存储有电子设备900操作所需的各种程序和数据。处理器901、ROM 902以及RAM 903通过总线904彼此相连。输入/输出（I/O）接口905也连接至总线904。
通常,以下装置可以连接至I/O接口905:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置906;包括例如液晶屏幕(Liquid Crystal Display,简称LCD)、扬声器、振动器等的输出装置907;包括例如磁带、硬盘等的存储装置908;以及通信装置909。通信装置909可以允许电子设备900与其他设备进行无线或有线通信以交换数据。虽然图10示出了具有各种装置的电子设备900,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
特别地，根据本公开的实施例，上文参考流程图描述的过程可以被实现为计算机软件程序。例如，本公开的实施例包括一种计算机程序产品，其包括承载在计算机可读介质上的计算机程序，该计算机程序包含用于执行根据本公开实施例所述的各流程图所示的方法的程序代码。在这样的实施例中，该计算机程序可以通过通信装置909从网络上被下载和安装，或者从存储装置908被安装，或者从ROM 902被安装。在该计算机程序被处理器901执行时，执行本公开实施例的方法中限定的上述功能。本公开的实施例还包括一种计算机程序，所述计算机程序被处理器执行时，用于执行本公开实施例的方法中限定的上述功能。
需要说明的是,本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(Electrically Programmable Read Only Memory,EPROM)、闪存、光纤、便携式紧凑磁盘只读存储器(Compact Disc-Read Only Memory,CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、 装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、RF(射频)等等,或者上述的任意合适的组合。
上述计算机可读介质可以是上述电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该电子设备执行时,使得该电子设备执行上述实施例所示的方法。
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码，上述程序设计语言包括面向对象的程序设计语言—诸如Java、Smalltalk、C++，还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中，远程计算机可以通过任意种类的网络——包括局域网（Local Area Network，简称LAN）或广域网（Wide Area Network，简称WAN）—连接到用户计算机，或者，可以连接到外部计算机（例如利用因特网服务提供商来通过因特网连接）。
附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现。其中,单元的名称在某种情况下并不构成对该单元本身的限定,例如,第一获取单元还可以被描述为“获取至少两个网际协议地址的单元”。
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件逻辑部件包括:现场可编程门阵列(Field Programmable Gate Array,FPGA)、专用集成电路(Application Specific Integrated Circuit,ASIC)、专用标准产品(Application Specific Standard Product,ASSP)、片上系统(System On Chip,SOC)、复杂可编程逻辑设备(Complex Programmable Logic Device,CPLD)等等。
在本公开的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的更具体实施例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM)、快闪存储器、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。以下是本公开的一些实施例。
第一方面,根据本公开的一个或多个实施例,一种视频播放方法,包括:
获得实景拍摄图像;
检测所述实景拍摄图像中的目标图像;
确定所述目标图像在所述实景拍摄图像中的显示位置;
获取与所述目标图像相关联的目标视频,并在所述目标图像在所述实景拍摄图像中的显示位置处播放所述目标视频。
可选实施例中,该方法还包括:
响应用户对播放的目标视频的触发操作,确定被触发的与所述目标视频相关联的信息;
对所述信息进行展示。
可选实施例中,该方法还包括:
接收用户上传的待关联的至少一组目标图像和目标视频;
将所述待关联的至少一组目标图像和目标视频上传至服务器,以由所述服务器对所述待关联的目标图像和目标视频进行关联存储。
可选实施例中,所述获取与所述目标图像相关联的目标视频,包括:
将所述目标图像发送至服务器;
接收服务器返回的与所述目标图像相关联的目标视频。
可选实施例中,所述实景拍摄图像中包括多个目标图像;
所述获得与所述目标图像相关联的目标视频,并在所述目标图像在所述实景拍摄图像中的显示位置处播放所述目标视频,包括:
按照与所述多个目标图像相关联的目标视频的获取顺序,将与所述多个目标图像相关联的目标视频存储在预设的视频播放列表中;
根据所述视频播放列表中各目标视频的存储顺序,从所述视频播放列表中确定至少一个目标视频作为待播放的目标视频;
在与所述待播放的目标视频相关联的目标图像的显示位置处,播放所述待播放的目标视频。
可选实施例中,所述视频播放列表中还存储有目标视频的获取时间;
所述按照与所述多个目标图像相关联的目标视频的获取顺序,将与所述多个目标图像相关联的目标视频存储在预设的视频播放列表中,包括:
按照各目标视频的获取时间的倒序,将所述各目标视频存储在所述视频播放列表中。
可选实施例中,所述播放所述待播放的目标视频,包括:
对各待播放的目标视频进行解码处理,获得各待播放的目标视频的音频数据和视频数据;
播放各待播放的目标视频的视频数据;以及
播放最近获取并存储在所述视频播放列表中的待播放的目标视频的音频数据。
可选实施例中,所述在所述目标图像在所述实景拍摄图像中的显示位置处播放所述目标视频,包括:
根据所述显示位置,确定所述实景拍摄图像中相应的播放区域;
根据播放区域对所述目标视频进行视频预处理,并在所述播放区域中播放所述目标视频。
可选实施例中,所述根据播放区域对所述目标视频进行视频预处理,并在所述播放区域中播放所述目标视频,包括:
根据所述播放区域在所述实景拍摄图像中的空间特征,对所述目标视频的视频数据进行三维空间渲染处理,以在所述播放区域中播放所述目标视频。
可选实施例中,所述显示位置包括:图像边缘位置,或/和,图像顶点位置。
可选实施例中,所述播放所述目标视频,包括:
获取所述目标视频的播放进度;
根据所述播放进度播放所述目标视频。
第二方面,根据本公开的一个或多个实施例,一种视频播放装置,包括:获取模块、处理模块以及播放模块;
获取模块，用于获取实景拍摄图像；
处理模块,用于检测所述实景拍摄图像中的目标图像,确定所述目标图像在所述实景拍摄图像中的显示位置;
播放模块,用于获取与所述目标图像相关联的目标视频,并在所述目标图像在实景拍摄图像中的显示位置处播放所述目标视频。
可选实施例中,该视频播放装置还包括:第一交互模块;
该第一交互模块用于响应用户对播放的目标视频的触发操作,确定被触发的与所述目标视频相关联的信息,以供所述播放模块对所述信息进行展示。
可选实施例中,该视频播放装置还包括:第二交互模块;
所述第二交互模块用于接收用户上传的待关联的至少一组目标图像和目标视频,将所述待关联的至少一组目标图像和目标视频上传至服务器,以由所述服务器对所述待关联的目标图像和目标视频进行关联存储。
可选实施例中,播放模块在获取与所述目标图像相关联的目标视频时,具体用于:将所述目标图像发送至服务器,接收服务器返回的与所述目标图像相关联的目标视频。
可选实施例中,实景拍摄图像中包括多个目标图像;所述播放模块在获得与所述目标图像相关联的目标视频,并在所述目标图像在所述实景拍摄图像中的显示位置处播放所述目标视频时,具体用于:按照与所述多个目标图像相关联的目标视频的获取顺序,将与所述多个目标图像相关联的目标视频存储在预设的视频播放列表中;根据所述视频播放列表中各目标视频的存储顺序,从所述视频播放列表中确定至少一个目标视频作为待播放的目标视频;在与所述待播放的目标视频相关联的目标图像的显示位置处,播放所述待播放的目标视频。
可选实施例中,视频播放列表中还存储有目标视频的获取时间;播放模块按照与所述多个目标图像相关联的目标视频的获取顺序,将与所述多个目标图像相关联的目标视频存储在预设的视频播放列表中时,具体用于:按照各目标视频的获取时间的倒序,将所述各目标视频存储在所述视频播放列表中。
可选实施例中,所述播放模块在播放所述待播放的目标视频时,具体用于:对各待播放的目标视频进行解码处理,获得各待播放的目标视频的音频数据和视频数据;播放各待播放的目标视频的视频数据;以及,播放最近获取并存储在所述视频播放列表中的待播放的目标视频的音频数据。
可选实施例中,所述播放模块在所述目标图像在所述实景拍摄图像中的显示位置处播放所述目标视频时,具体用于:根据所述显示位置,确定所述实景拍摄图像中相应的播放区域;根据播放区域对所述目标视频进行视频预处理,并在所述播放区域中播放所述目标视频。
可选实施例中,所述播放模块在根据播放区域对所述目标视频进行视频预处理,并在所述播放区域中播放所述目标视频时,具体用于:根据所述播放区域在所述实景拍摄图像中的空间特征,对所述目标视频的视频数据进行三维空间渲染处理,以在所述播放区域中播放所述目标视频。
可选实施例中,所述显示位置包括:图像边缘位置,或/和,图像顶点位置。
可选实施例中,所述播放模块在播放所述目标视频时,具体用于:获取所述目标视频的播放进度;根据所述播放进度播放所述目标视频。
第三方面,根据本公开的一个或多个实施例,一种电子设备,包括:至少一个处理器和存储器;
所述存储器存储计算机执行指令;
所述至少一个处理器执行所述存储器存储的计算机执行指令,使得所述至少一个处理器执行如前任一项所述的视频播放方法。
第四方面,根据本公开的一个或多个实施例,一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机执行指令,当处理器执行所述计算机执行指令时,实现如前任一项所述的视频播放方法。
第五方面,根据本公开的一个或多个实施例,一种计算机程序产品,包括计算机程序,所述计算机程序被处理器执行时,实现如前任一项所述的视频播放方法。
第六方面,根据本公开的一个或多个实施例,一种计算机程序,所述计算机程序被处理器执行时,用于实现如前任一项所述的视频播放方法。
以上描述仅为本公开的较佳实施例以及对所运用技术原理的说明。本领域技术人员应当理解,本公开中所涉及的公开范围,并不限于上述技术特征的特定组合而成的技术方案,同时也应涵盖在不脱离上述公开构思的情况下,由上述技术特征或其等同特征进行任意组合而形成的其它技术方案。例如上述特征与本公开中公开的(但不限于)具有类似功能的技术特征进行互相替换而形成的技术方案。
此外,虽然采用特定次序描绘了各操作,但是这不应当理解为要求这些操作以所示出的特定次序或以顺序次序执行来执行。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了若干具体实现细节,但是这些不应当被解释为对本公开的范围的限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实施例中。相反地,在单个实施例的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。
尽管已经采用特定于结构特征和/或方法逻辑动作的语言描述了本主题,但是应当理解所附权利要求书中所限定的主题未必局限于上面描述的特定特征或动作。相反,上面所描述的特定特征和动作仅仅是实现权利要求书的实施例形式。

Claims (26)

  1. 一种视频播放方法,其特征在于,包括:
    获得实景拍摄图像;
    检测所述实景拍摄图像中的目标图像;
    确定所述目标图像在所述实景拍摄图像中的显示位置;
    获取与所述目标图像相关联的目标视频,并在所述目标图像在所述实景拍摄图像中的显示位置处播放所述目标视频。
  2. 根据权利要求1所述的视频播放方法,其特征在于,还包括:
    响应用户对播放的目标视频的触发操作,确定被触发的与所述目标视频相关联的信息;
    对所述信息进行展示。
  3. 根据权利要求1或2所述的视频播放方法,其特征在于,还包括:
    接收用户上传的待关联的至少一组目标图像和目标视频;
    将所述待关联的至少一组目标图像和目标视频上传至服务器,以由所述服务器对所述待关联的目标图像和目标视频进行关联存储。
  4. 根据权利要求3所述的视频播放方法,其特征在于,所述获取与所述目标图像相关联的目标视频,包括:
    将所述目标图像发送至所述服务器;
    接收所述服务器返回的与所述目标图像相关联的目标视频。
  5. 根据权利要求1-4任一项所述的视频播放方法,其特征在于,所述实景拍摄图像中包括多个目标图像;
    所述获得与所述目标图像相关联的目标视频,并在所述目标图像在所述实景拍摄图像中的显示位置处播放所述目标视频,包括:
    按照与所述多个目标图像相关联的目标视频的获取顺序,将与所述多个目标图像相关联的所述目标视频存储在预设的视频播放列表中;
    根据所述视频播放列表中各目标视频的存储顺序,从所述视频播放列表中确定至少一个目标视频作为待播放的目标视频;
    在与所述待播放的目标视频相关联的目标图像的显示位置处,播放所述待播放的目标视频。
  6. 根据权利要求5所述的视频播放方法,其特征在于,所述视频播放列表中还存储有目标视频的获取时间;
    所述按照与所述多个目标图像相关联的目标视频的获取顺序,将与所述多个目标图像相关联的目标视频存储在预设的视频播放列表中,包括:
    按照各目标视频的获取时间的倒序,将所述各目标视频存储在所述视频播放列表中。
  7. 根据权利要求5或6所述的视频播放方法,其特征在于,所述播放所述待播放的目标视频,包括:
    对各待播放的目标视频进行解码处理,获得所述各待播放的目标视频的音频数据和视频数据;
    播放所述各待播放的目标视频的视频数据;以及
    播放最近获取并存储在所述视频播放列表中的待播放的目标视频的音频数据。
  8. 根据权利要求1-7任一项所述的视频播放方法,其特征在于,所述在所述目标图像在所述实景拍摄图像中的显示位置处播放所述目标视频,包括:
    根据所述显示位置,确定所述实景拍摄图像中相应的播放区域;
    根据所述播放区域对所述目标视频进行视频预处理,并在所述播放区域中播放所述目标视频。
  9. 根据权利要求8所述的视频播放方法,其特征在于,所述根据播放区域对所述目标视频进行视频预处理,并在所述播放区域中播放所述目标视频,包括:
    根据所述播放区域在所述实景拍摄图像中的空间特征,对所述目标视频的视频数据进行三维空间渲染处理,以在所述播放区域中播放所述目标视频。
  10. 根据权利要求8或9所述的视频播放方法,其特征在于,所述显示位置包括:图像边缘位置,或/和,图像顶点位置。
  11. 根据权利要求1-10任一项所述的视频播放方法,其特征在于,所述播放所述目标视频,包括:
    获取所述目标视频的播放进度;
    根据所述播放进度播放所述目标视频。
  12. 一种视频播放装置,其特征在于,包括:
    获取模块，用于获取实景拍摄图像；
    处理模块,用于检测所述实景拍摄图像中的目标图像,确定所述目标图像在所述实景拍摄图像中的显示位置;
    播放模块,用于获取与所述目标图像相关联的目标视频,并在所述目标图像在实景拍摄图像中的显示位置处播放所述目标视频。
  13. 根据权利要求12所述的视频播放装置,其特征在于,所述视频播放装置还包括第一交互模块,所述第一交互模块用于响应用户对播放的目标视频的触发操作,确定被触发的与所述目标视频相关联的信息;所述播放模块还用于对所述信息进行展示。
  14. 根据权利要求12或13所述的视频播放装置,其特征在于,所述视频播放装置还包括第二交互模块,所述第二交互模块用于接收用户上传的待关联的至少一组目标图像和目标视频;将所述待关联的至少一组目标图像和目标视频上传至服务器,以由所述服务器对所述待关联的目标图像和目标视频进行关联存储。
  15. 根据权利要求14所述的视频播放装置，其特征在于，所述播放模块在获取与所述目标图像相关联的所述目标视频时，具体用于：将所述目标图像发送至所述服务器，接收所述服务器返回的与所述目标图像相关联的目标视频。
  16. 根据权利要求12-15任一项所述的视频播放装置,其特征在于,所述实景拍摄图像中包括多个目标图像;所述播放模块在获得与所述目标图像相关联的所述目标视频,并在所述目标图像在所述实景拍摄图像中的显示位置处播放所述目标视频时,具体用于:
    按照与所述多个目标图像相关联的目标视频的获取顺序,将与所述多个目标图像相关联的所述目标视频存储在预设的视频播放列表中;
    根据所述视频播放列表中各目标视频的存储顺序,从所述视频播放列表中确定至少一个目标视频作为待播放的目标视频;
    在与所述待播放的目标视频相关联的目标图像的显示位置处,播放所述待播放的目标视频。
  17. 根据权利要求16所述的视频播放装置,其特征在于,所述视频播放列表中还存储有目标视频的获取时间;所述播放模块在按照与所述多个目标图像相关联的目标视频的获取顺序,将与所述多个目标图像相关联的目标视频存储在预设的视频播放列表中时,具体用于:
    按照各目标视频的获取时间的倒序,将所述各目标视频存储在所述视频播放列表中。
  18. 根据权利要求16或17所述的视频播放装置,其特征在于,所述播放模块在播放所述待播放的目标视频时,具体用于:
    对各待播放的目标视频进行解码处理,获得所述各待播放的目标视频的音频数据和视频数据;
    播放所述各待播放的目标视频的视频数据;以及,
    播放最近获取并存储在所述视频播放列表中的待播放的目标视频的音频数据。
  19. 根据权利要求12-18任一项所述的视频播放装置,其特征在于,所述播放模块在所述目标图像在所述实景拍摄图像中的显示位置处播放所述目标视频时,具体用于:
    根据所述显示位置,确定所述实景拍摄图像中相应的播放区域;
    根据所述播放区域对所述目标视频进行视频预处理,并在所述播放区域中播放所述目标视频。
  20. 根据权利要求19所述的视频播放装置,其特征在于,所述播放模块在根据所述播放区域对所述目标视频进行视频预处理,并在所述播放区域中播放所述目标视频时,具体用于:
    根据所述播放区域在所述实景拍摄图像中的空间特征,对所述目标视频的视频数据进行三维空间渲染处理,以在所述播放区域中播放所述目标视频。
  21. 根据权利要求19或20所述的视频播放装置,其特征在于,所述显示位置包括:图像边缘位置,或/和,图像顶点位置。
  22. 根据权利要求12-21任一项所述的视频播放装置,其特征在于,所述播放模块在播放所述目标视频时,具体用于:
    获取所述目标视频的播放进度;
    根据所述播放进度播放所述目标视频。
  23. 一种电子设备,其中,包括:
    至少一个处理器;以及
    存储器;
    所述存储器存储计算机执行指令;
    所述至少一个处理器执行所述存储器存储的计算机执行指令,使得所述至少一个处理器执行如权利要求1-11任一项所述的视频播放方法。
  24. 一种计算机可读存储介质,其中,所述计算机可读存储介质中存储有计算机执行指令,当处理器执行所述计算机执行指令时,实现如权利要求1-11任一项所述的视频播放方法。
  25. 一种计算机程序产品,包括计算机程序,所述计算机程序被处理器执行时,实现如权利要求1-11任一项所述的视频播放方法。
  26. 一种计算机程序,所述计算机程序被处理器执行时,实现如权利要求1-11任一项所述的视频播放方法。
PCT/CN2021/115208 2020-10-28 2021-08-30 视频播放方法、装置、电子设备及存储介质 WO2022088908A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/250,505 US20240062479A1 (en) 2020-10-28 2021-08-30 Video playing method and apparatus, electronic device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011173352.9A CN112288877A (zh) 2020-10-28 2020-10-28 视频播放方法、装置、电子设备及存储介质
CN202011173352.9 2020-10-28

Publications (1)

Publication Number Publication Date
WO2022088908A1 true WO2022088908A1 (zh) 2022-05-05

Family

ID=74372374

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/115208 WO2022088908A1 (zh) 2020-10-28 2021-08-30 视频播放方法、装置、电子设备及存储介质

Country Status (3)

Country Link
US (1) US20240062479A1 (zh)
CN (1) CN112288877A (zh)
WO (1) WO2022088908A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112288877A (zh) * 2020-10-28 2021-01-29 北京字节跳动网络技术有限公司 视频播放方法、装置、电子设备及存储介质
CN114615426A (zh) * 2022-02-17 2022-06-10 维沃移动通信有限公司 拍摄方法、装置、电子设备和可读存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018007779A1 (en) * 2016-07-08 2018-01-11 Sony Interactive Entertainment Inc. Augmented reality system and method
CN110809187A (zh) * 2019-10-31 2020-02-18 Oppo广东移动通信有限公司 视频选择方法、视频选择装置、存储介质与电子设备
CN111339365A (zh) * 2018-12-19 2020-06-26 北京奇虎科技有限公司 一种视频展示方法和装置
CN111337015A (zh) * 2020-02-28 2020-06-26 重庆特斯联智慧科技股份有限公司 一种基于商圈聚合大数据的实景导航方法与系统
CN111833460A (zh) * 2020-07-10 2020-10-27 北京字节跳动网络技术有限公司 增强现实的图像处理方法、装置、电子设备及存储介质
CN112288877A (zh) * 2020-10-28 2021-01-29 北京字节跳动网络技术有限公司 视频播放方法、装置、电子设备及存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013050883A (ja) * 2011-08-31 2013-03-14 Nintendo Co Ltd 情報処理プログラム、情報処理システム、情報処理装置および情報処理方法
CN103346955B (zh) * 2013-06-18 2016-08-24 腾讯科技(深圳)有限公司 一种图像处理方法、装置及终端
CN105989628A (zh) * 2015-02-06 2016-10-05 北京网梯科技发展有限公司 通过移动终端获取信息的方法及系统设备
CN109168034B (zh) * 2018-08-28 2020-04-28 百度在线网络技术(北京)有限公司 商品信息显示方法、装置、电子设备和可读存储介质
CN111273775A (zh) * 2020-01-16 2020-06-12 Oppo广东移动通信有限公司 增强现实眼镜、基于增强现实眼镜的ktv实现方法与介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018007779A1 (en) * 2016-07-08 2018-01-11 Sony Interactive Entertainment Inc. Augmented reality system and method
CN111339365A (zh) * 2018-12-19 2020-06-26 北京奇虎科技有限公司 一种视频展示方法和装置
CN110809187A (zh) * 2019-10-31 2020-02-18 Oppo广东移动通信有限公司 视频选择方法、视频选择装置、存储介质与电子设备
CN111337015A (zh) * 2020-02-28 2020-06-26 重庆特斯联智慧科技股份有限公司 一种基于商圈聚合大数据的实景导航方法与系统
CN111833460A (zh) * 2020-07-10 2020-10-27 北京字节跳动网络技术有限公司 增强现实的图像处理方法、装置、电子设备及存储介质
CN112288877A (zh) * 2020-10-28 2021-01-29 北京字节跳动网络技术有限公司 视频播放方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
US20240062479A1 (en) 2024-02-22
CN112288877A (zh) 2021-01-29


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21884640

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18250505

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.08.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21884640

Country of ref document: EP

Kind code of ref document: A1