WO2022088908A1 - Video playback method, apparatus, electronic device and storage medium - Google Patents
- Publication number
- WO2022088908A1 (PCT/CN2021/115208)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords: video, target, image, target video, played
Classifications
- G06T19/006—Mixed reality (G06T—Image data processing or generation, in general; G06T19/00—Manipulating 3D models or images for computer graphics)
- H04N21/2187—Live feed (H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD])
- H04N21/4341—Demultiplexing of audio and video streams
- H04N21/47217—End-user interface for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
Definitions
- the embodiments of the present disclosure relate to the field of computers, and in particular, to a video playback method, apparatus, electronic device, storage medium, computer program product, and computer program.
- Augmented Reality (AR) technology is a technology that skillfully integrates virtual information with the real world.
- Presenting information by means of augmented reality has therefore become a practical information presentation method.
- By presenting a 3D modeling image of a virtual character or virtual scene while a user uses a terminal to capture a real scene, a captured real-scene image that includes the 3D modeling image can be obtained.
- embodiments of the present disclosure provide a video playback method, apparatus, electronic device, storage medium, computer program product, and computer program.
- an embodiment of the present disclosure provides a video playback method, including:
- obtaining a live-action captured image, and detecting a target image in the live-action captured image; determining a display position of the target image in the live-action captured image; acquiring a target video associated with the target image, and playing the target video at the display position of the target image in the live-action image.
- an embodiment of the present disclosure provides a video playback device, comprising:
- a processing module, configured to obtain a real-scene shooting image, detect a target image in the real-scene shooting image, and determine a display position of the target image in the real-scene shooting image; and
- a playing module, configured to acquire a target video associated with the target image, and play the target video at the display position of the target image in the real-scene shooting image.
- embodiments of the present disclosure provide an electronic device, including: at least one processor and a memory;
- the memory stores computer-executable instructions
- the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor executes the video playback method described in the first aspect and various possible designs of the first aspect above.
- embodiments of the present disclosure provide a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and when a processor executes the computer-executable instructions, the video playback method described in the first aspect and various possible designs of the first aspect is implemented.
- embodiments of the present disclosure provide a computer program product, including a computer program that, when executed by a processor, implements the video playback method described in the first aspect and various possible designs of the first aspect.
- embodiments of the present disclosure provide a computer program, which, when executed by a processor, is used to implement the video playback method described in the first aspect and various possible designs of the first aspect.
- Embodiments of the present disclosure provide a video playback method, device, electronic device, storage medium, computer program product, and computer program.
- the method includes: obtaining a live-action shot image, and detecting a target image in the live-action shot image; determining the display position of the target image in the live-action shot image; and acquiring a target video associated with the target image, and playing the target video at the display position of the target image in the live-action shot image.
- the video playback method provided in this embodiment reduces, on the one hand, the presentation cost and preparation period when presenting information with augmented reality display technology; on the other hand, it also provides users with more channels for presenting video information, enabling users to get a better interactive experience and visual experience.
- FIG. 1 is a schematic diagram of a network architecture on which an embodiment of the disclosure is based;
- FIG. 2 is a schematic flowchart of a video playback method according to an embodiment of the present disclosure;
- FIG. 3 is a schematic diagram of a first interface of a video playback method according to an embodiment of the present disclosure;
- FIG. 4 is a signaling interaction diagram of a video playback method according to an embodiment of the present disclosure;
- FIG. 5 is a schematic diagram of a second interface of a video playback method provided by an embodiment of the present disclosure;
- FIG. 6 is a schematic diagram of a third interface of a video playback method provided by an embodiment of the present disclosure;
- FIG. 7 is a schematic flowchart of another video playback method provided by an embodiment of the present disclosure;
- FIG. 8 is a schematic diagram of a fourth interface of a video playback method provided by an embodiment of the present disclosure;
- FIG. 9 is a structural block diagram of a video playback device provided by an embodiment of the present disclosure;
- FIG. 10 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present disclosure.
- Augmented Reality (AR) technology is a technology that skillfully integrates virtual information with the real world.
- When displaying augmented reality, the terminal will first shoot the real scene to obtain the current real-scene captured image. Then, augmented reality technology is used to process the real-scene captured image, so as to superimpose the preset virtual information on it, and the superimposed image is presented to the user.
- the virtual information superimposed on the real shot image is generally a pre-established 3D modeling image of a virtual character and a virtual scene.
- the construction process of the 3D modeling image is relatively complicated, and the technical cost and labor cost required for construction are relatively high. This will result in a longer period and higher cost for presenting each piece of information when information is presented using the augmented reality display technology.
- after research, the inventors found that the information presentation method is not limited to 3D modeling images, and can be replaced by a method with lower technical and labor costs.
- some existing video data is placed in the real-life shot image, and displayed by means of augmented reality display. In this way, it is possible to reduce the preparation cycle and cost when using the augmented reality display technology to present information.
- it also provides users with more presentation channels to present video information, so that users can get better interactive and visual experience.
- FIG. 1 is a schematic diagram of a network architecture on which an embodiment of the disclosure is based.
- the network architecture shown in FIG. 1 may specifically include a terminal 1 and a server 2 .
- the terminal 1 may specifically be a user's mobile phone, a smart home device, a tablet computer, a wearable device, or other hardware devices that can be used to capture and display the real scene.
- the video playback device is hardware or software for executing the video playback method of the present disclosure.
- the video playback device can provide the terminal 1 with an augmented reality display page, and the terminal 1 uses its screen or display components to show the user the augmented reality display page provided by the video playback device.
- the server 2 may specifically be a server or server cluster set in the cloud, and the server or server cluster may store video data, image data, etc. related to the video playback method provided by the present disclosure.
- the video playback device may also utilize the network components of the terminal 1 to interact with the server 2, acquire image data and video data stored in the server 2, and perform corresponding processing and display.
- the architecture shown in FIG. 1 is applicable to the field of information presentation, in other words, it can be used for information presentation in various scenarios.
- the video playback method provided by the present disclosure can be applied to game scenarios based on augmented reality display.
- the video playback method provided by the present disclosure can be used to push and present "clue" videos during the "treasure hunt" process.
- the video playback method provided by the present disclosure can be applied to an advertising scenario based on augmented reality display.
- the video playback method provided by the present disclosure can be used to realize the presentation of related videos for these commodities, thereby providing users with more information about the products and improving the user experience.
- the video playback method provided by the present disclosure can also be used to play video information, so as to present more information about the scene to the user and increase the user's interactive experience.
- in each of the above scenarios, the terminal camera needs to be turned on for real-time shooting in order to present the information.
- FIG. 2 is a schematic flowchart of a video playback method provided by an embodiment of the present disclosure.
- a video playback method provided by an embodiment of the present disclosure includes:
- Step 101: Obtain a real-life captured image, and detect a target image in the real-life captured image;
- Step 102: Determine the display position of the target image in the real-life captured image;
- Step 103: Acquire a target video associated with the target image, and play the target video at the display position of the target image in the real-life captured image.
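Steps 101 to 103 can be sketched in miniature. In the sketch below, the image IDs, position tuples, and the `play_on_frame` function are illustrative stand-ins (not from the patent) for the device's recognition component and the server-side association lookup:

```python
def play_on_frame(frame_targets, associations):
    """Steps 101-103 in miniature: `frame_targets` maps the image IDs
    detected in the current live-action frame to their display positions
    (step 101 and 102), and `associations` maps image IDs to video IDs
    (the server-side association of step 103)."""
    playbacks = []
    for image_id, position in frame_targets.items():
        video_id = associations.get(image_id)
        if video_id is not None:
            # Step 103: play the associated video at the display position.
            playbacks.append((video_id, position))
    return playbacks

detected = {"poster-1": (118, 80, 192, 180)}   # hypothetical detection result
links = {"poster-1": "video-3"}                # hypothetical association table
print(play_on_frame(detected, links))  # [('video-3', (118, 80, 192, 180))]
```

A detected image with no registered association is simply skipped, matching the behaviour where only target images with an associated target video trigger playback.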
- the execution body of the processing method provided in this embodiment is the aforementioned video playback device, and in some embodiments of the present disclosure, it specifically refers to a client or a display terminal that can be installed or integrated on a terminal.
- the user can operate the video playback device through the terminal, so that the video playback device can respond to the operation triggered by the user.
- FIG. 3 is a schematic diagram of a first interface of a video playback method provided by an embodiment of the present disclosure.
- the video playback device will obtain a real-life captured image, which may be an image obtained by the terminal calling its own capturing component to shoot the current environment, or may be a real-time image of the scene obtained by the video playback device through other means.
- the video playback device will perform image recognition in the real-life shot image to determine whether there is a target image that can be used for video playback in the live-action shot image.
- the recognition of the target image in the real-life shot image by the video playback device can be realized by the image recognition technology.
- the target image may be a two-dimensional plane image, and the corresponding display position may be the position where the two-dimensional plane graphic is located.
- the target image may also be an image of a three-dimensional object, and the corresponding display position may be a projection position of the three-dimensional object on a two-dimensional plane, and so on.
- the image recognition according to the embodiments of the present disclosure can be implemented based on two-dimensional image recognition technology; that is, image recognition can be performed on images that include a preset plane picture, the projection surface of a three-dimensional object, and a plane picture with a certain degree of deformation.
- the embodiments according to the present disclosure can be implemented by using an object recognition technology.
- the present disclosure does not limit the specific image recognition technology.
- the video playback device can detect the display position of the target image in the real-world captured image.
- the target image may include the image in the image frame shown in FIG. 3; correspondingly, the display position may be framed by means of the image frame 301, and the image frame 301 is used to represent the play area of the target video.
- the display position of the target image may be specifically determined by the image position of the target image, and the image position may include, but is not limited to, the image edge position of the target image, the image vertex position, and the like.
- the video playback device may determine a target video associated with the target image, and play the target video at the display position where the target image is located in the live-action captured image.
- the corresponding play area in the live-action shot image may be determined first according to the display position, and then video preprocessing is performed on the target video according to the play area, and the target video is played in the play area.
- the above-mentioned display position refers to the range occupied by the target image in the real-life shot image, which may include the image edge positions and/or the image vertex positions.
- a corresponding image frame can be delimited in the real-life shot image according to the image edge positions and/or image vertex positions of the target image, and used as the play area (301 as shown in FIG. 3).
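As a hedged illustration of deriving a play area from detected vertex positions, the following sketch (the function name and the coordinate convention are assumptions) computes an axis-aligned bounding box from four corner points:

```python
def play_area_from_vertices(vertices):
    """Given the detected corner points of the target image as (x, y)
    pixel coordinates, return a rectangular play area as
    (x_min, y_min, width, height)."""
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    x_min, y_min = min(xs), min(ys)
    return (x_min, y_min, max(xs) - x_min, max(ys) - y_min)

# Example: a slightly skewed quadrilateral detected in the frame.
corners = [(120, 80), (310, 95), (305, 260), (118, 250)]
print(play_area_from_vertices(corners))  # (118, 80, 192, 180)
```

A production implementation would typically keep the exact quadrilateral rather than a bounding box, but the bounding box suffices to show how edge or vertex positions delimit the play area.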
- three-dimensional space rendering processing may be performed on the video data of the target video according to the spatial characteristics of the play area in the real-life shot image, so that the target video (302 shown in FIG. 3) is played in the play area (301 shown in FIG. 3).
- the spatial feature is used to represent the spatial position attributes of the play area in the three-dimensional space of the live-action image, such as the spatial position coordinates of the vertices of the play area, the spatial position coordinates of its edges, and the spatial angle information between the plane of the play area and the shooting plane of the live-action image.
- three-dimensional space rendering processing can be performed on the video data of the target video, so that the rendered video picture of the target video can fit the play area in three-dimensional space.
- the three-dimensional space rendering processing may include pixel coordinate mapping processing, that is, the two-dimensional pixel coordinates of each pixel in the video image of the target video are mapped to the three-dimensional coordinates of the playback area by spatial mapping.
- the three-dimensional space rendering process may also be implemented in other existing manners, which are not limited in this application.
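The pixel coordinate mapping described above can be illustrated with a deliberately simplified two-dimensional sketch: a normalized video-frame coordinate is mapped onto the quadrilateral play area by bilinear interpolation of its corners. A real implementation would use a full homography or 3D projection; the function name and the corner ordering here are assumptions:

```python
def map_video_pixel(u, v, quad):
    """Map a normalized video-frame coordinate (u, v) in [0, 1]^2 onto a
    quadrilateral play area given by its four corners in the order
    (top-left, top-right, bottom-right, bottom-left)."""
    tl, tr, br, bl = quad
    # Interpolate along the top and bottom edges, then between them.
    top = (tl[0] + u * (tr[0] - tl[0]), tl[1] + u * (tr[1] - tl[1]))
    bot = (bl[0] + u * (br[0] - bl[0]), bl[1] + u * (br[1] - bl[1]))
    return (top[0] + v * (bot[0] - top[0]), top[1] + v * (bot[1] - top[1]))

quad = [(100, 50), (300, 60), (290, 240), (95, 230)]
print(map_video_pixel(0.5, 0.5, quad))  # (196.25, 145.0)
```

The corners of the video frame land exactly on the corners of the play area, which is the fitting property the rendering step requires.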
- the association relationship between the target video and the target image is further established in advance and stored in the aforementioned server.
- FIG. 4 is a signaling interaction diagram of a video playback method provided by an embodiment of the present disclosure.
- the association relationship between the target image and the target video is pre-built in the server, and the pre-construction method of the association relationship can be seen in Figure 4:
- the terminal can obtain images and videos through various channels, and then upload the target images and videos to be associated to the server through an interface (as shown in FIG. 5 ).
- the acquisition method of the pictures and videos may be acquired by the terminal shooting, or may be downloaded by the terminal through the network, or acquired from other terminals by means of near-field transmission, which is not limited in the present disclosure.
- the server will associate and store the two to determine the association relationship between the two.
- a storage list of association relationships can be pre-stored in the server to store the association relationships between images and videos uploaded by different terminals to be associated. The specific storage method will be described in the following embodiments. There is no restriction on this.
- After the terminal starts the camera, it will execute the video playback method in the aforementioned manner: the terminal shoots the real scene to obtain the corresponding real-scene captured image, performs image recognition on the real-scene captured image to obtain the target image, then determines the display position of the target image and sends the target image to the server.
- the server will determine the target video corresponding to the target image according to the pre-established association relationship in the storage list, and send the target video to the terminal. Finally, after the terminal receives the target video, the terminal will play the target video at the display position of the target image in the real-life shot image.
- although FIG. 4 shows the terminal that uploads the target image and target video to be associated as the same terminal that uses this solution to play the video, the two may also be different terminals. That is, this application does not impose any restrictions on whether the terminal that uploads the target image and target video to be associated and the terminal that uses this solution to play the video are the same terminal, and those skilled in the art can decide based on the actual scenario.
- the target video associated with the target image can be played directly, without the modeling processing of 3D virtual modeling; therefore, for any information that needs to be presented to users, the following operations can be used to quickly associate a target image with a target video, so that more users can obtain the information that the information owner wants to present, while the cost and preparation difficulty are greatly reduced.
- FIG. 5 is a schematic diagram of a second interface of a video playback method provided by an embodiment of the present disclosure.
- the video playback device will determine the target image and target video to be associated in response to the uploading operation triggered by the user (information owner).
- the video playback device will upload the target image and the target video to be associated to the server, so that the server can associate and store the target image and the target video to be associated.
- identification IDs can be set for images and videos respectively, and the target image and the target video to be associated are stored by storing the two identification IDs correspondingly. That is to say, a specific implementation of the terminal sending the target image to the server can be that the terminal sends the identification ID of the target image to the server, so that the server can find the target image with the corresponding identification ID from a large number of pre-stored images according to that identification ID, and send the corresponding target video.
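A minimal sketch of the ID-based association storage described above, assuming a simple in-memory mapping from image IDs to video IDs (the class and method names are illustrative, not from the patent):

```python
class AssociationStore:
    """Server-side store that associates a target image's ID with the
    ID of its target video."""

    def __init__(self):
        self._image_to_video = {}  # image ID -> video ID

    def associate(self, image_id, video_id):
        self._image_to_video[image_id] = video_id

    def lookup(self, image_id):
        """Return the video ID associated with a detected target image,
        or None if no association has been registered."""
        return self._image_to_video.get(image_id)

store = AssociationStore()
store.associate("image-2", "video-3")
print(store.lookup("image-2"))  # video-3
```

A real server would back this with a database table, but the lookup contract is the same: the terminal sends an image ID, the server returns the associated video.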
- the target image and target video to be associated can also be encrypted and decrypted with a symmetric key for storage.
- the target image can be processed to obtain a unique key, and this key can be used to encrypt and store the target video associated with the target image; in subsequent use, the target image can be processed again to obtain the same unique key, which is used to find, among the stored videos, the one video it can decrypt, and that video is the target video.
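One way to obtain such a unique key, shown here purely as an assumption about how the "processing" step could work, is to hash the target image's data, so that the same image always re-derives the same symmetric key:

```python
import hashlib

def derive_key(image_bytes):
    """Derive a deterministic 32-byte key from the target image's data.
    The same image always yields the same key, so the key can be
    re-derived at lookup time to find the one video it decrypts."""
    return hashlib.sha256(image_bytes).digest()

key_at_upload = derive_key(b"...target image bytes...")
key_at_lookup = derive_key(b"...target image bytes...")
assert key_at_upload == key_at_lookup  # deterministic re-derivation
```

In practice the hash would be fed into a proper key-derivation function and a standard symmetric cipher would encrypt the video; the point of the sketch is only the deterministic image-to-key step.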
- After completing the above configuration of the target image and target video, the user (information receiver) will be able to view, on the terminal, the target video (video 3) associated with the target image (image 2) through the methods provided in the foregoing embodiments.
- In use, after sending the detected target image to the server, the terminal will receive the target video that is returned by the server and associated with the target image.
- the terminal can send the image data of the target image to the server, or it can analyze and process the image data of the target image to obtain the identification ID and/or image features of the target image and send those to the server.
- the server can determine the corresponding target video according to the pre-established association relationship, and send the corresponding target video to the terminal for display.
- when the target image and target video to be associated are uploaded, the user can simultaneously upload at least one group of target images and target videos to be associated.
- different groups of target images and target videos to be associated can be associated and marked with different identifiers, so that the server can associate and store different groups of target images and target videos to be associated.
- the configuration efficiency of the target image and target video to be associated can be greatly improved.
- video information presentation for multiple target images can be implemented, which can further increase the user's interactive experience.
- the user who uploads the target image and the target video to be associated can be the same or different from the user who uses this solution to play the video.
- the user who uploads the target image and target video to be associated can specifically be a promoter of a product or of a product video; the users who use this solution to play videos can specifically be product users, or recipients or viewers of product videos, etc.
- FIG. 6 is a schematic diagram of a third interface of a video playback method provided by an embodiment of the present disclosure.
- the target image may include an image of a three-dimensional object in the live-action image.
- the two-dimensional plane formed by the projection of the surface of the three-dimensional object can be used as the display position of the target image, and the video associated with the target image is displayed at the display position.
- the video associated with the vase will be played on the projection plane of the vase.
- the video playback device will continue to track the display position of the target image, so that it can adjust the playback position of the target video as the display position changes, and the target video is played at the real-time display position of the target image. Taking the vase in FIG. 6 as the target image for description: when the shooting angle of the live shot image changes (for example, the angle at which the vase is shot changes), the display position of the identified target image will also change accordingly. At this time, the video playback device will adjust the position and size of the image frame in real time and display the target video in the resized image frame (play area).
- for example, the side surface (projection surface) of the vase (a three-dimensional object) serving as the target image is first photographed at a vertical angle, and the display position of the vase can be identified. The camera is then rotated to adjust the shooting angle; as a result, the display position of the side surface (projection surface) of the vase is adjusted to a certain extent, and the corresponding target video is displayed at the adjusted display position.
- the user may perform a trigger operation on the target video, and the display interface of the video playback device will be switched so that the information associated with the target video is displayed. That is, the video playback apparatus will, in response to the user's triggering operation on the target video being played, determine the triggered information associated with the target video, and display the information.
- the information associated with the target video may be web page information, other image information, program information of other application programs, and the like.
- for example, if the target image is an image of a product and the target video shows how to use a product of a certain brand, the associated information can be a webpage introduction of that brand's product, or a purchase page for the product in an online store of an application.
- for another example, if the target image is an image of a scenic spot and the target video is a promotional video of the scenic spot, the associated information can be more pictures of the scenic spot, the introduction page of the scenic spot on its official website, or ticket purchase information for the attraction, and so on.
- the video playback device also supports breakpoint playback of the target video; that is, when the display position of the target image corresponding to a certain target video is lost in the live-action captured image (for example, when the target video has played to 00'30"), and the display position of the target image is then retrieved within a preset time period, the video playback device can continue to play the target video from its last playback progress (i.e. continue from 00'30").
- when playing the target video, the video playback device will also acquire the playback progress of the target video, and play the target video according to the playback progress.
- the video playback device may store the playback progress of the target video, such as current playback time information, and the like. Each time the target video is played, the video playback device can first extract the current playback time information of the target video and other playback progress, then determine the start time of the current playback, and finally play the target video from the start time. In this way, the breakpoint playback function for the target video is realized, and the user's audiovisual experience is further improved.
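The breakpoint playback behaviour can be sketched as follows; the class name, the resume-window parameter, and the use of a monotonic-clock timeout are all assumptions made for illustration:

```python
import time

class BreakpointPlayer:
    """Minimal sketch of breakpoint playback: remember the playback
    progress when the target image is lost, and resume from it if the
    target reappears within a preset window."""

    def __init__(self, resume_window_s=5.0):
        self.resume_window_s = resume_window_s
        self._progress = {}  # video ID -> (seconds played, time lost)

    def on_target_lost(self, video_id, position_s):
        self._progress[video_id] = (position_s, time.monotonic())

    def start_position(self, video_id):
        """Resume from the saved position if the target reappears within
        the preset window; otherwise start from the beginning."""
        saved = self._progress.get(video_id)
        if saved is None:
            return 0.0
        position_s, lost_at = saved
        if time.monotonic() - lost_at <= self.resume_window_s:
            return position_s
        return 0.0

player = BreakpointPlayer()
player.on_target_lost("video-3", 30.0)
print(player.start_position("video-3"))  # 30.0 (target retrieved promptly)
```

This mirrors the 00'30" example above: a promptly retrieved target resumes from 30 seconds, while an unknown or long-lost target restarts from zero.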
- the number of target images obtained by recognizing the real-life shot images may be multiple. That is to say, when the video playback device performs image recognition on a real-life shot image, it can recognize multiple target images in the image, and simultaneously play the target videos of the multiple target images.
- FIG. 7 is a schematic flowchart of another video playback method provided by an embodiment of the present disclosure. As shown in FIG. 7 , the method includes:
- Step 201: obtain a real-scene shot image;
- Step 202: determine the display position of each target image in the real-scene shot image;
- Step 203: store the target videos associated with the multiple target images in a preset video playlist, according to the acquisition order of those target videos;
- Step 204: determine at least one target video from the video playlist as the target video to be played, according to the storage order of the target videos in the playlist;
- Step 205: play each target video to be played at the display position of its associated target image.
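Steps 201 to 205 above can be sketched roughly as follows (the `detect` and `fetch_video` callables are stand-ins for the image-recognition and server-lookup components, which this disclosure does not specify at code level; rendering is left to the caller):

```python
class VideoPlaylist:
    """Preset video playlist from step 203: the most recently acquired
    target video is stored at the top (reverse acquisition order)."""

    def __init__(self):
        self._items = []  # (target_video, display_position), newest first

    def add(self, target_video, display_position):
        self._items.insert(0, (target_video, display_position))

    def head(self, n):
        # Step 204: select the target videos to be played, by storage order.
        return self._items[:n]


def play_videos_in_scene(frame, detect, fetch_video, playlist, max_playing=3):
    """Minimal sketch of steps 201-205 for one captured frame."""
    # Steps 201-202: detect target images and their display positions.
    for target_image, position in detect(frame):
        # Step 203: fetch the associated target video and store it.
        playlist.add(fetch_video(target_image), position)
    # Steps 204-205: return at most `max_playing` (video, position) pairs,
    # each to be played at its target image's display position.
    return playlist.head(max_playing)
```

The `max_playing` cap corresponds to limiting how many target videos are played simultaneously, which the description motivates by terminal performance.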
- the execution body of the video playback method provided in this embodiment is the aforementioned video playback device, which in some embodiments of the present disclosure specifically refers to a client or display terminal that can be installed on or integrated into a terminal.
- the user can operate the video playback device through the terminal, so that the video playback device can respond to the operation triggered by the user.
- a plurality of target images will be included in the real-scene shot image; the video playback device will determine the display position of each target image in the real-scene shot image, and play each target image's associated target video at its corresponding display position.
- the number of target videos may be large. If a large number of target videos are played at the same time, the terminal may freeze. Therefore, in order to obtain a better video playback effect and bring a better visual experience to the user, a video playlist may be set in the video playback device to limit the number of target videos played simultaneously in the real-life captured image.
- FIG. 8 is a schematic diagram of a fourth interface of a video playback method provided by an embodiment of the present disclosure.
- the video playback device obtains a live-action shot image, performs image recognition on the multiple target images in it, and frames the display position of each target image in turn with an image frame, obtaining several image frames (play areas).
- the playback area corresponding to the target image A is 801
- the playback area corresponding to the target image B is 802
- the playback area corresponding to the target image C is 803 .
- the video playback device will send each target image to the server, so that the server can determine the target video corresponding to each target image according to the preset association relationship, and return each target video to the video playback device.
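The server-side lookup described above, in which target videos are returned according to a preset association relationship, might be sketched as a simple keyed store (the image-fingerprint keys and the class name are assumptions for illustration, not from the patent):

```python
class AssociationStore:
    """Server-side sketch of the 'preset association relationship' between
    target images and target videos. Here images are identified by a
    fingerprint/key string, an assumption for illustration."""

    def __init__(self):
        self._videos = {}

    def associate(self, image_key, target_video):
        # Store an uploaded (target image, target video) pair.
        self._videos[image_key] = target_video

    def lookup(self, image_key):
        # Return the target video associated with a recognized target image,
        # or None when no association exists.
        return self._videos.get(image_key)
```

This also matches the upload flow described later, where users submit groups of target images and target videos to be associated and stored by the server.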
- after receiving the target videos, the video playback device stores them in a video playlist.
- the video playback device will select one or more target videos as the target videos to be played, according to the storage order of the target videos in the video playlist, and play each of them in the play area of its associated target image. For example, the target video 1 associated with the target image A is played in the corresponding play area 801, the target video 2 associated with the target image B is played in the corresponding play area 802, and the target video 3 associated with the target image C is played in the corresponding play area 803.
- the video playlist stores all the target videos obtained from the server from the moment when the real-life shooting images are obtained.
- the video playlist can also have a cleanup period, and the target videos stored in the list are cleaned up according to that period.
- the video playlist also stores the acquisition time of the target video.
- the target videos are stored in the video playlist in reverse order of their acquisition times. In other words, the more recently a target video was acquired, the closer it is stored to the top of the video playlist, and the higher its playback priority.
- a manner of playing audio and video separately may be adopted: for example, while the video pictures of multiple target videos to be played are played simultaneously, only the audio data of a single target video is played.
- when playing the target videos to be played, each of them can first be decoded to obtain its audio data and video data; the video data of every target video to be played is then played, while only the audio data of the most recently acquired target video stored in the video playlist is played.
- the audio data and the video data will be synchronously processed by using the audio and video synchronization technology to ensure the audio and video synchronization during playback.
- the user receives sound from only one target video at a time while watching the pictures of multiple target videos, so the audiovisual experience of the video information is not affected.
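The separate audio/video playback described above can be sketched as follows (assuming, as in the playlist description, that the videos to be played are ordered newest-first; `decode` is a stand-in for a real demuxer/decoder, which this disclosure does not name):

```python
def select_tracks(to_play, decode):
    """Decode each target video to be played and choose which streams to
    render: the video data of every target video, but only the audio data
    of the most recently acquired one (the head of the playlist)."""
    video_tracks = []
    audio_track = None
    for i, video in enumerate(to_play):
        audio, frames = decode(video)  # split into audio data and video data
        video_tracks.append(frames)
        if i == 0:  # newest acquisition, stored at the top of the playlist
            audio_track = audio
    return video_tracks, audio_track
```

A renderer would then display all entries of `video_tracks` in their play areas while routing only `audio_track` to the speaker, with the usual audio/video synchronization applied to that pair.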
- the video playback method provided by the embodiment of the present disclosure includes: obtaining a real-life captured image; detecting a target image in the real-life captured image; determining a display position of the target image in the real-life captured image; acquiring a target video associated with the target image; and playing the target video at the display position of the target image in the live-action image.
- the video playback method provided in this embodiment can reduce the preparation period and cost of presenting information with augmented reality display technology, giving users a better interactive and visual experience.
- FIG. 9 is a structural block diagram of a video playback apparatus provided by an embodiment of the present disclosure.
- the video playback device includes: an acquisition module 10 , a processing module 20 and a playback module 30 .
- the acquisition module 10 is configured to acquire a real-life shot image.
- the acquisition module may include an image acquisition device configured on the video playback device itself, which acquires the real-scene shot image by capturing the real scene in real time.
- the acquisition module may also acquire real-life captured images stored on, or captured by, the server or the video playback device itself, which is not limited in the present disclosure.
- the processing module 20 is configured to detect a target image in the live-action shot image, and determine a display position of the target image in the live-action shot image.
- the playing module 30 is configured to acquire a target video associated with the target image, and play the target video at the display position of the target image in the real-life shot image.
- the video playback device further includes: a first interaction module; the first interaction module is configured to determine, in response to the user's triggering operation on the target video being played, the triggered information associated with the target video; the playing module 30 then displays the information.
- the video playback device further includes: a second interaction module
- the second interaction module is configured to receive at least one group of target images and target videos to be associated uploaded by the user, and to upload the at least one group of target images and target videos to be associated to the server, so that the server associates and stores them.
- when acquiring the target video associated with the target image, the playback module 30 is specifically configured to: send the target image to the server, and receive the target video associated with the target image returned by the server.
- the live-action shot image includes multiple target images; when obtaining the target videos associated with the target images and playing them at the display positions of the target images in the live-action shot image, the playback module 30 is specifically configured to: store the target videos associated with the multiple target images in a preset video playlist according to their acquisition order; determine at least one target video from the video playlist as the target video to be played, according to the storage order of the target videos in the list; and play each target video to be played at the display position of its associated target image.
- the video playlist also stores the acquisition time of each target video; when storing the target videos associated with the multiple target images in the preset video playlist according to their acquisition order, the playback module 30 is specifically configured to store the target videos in the video playlist in reverse order of their acquisition times.
- when playing the target videos to be played, the playback module 30 is specifically configured to: decode each target video to be played to obtain its audio data and video data; play the video data of each target video to be played; and play the audio data of the most recently acquired target video stored in the video playlist.
- when playing the target video at the display position of the target image in the live-action captured image, the playback module 30 is specifically configured to: determine the corresponding play area in the live-action captured image according to the display position; perform video preprocessing on the target video according to the play area; and play the target video in the play area.
- when performing video preprocessing on the target video according to the play area and playing the target video in the play area, the playback module 30 is specifically configured to: perform three-dimensional space rendering on the video data of the target video according to the spatial features of the play area in the real-life shot image, so as to play the target video in the play area.
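The spatial placement of video data into a play area can be illustrated with a simple coordinate mapping (a bilinear sketch only, under the assumption that the play area is given by the target image's four vertex positions; the disclosure's three-dimensional space rendering would typically use a full perspective transform instead):

```python
def map_to_play_area(u, v, corners):
    """Map a normalized video coordinate (u, v) in [0, 1]^2 into the
    quadrilateral play area given by four vertex positions in the order
    (top-left, top-right, bottom-right, bottom-left). Bilinear, not
    perspective-correct: a minimal sketch of warping a video frame
    into its play area in the captured image."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    # Interpolate along the top and bottom edges by u, ...
    top_x, top_y = x0 + (x1 - x0) * u, y0 + (y1 - y0) * u
    bot_x, bot_y = x3 + (x2 - x3) * u, y3 + (y2 - y3) * u
    # ... then between the two edge points by v.
    return (top_x + (bot_x - top_x) * v, top_y + (bot_y - top_y) * v)
```

Applying this per pixel (or per vertex of a textured quad on the GPU) places the video picture over the target image's region of the live-action frame.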
- the display positions include: image edge positions and/or image vertex positions.
- the playing module 30 when playing the target video, is specifically configured to: acquire the playing progress of the target video; and play the target video according to the playing progress.
- the video playback device provided by the embodiment of the present disclosure is configured to perform the following method: obtain a real-life captured image; detect a target image in the real-life captured image; determine the display position of the target image in the real-life captured image; acquire the target video associated with the target image; and play the target video at the display position of the target image in the real-life captured image.
- the video playback device provided in this embodiment can reduce the preparation period and cost of presenting information with augmented reality display technology.
- the electronic device provided in this embodiment can be used to implement the technical solutions of the foregoing method embodiments, and the implementation principles and technical effects thereof are similar, and details are not described herein again in this embodiment.
- the electronic device 900 may be a terminal device or a server.
- the terminal equipment may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), in-vehicle terminals (such as in-vehicle navigation terminals), and wearable electronic devices, as well as stationary terminals such as digital TVs, desktop computers, and smart home devices.
- the electronic device shown in FIG. 10 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
- the electronic device 900 may include a processor 901 for executing the video playback method (such as a central processing unit or a graphics processor), which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage device 908 into a random access memory (RAM) 903. The RAM 903 also stores various programs and data necessary for the operation of the electronic device 900.
- the processor 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904.
- An input/output (I/O) interface 905 is also connected to bus 904 .
- an input device 906 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 907 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 908 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 909.
- the communication means 909 may allow the electronic device 900 to communicate wirelessly or by wire with other devices to exchange data. While FIG. 10 shows an electronic device 900 having various means, it should be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
- an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts of the embodiments of the present disclosure.
- the computer program may be downloaded and installed from the network via the communication device 909, or from the storage device 908, or from the ROM 902.
- when the computer program is executed by the processor 901, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
- Embodiments of the present disclosure also include a computer program, which, when executed by a processor, is configured to perform the above-mentioned functions defined in the methods of the embodiments of the present disclosure.
- the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
- the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport a program for use by or in connection with the instruction execution system, apparatus, or device .
- Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, electrical wire, optical fiber cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
- the aforementioned computer-readable medium carries one or more programs, and when the aforementioned one or more programs are executed by the electronic device, causes the electronic device to execute the methods shown in the foregoing embodiments.
- Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
- each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical functions.
- the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
- the units involved in the embodiments of the present disclosure may be implemented in a software manner, and may also be implemented in a hardware manner.
- the name of a unit does not, under certain circumstances, constitute a limitation on the unit itself; for example, the first obtaining unit may also be described as "a unit that obtains at least two Internet Protocol addresses".
- exemplary types of hardware logic components that may be used include: field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
- a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
- the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
- machine-readable storage media would include one or more wire-based electrical connections, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM), flash memory, optical fiber, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
- a video playback method includes:
- a target video associated with the target image is acquired, and the target video is played at the display position of the target image in the live-action image.
- the method further includes:
- the information is displayed.
- the method further includes:
- the acquiring the target video associated with the target image includes:
- the target video associated with the target image returned by the server is received.
- the live-action captured image includes multiple target images
- the obtaining of the target video associated with the target image, and playing the target video at the display position of the target image in the live-action image, includes:
- storing the target videos associated with the plurality of target images in a preset video playlist;
- determining, according to the storage order of each target video in the video playlist, at least one target video from the video playlist as the target video to be played;
- playing the target video to be played.
- the acquisition time of the target video is also stored in the video playlist
- the storing the target videos associated with the plurality of target images in the preset video playlist according to the acquisition sequence of the target videos associated with the plurality of target images including:
- the target videos are stored in the video playlist according to the reverse order of the acquisition time of each target video.
- the playing the target video to be played includes:
- the playing the target video at the display position of the target image in the live-action image includes:
- performing video preprocessing on the target video according to the play area, and playing the target video in the play area includes:
- the display positions include: image edge positions and/or image vertex positions.
- the playing the target video includes:
- a video playback device includes: an acquisition module, a processing module, and a playback module;
- the acquisition module is configured to acquire the real-scene shot image
- a processing module configured to detect a target image in the live-action shot image, and determine a display position of the target image in the live-action shot image
- a playing module is configured to acquire a target video associated with the target image, and play the target video at the display position of the target image in the real-life shot image.
- the video playback device further includes: a first interaction module
- the first interaction module is configured to determine, in response to the user's triggering operation on the target video being played, the triggered information associated with the target video, so that the information can be displayed by the playing module.
- the video playback device further includes: a second interaction module
- the second interaction module is configured to receive at least one group of target images and target videos to be associated uploaded by the user, and to upload the at least one group of target images and target videos to be associated to the server, so that the server associates and stores them.
- when acquiring the target video associated with the target image, the playback module is specifically configured to: send the target image to the server, and receive the target video associated with the target image returned by the server.
- the live-action shot image includes multiple target images; when obtaining the target videos associated with the target images and playing them at the display positions of the target images in the live-action shot image, the playback module is specifically configured to: store the target videos associated with the multiple target images in a preset video playlist according to their acquisition order; determine at least one target video from the video playlist as the target video to be played, according to the storage order of the target videos in the list; and play each target video to be played at the display position of its associated target image.
- the video playlist also stores the acquisition time of each target video; when storing the target videos associated with the multiple target images in the preset video playlist according to their acquisition order, the playback module is specifically configured to store the target videos in the video playlist in reverse order of their acquisition times.
- when playing the target videos to be played, the playing module is specifically configured to: decode each target video to be played to obtain its audio data and video data; play the video data of each target video to be played; and play the audio data of the most recently acquired target video stored in the video playlist.
- when playing the target video at the display position of the target image in the live-action captured image, the playing module is specifically configured to: determine the corresponding play area in the live-action captured image according to the display position; perform video preprocessing on the target video according to the play area; and play the target video in the play area.
- when performing video preprocessing on the target video according to the play area and playing the target video in the play area, the playback module is specifically configured to: perform three-dimensional space rendering on the video data of the target video according to the spatial features of the play area in the real-life shot image, so as to play the target video in the play area.
- the display positions include: image edge positions and/or image vertex positions.
- the playback module when playing the target video, is specifically configured to: acquire the playback progress of the target video; and play the target video according to the playback progress.
- an electronic device includes: at least one processor and a memory;
- the memory stores computer-executable instructions
- the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor executes the video playback method as described in any preceding item.
- a computer-readable storage medium stores computer-executable instructions; when a processor executes the computer-executable instructions, the video playback method described in any preceding item is implemented.
- a computer program product includes a computer program that, when executed by a processor, implements the video playback method described in any preceding item.
- a computer program when executed by a processor, is used to implement the video playback method according to any one of the preceding items.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Computer Graphics (AREA)
- Human Computer Interaction (AREA)
- Processing Or Creating Images (AREA)
- Television Signal Processing For Recording (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
Claims (26)
- A video playback method, comprising: obtaining a real-scene captured image; detecting a target image in the real-scene captured image; determining a display position of the target image in the real-scene captured image; and acquiring a target video associated with the target image, and playing the target video at the display position of the target image in the real-scene captured image.
- The video playback method according to claim 1, further comprising: in response to a user's triggering operation on the target video being played, determining the triggered information associated with the target video; and displaying the information.
- The video playback method according to claim 1 or 2, further comprising: receiving at least one group of target images and target videos to be associated uploaded by a user; and uploading the at least one group of target images and target videos to be associated to a server, so that the server associates and stores the target images and target videos to be associated.
- The video playback method according to claim 3, wherein acquiring the target video associated with the target image comprises: sending the target image to the server; and receiving the target video associated with the target image returned by the server.
- The video playback method according to any one of claims 1 to 4, wherein the real-scene captured image includes multiple target images; and acquiring the target video associated with the target image and playing the target video at the display position of the target image in the real-scene captured image comprises: storing the target videos associated with the multiple target images in a preset video playlist according to the acquisition order of the target videos associated with the multiple target images; determining, according to the storage order of the target videos in the video playlist, at least one target video from the video playlist as a target video to be played; and playing the target video to be played at the display position of the target image associated with the target video to be played.
- The video playback method according to claim 5, wherein the video playlist also stores the acquisition time of each target video; and storing the target videos associated with the multiple target images in the preset video playlist according to their acquisition order comprises: storing the target videos in the video playlist in reverse order of their acquisition times.
- The video playback method according to claim 5 or 6, wherein playing the target video to be played comprises: decoding each target video to be played to obtain its audio data and video data; playing the video data of each target video to be played; and playing the audio data of the most recently acquired target video to be played stored in the video playlist.
- The video playback method according to any one of claims 1 to 7, wherein playing the target video at the display position of the target image in the real-scene captured image comprises: determining a corresponding play area in the real-scene captured image according to the display position; and performing video preprocessing on the target video according to the play area, and playing the target video in the play area.
- The video playback method according to claim 8, wherein performing video preprocessing on the target video according to the play area and playing the target video in the play area comprises: performing three-dimensional space rendering on the video data of the target video according to the spatial features of the play area in the real-scene captured image, so as to play the target video in the play area.
- The video playback method according to claim 8 or 9, wherein the display position includes an image edge position and/or an image vertex position.
- The video playback method according to any one of claims 1 to 10, wherein playing the target video comprises: acquiring the playback progress of the target video; and playing the target video according to the playback progress.
- A video playback apparatus, comprising: an acquisition module configured to acquire a real-scene captured image; a processing module configured to detect a target image in the real-scene captured image and determine a display position of the target image in the real-scene captured image; and a playback module configured to acquire a target video associated with the target image and play the target video at the display position of the target image in the real-scene captured image.
- The video playback apparatus according to claim 12, further comprising a first interaction module configured to determine, in response to a user's triggering operation on the target video being played, the triggered information associated with the target video; the playback module is further configured to display the information.
- The video playback apparatus according to claim 12 or 13, further comprising a second interaction module configured to receive at least one group of target images and target videos to be associated uploaded by a user, and to upload the at least one group of target images and target videos to be associated to a server, so that the server associates and stores them.
- The video playback apparatus according to claim 14, wherein when acquiring the target video associated with the target image, the playback module is specifically configured to: send the target image to the server, and receive the target video associated with the target image returned by the server.
- The video playback apparatus according to any one of claims 12 to 15, wherein the real-scene captured image includes multiple target images; when acquiring the target videos associated with the target images and playing them at the display positions of the target images in the real-scene captured image, the playback module is specifically configured to: store the target videos associated with the multiple target images in a preset video playlist according to their acquisition order; determine, according to the storage order of the target videos in the video playlist, at least one target video from the video playlist as a target video to be played; and play the target video to be played at the display position of its associated target image.
- The video playback apparatus according to claim 16, wherein the video playlist also stores the acquisition time of each target video; when storing the target videos associated with the multiple target images in the preset video playlist according to their acquisition order, the playback module is specifically configured to store the target videos in the video playlist in reverse order of their acquisition times.
- The video playback apparatus according to claim 16 or 17, wherein when playing the target video to be played, the playback module is specifically configured to: decode each target video to be played to obtain its audio data and video data; play the video data of each target video to be played; and play the audio data of the most recently acquired target video to be played stored in the video playlist.
- The video playback apparatus according to any one of claims 12 to 18, wherein when playing the target video at the display position of the target image in the real-scene captured image, the playback module is specifically configured to: determine a corresponding play area in the real-scene captured image according to the display position; and perform video preprocessing on the target video according to the play area, and play the target video in the play area.
- The video playback apparatus according to claim 19, wherein when performing video preprocessing on the target video according to the play area and playing the target video in the play area, the playback module is specifically configured to: perform three-dimensional space rendering on the video data of the target video according to the spatial features of the play area in the real-scene captured image, so as to play the target video in the play area.
- The video playback apparatus according to claim 19 or 20, wherein the display position includes an image edge position and/or an image vertex position.
- The video playback apparatus according to any one of claims 12 to 21, wherein when playing the target video, the playback module is specifically configured to: acquire the playback progress of the target video, and play the target video according to the playback progress.
- An electronic device, comprising: at least one processor; and a memory; the memory stores computer-executable instructions; and the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the video playback method according to any one of claims 1 to 11.
- A computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions, and when a processor executes the computer-executable instructions, the video playback method according to any one of claims 1 to 11 is implemented.
- A computer program product, comprising a computer program that, when executed by a processor, implements the video playback method according to any one of claims 1 to 11.
- A computer program that, when executed by a processor, implements the video playback method according to any one of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/250,505 US20240062479A1 (en) | 2020-10-28 | 2021-08-30 | Video playing method and apparatus, electronic device, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011173352.9A CN112288877A (zh) | 2020-10-28 | 2020-10-28 | Video playing method and apparatus, electronic device and storage medium |
CN202011173352.9 | 2020-10-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022088908A1 true WO2022088908A1 (zh) | 2022-05-05 |
Family
ID=74372374
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/115208 WO2022088908A1 (zh) | 2020-10-28 | 2021-08-30 | Video playing method and apparatus, electronic device and storage medium |
Country Status (3)
Country | Link |
---|---|
US (1) | US20240062479A1 (zh) |
CN (1) | CN112288877A (zh) |
WO (1) | WO2022088908A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112288877A (zh) * | 2020-10-28 | 2021-01-29 | Beijing ByteDance Network Technology Co., Ltd. | Video playing method and apparatus, electronic device and storage medium |
CN114615426A (zh) * | 2022-02-17 | 2022-06-10 | Vivo Mobile Communication Co., Ltd. | Shooting method and apparatus, electronic device and readable storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018007779A1 (en) * | 2016-07-08 | 2018-01-11 | Sony Interactive Entertainment Inc. | Augmented reality system and method |
CN110809187A (zh) * | 2019-10-31 | 2020-02-18 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Video selection method, video selection apparatus, storage medium and electronic device |
CN111339365A (zh) * | 2018-12-19 | 2020-06-26 | Beijing Qihoo Technology Co., Ltd. | Video display method and apparatus |
CN111337015A (zh) * | 2020-02-28 | 2020-06-26 | Chongqing Terminus Smart Technology Co., Ltd. | Real-scene navigation method and system based on aggregated business-district big data |
CN111833460A (zh) * | 2020-07-10 | 2020-10-27 | Beijing ByteDance Network Technology Co., Ltd. | Augmented reality image processing method and apparatus, electronic device and storage medium |
CN112288877A (zh) * | 2020-10-28 | 2021-01-29 | Beijing ByteDance Network Technology Co., Ltd. | Video playing method and apparatus, electronic device and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013050883A (ja) * | 2011-08-31 | 2013-03-14 | Nintendo Co Ltd | Information processing program, information processing system, information processing apparatus, and information processing method |
CN103346955B (zh) * | 2013-06-18 | 2016-08-24 | Tencent Technology (Shenzhen) Co., Ltd. | Image processing method, apparatus and terminal |
CN105989628A (zh) * | 2015-02-06 | 2016-10-05 | Beijing Wangti Technology Development Co., Ltd. | Method and system device for acquiring information through a mobile terminal |
CN109168034B (zh) * | 2018-08-28 | 2020-04-28 | Baidu Online Network Technology (Beijing) Co., Ltd. | Commodity information display method and apparatus, electronic device and readable storage medium |
CN111273775A (zh) * | 2020-01-16 | 2020-06-12 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Augmented reality glasses, KTV implementation method based on augmented reality glasses, and medium |
-
2020
- 2020-10-28 CN CN202011173352.9A patent/CN112288877A/zh active Pending
-
2021
- 2021-08-30 WO PCT/CN2021/115208 patent/WO2022088908A1/zh active Application Filing
- 2021-08-30 US US18/250,505 patent/US20240062479A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US20240062479A1 (en) | 2024-02-22 |
CN112288877A (zh) | 2021-01-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022088918A1 (zh) | Virtual image display method and apparatus, electronic device and storage medium | |
US11265603B2 (en) | Information processing apparatus and method, display control apparatus and method, reproducing apparatus and method, and information processing system | |
CN106803966B (zh) | Multi-user network live-streaming method and apparatus, and electronic device thereof | |
US20190030441A1 (en) | Using a Portable Device to Interface with a Scene Rendered on a Main Display | |
US9384588B2 (en) | Video playing method and system based on augmented reality technology and mobile terminal | |
US20140240444A1 (en) | Systems and methods for real time manipulation and interaction with multiple dynamic and synchronized video streams in an augmented or multi-dimensional space | |
WO2023051185A1 (zh) | Image processing method and apparatus, electronic device and storage medium | |
WO2022088908A1 (zh) | Video playing method and apparatus, electronic device and storage medium | |
WO2021184952A1 (zh) | Augmented reality processing method and apparatus, storage medium and electronic device | |
US10560752B2 (en) | Apparatus and associated methods | |
WO2022062643A1 (zh) | Game live-streaming interaction method and apparatus | |
WO2022007565A1 (zh) | Augmented reality image processing method and apparatus, electronic device and storage medium | |
CN109600559B (zh) | Video special-effect adding method and apparatus, terminal device and storage medium | |
CN112291590A (zh) | Video processing method and device | |
US20190130193A1 (en) | Virtual Reality Causal Summary Content | |
WO2022037484A1 (zh) | Image processing method, apparatus, device and storage medium | |
US20180048877A1 (en) | File format for indication of video content | |
WO2022132033A1 (zh) | Augmented reality-based display method, apparatus, device and storage medium | |
WO2023103720A1 (zh) | Video special-effect processing method and apparatus, electronic device and program product | |
US20190075232A1 (en) | Shared experiences in panoramic video | |
US20240155074A1 (en) | Movement Tracking for Video Communications in a Virtual Environment | |
JP2018033107A (ja) | Video distribution apparatus and distribution method | |
CN107197339B (zh) | Display control method and apparatus for video bullet-screen comments, and head-mounted display device | |
WO2022227918A1 (zh) | Video processing method, device and electronic device | |
CN108985275B (zh) | Display tracking method and apparatus for augmented reality device and electronic device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21884640 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18250505 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.08.2023) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21884640 Country of ref document: EP Kind code of ref document: A1 |