CN112738635B - Method and equipment for playing video information - Google Patents

Method and equipment for playing video information

Info

Publication number
CN112738635B
CN112738635B (application CN202011582963.9A)
Authority
CN
China
Prior art keywords
video information
playing
time point
play
target video
Prior art date
Legal status
Active
Application number
CN202011582963.9A
Other languages
Chinese (zh)
Other versions
CN112738635A (en)
Inventor
罗剑嵘
Current Assignee
Shanghai Zhangmen Science and Technology Co Ltd
Original Assignee
Shanghai Zhangmen Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Zhangmen Science and Technology Co Ltd filed Critical Shanghai Zhangmen Science and Technology Co Ltd
Priority to CN202011582963.9A priority Critical patent/CN112738635B/en
Publication of CN112738635A publication Critical patent/CN112738635A/en
Priority to PCT/CN2021/126321 priority patent/WO2022142642A1/en
Application granted granted Critical
Publication of CN112738635B publication Critical patent/CN112738635B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/238Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2387Stream processing in response to a playback request from an end-user, e.g. for trick-play
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The object of the present application is to provide a method and apparatus for playing video information, comprising: obtaining target video information, wherein the target video information comprises a starting playing point and a stopping playing point; determining a first playing time point and a second playing time point in the target video information; and, in the process of playing the target video information, continuing to play the target video information according to a first playing mode when the target video information is played to the first playing time point, and continuing to play the target video information according to a second playing mode when the target video information is played to the second playing time point. The method and the device enable the user equipment to play the video content between the first playing time point and the second playing time point in a loop, realizing a special video presentation effect.

Description

Method and equipment for playing video information
Technical Field
The present application relates to the field of communications, and in particular, to a technique for playing video information.
Background
With the development of the internet, video applications (especially short-video applications) have grown explosively, and people increasingly tend to record their work, study, and life with videos (e.g., vlogs). Video has the advantages of low production cost, fragmented production and consumption, fast spread, and strong social attributes, and video shooting is simple to operate; however, achieving the various video technical effects requires a photographer to invest a certain cost in learning to shoot.
Disclosure of Invention
An object of the present application is to provide a method and apparatus for playing video information.
According to one aspect of the present application, there is provided a method for playing video information, applied to a user equipment, the method comprising:
obtaining target video information, wherein the target video information comprises a starting playing point and a stopping playing point;
determining a first playing time point and a second playing time point in the target video information;
and continuing to play the target video information according to a first play mode when the target video information is played to the first play time point, and continuing to play the target video information according to a second play mode when the target video information is played to the second play time point, wherein the first play mode comprises a play sequence from the first play time point to the second play time point, and the second play mode comprises a play sequence from the second play time point to the first play time point.
According to one aspect of the present application, there is provided a user equipment for playing video information, the apparatus comprising:
a one-one module, configured to obtain target video information, wherein the target video information comprises a starting play point and a stopping play point;
a one-two module, configured to determine a first playing time point and a second playing time point in the target video information;
and a one-three module, configured to, in the process of playing the target video information, continue to play the target video information according to a first playing mode when the target video information is played to the first playing time point, and continue to play the target video information according to a second playing mode when the target video information is played to the second playing time point, wherein the first playing mode comprises a playing sequence from the first playing time point to the second playing time point, and the second playing mode comprises a playing sequence from the second playing time point to the first playing time point.
According to one aspect of the present application, there is provided an apparatus for playing video information, the apparatus comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the operations of any of the methods described above.
According to another aspect of the present application, there is provided a computer readable medium storing instructions that, when executed, cause a system to perform the operations of any of the methods described above.
According to one aspect of the present application, there is provided a computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method as described above.
Compared with the prior art, the user equipment in the present application acquires the target video information and, after determining the first playing time point and the second playing time point in the target video information, automatically continues to play the target video information according to the first playing mode when playback reaches the first playing time point, and continues to play it according to the second playing mode when playback then reaches the second playing time point. The present application automatically generates special video playing effects (for example, floating or ghosting effects), thereby reducing the technical cost for users of learning to shoot such videos, while presenting different video playing effects to users and improving the video watching experience.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings, in which:
FIG. 1 shows a schematic diagram according to the present application;
FIG. 2 shows a flow chart of a method for playing video information, applied to a user device, according to one embodiment of the present application;
FIG. 3 illustrates a flow chart of a method for playing video information according to another embodiment of the present application;
FIG. 4 illustrates a device configuration diagram of a user device for playing video information according to one embodiment of the present application;
FIG. 5 illustrates an exemplary system that may be used to implement the various embodiments described in the present application.
The same or similar reference numbers in the drawings refer to the same or similar parts.
Detailed Description
The present application is described in further detail below with reference to the accompanying drawings.
In one typical configuration of the present application, the terminal, the device of the service network, and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or nonvolatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The device referred to in the present application includes, but is not limited to, a user device, a network device, or a device formed by integrating a user device and a network device through a network. The user equipment includes, but is not limited to, any mobile electronic product capable of man-machine interaction with a user (for example, through a touch pad), such as a smart phone or a tablet computer, and the mobile electronic product may adopt any operating system, for example the Android operating system or the iOS operating system. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of servers; here, the cloud is composed of a large number of computers or network servers based on Cloud Computing, a kind of distributed computing in which a virtual supercomputer is composed of a group of loosely coupled computers. The network includes, but is not limited to, the internet, wide area networks, metropolitan area networks, local area networks, VPN networks, wireless ad hoc networks, and the like. Preferably, the device may also be a program running on the user device, the network device, or a device formed by integrating the user device with the network device, a touch terminal, or the network device with a touch terminal through a network.
Of course, those skilled in the art will appreciate that the above-described devices are merely examples, and that other devices, now known or hereafter arising, that are applicable to the present application are also intended to fall within its scope of protection and are incorporated herein by reference.
In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Fig. 1 shows a typical scenario of the present application. A user holds a user device and captures a jump video of a person (for example, video content in which the person jumps up from the ground and then falls back to the ground, naturally including some frames before the jump and after the landing), where the video duration is 1 min. After the user device acquires the video, the first and second play time points in the video are first determined; for example, they are manually set by the user after capturing is complete: the first play time point is the time point at which the person reaches the highest point of the jump (point F, 0:48), and the second play time point is a time point during the fall from the highest point (for example, point B, 0:50, corresponding to a middle height during the fall). After the two time points are determined, the user equipment plays the video from its initial play point (0:00) at the default play speed; when the video is about to reach 0:48, the user equipment plays it at a speed lower than the default play speed; when playback reaches 0:48, the user equipment enables the first play mode (for example, it continues to play only the period from the first play time point to the second play time point); when the first play mode reaches 0:50, the user equipment enables the second play mode (for example, it continues to play only the period from the second play time point back to the first play time point); when playback returns to 0:48, the user equipment re-enables the first play mode, and so on. In other words, the picture presents a cyclic motion between points F and B.
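The cyclic playback between points F and B described above can be sketched as follows. This is a minimal Python sketch under assumed simplifications: the video is modeled as whole-second frame timestamps, and the first play time point precedes the second; the names `boomerang_sequence`, `f_point`, and `b_point` are hypothetical, not taken from the patent.

```python
def boomerang_sequence(duration, f_point, b_point, cycles=2, step=1):
    """Return the order of frame timestamps played: normal playback from
    the start point up to f_point, then the cycle f_point -> b_point
    (first play mode) and b_point -> f_point (second play mode), repeated."""
    assert 0 <= f_point < b_point <= duration
    order = list(range(0, f_point + 1, step))                   # normal playback up to F
    forward = list(range(f_point + step, b_point + 1, step))    # first play mode: F -> B
    backward = list(range(b_point - step, f_point - 1, -step))  # second play mode: B -> F
    for _ in range(cycles):
        order.extend(forward)
        order.extend(backward)
    return order

# A 60 s clip with point F at 48 s and point B at 50 s, cycled twice:
print(boomerang_sequence(60, 48, 50, cycles=2))
```

In a real player the loop would of course run indefinitely and operate on frames rather than second marks; the fixed `cycles` argument only makes the sketch terminate.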
Fig. 2 shows a method for playing video information according to an embodiment of the present application, applied to a user equipment, the method including steps S101, S102 and S103.
Specifically, in step S101, the user equipment obtains target video information, where the target video information includes a start play point and an end play point. For example, the target video information may be a short video (e.g., with a duration of less than 5 minutes) or a regular video (e.g., with no limit on duration); in some embodiments, the duration of the target video information is greater than a first duration threshold. That is, the subsequent operations can be performed only if the target video information has a certain minimum duration, so that the presentation of the target video information after subsequent processing can be perceived by the human eye. The first duration threshold is set by default by the user equipment, and if the duration of the video acquired by the user equipment does not meet the first duration threshold (for example, 20 s), the video cannot be used as target video information. The initial play point of the target video information is 0:00 by default; of course, the starting time point may also be a user-defined play time point set after user adjustment. In some embodiments, a target application is installed in the user equipment, where the target application is a video application or an application with video functions; the user uploads target video information to the target application, and the user equipment acquires the target video information through the target application.
In some embodiments, in step S101, the user equipment acquires one or more pieces of candidate video information uploaded by the user, selects one piece of candidate video information from them, detects whether it meets a preset condition, and if so, determines it as the target video information. For example, the user captures or otherwise obtains one or more pieces of candidate video information and uploads them (via the target application) to the user device. The user equipment randomly selects one piece of candidate video information from them (for example, V1, V2, V3 … Vn) and detects whether it meets the preset condition; if not, it randomly selects another piece and detects that one, continuing in this way until one piece of candidate video information is found to meet the preset condition, which is then taken as the target video information.
In some embodiments, obtaining the candidate video information uploaded by the user and determining the target video information includes: acquiring the one or more pieces of candidate video information uploaded by the user, obtaining their arrangement order, traversing them in that order, and detecting in turn whether each candidate meets the preset condition, until one piece of candidate video information is detected to meet the preset condition and is determined as the target video information. For example, the user uploads to the user equipment one or more pieces of candidate video information arranged in a certain order (for example, ID1, ID2, ID3, … arranged in turn according to the identification of each candidate). The user equipment starts detection from candidate ID1; if it does not meet the preset condition, the user equipment selects candidate ID2 and detects it; if that also fails, detection continues over the subsequent candidates in order until one candidate is detected to meet the preset condition and is taken as the target video information.
In some embodiments, if the user equipment detects that at least one piece of candidate video information among the one or more pieces satisfies the preset condition, the at least one piece may be presented to the user for the user to choose the target video information; alternatively, the user equipment takes the first detected candidate satisfying the preset condition as the target video, and after that target video has been played according to the method of the present scheme, continues to acquire the next target video for playing.
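The sequential traversal described above can be sketched as follows. This is a hypothetical Python sketch: `pick_target_video` and the `meets_condition` predicate are illustrative names, and the predicate stands in for the preset-condition check described in the embodiments that follow.

```python
def pick_target_video(candidates, meets_condition):
    """Traverse the candidates in their upload/arrangement order and
    return the first one satisfying the preset condition, or None if
    no candidate qualifies."""
    for video in candidates:
        if meets_condition(video):
            return video
    return None

# Hypothetical candidates tagged with whether they contain a motion track.
videos = [{"id": "ID1", "has_motion": False},
          {"id": "ID2", "has_motion": True},
          {"id": "ID3", "has_motion": True}]
target = pick_target_video(videos, lambda v: v["has_motion"])
print(target["id"])  # → ID2
```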
In some embodiments, the preset conditions include at least any one of:
1) The video content comprises biological information;
2) The video content comprises motion trail change information;
3) The video content comprises object information;
4) The video content is non-pure lecture content.
For example, the biological information includes a person or an animal, and the object information may include a ball, a water cup, flowing water, or any other visible object. The video content needs to include frames in which the person, animal, or object generates a movement track, for example a track generated by a person or animal jumping, running, or falling, or by an object being thrown or dropped; that is, the movement of the person or object in the video content is obvious rather than static (unlike, for example, a building or a planted tree). In some embodiments, if a piece of video content contains only lecture-class content (for example, only the voice of a lecturer appears over the video frames), the user device determines that such a video cannot be used as target video information. In some embodiments, if a piece of video content is lecture-class content but the lecturer is rich in language and expression, the video may to some extent be used as target video information. In some embodiments, if video content includes lecture-like content and there is also a motion track of an object or a living being (e.g., a person), such a video may be used as target video information.
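The preset-condition check can be sketched as a simple predicate over hypothetical per-video metadata flags (the flag names are assumptions for illustration; a real implementation would derive them from content analysis):

```python
def meets_preset_condition(video):
    """Return True if the candidate satisfies at least one preset
    condition: biological info, a motion track, or object info is
    present, and the content is not a pure lecture with no motion."""
    if video.get("pure_lecture") and not video.get("has_motion"):
        return False  # pure lecture content without any motion track is rejected
    return (video.get("has_biological_info", False)
            or video.get("has_motion", False)
            or video.get("has_object_info", False))

print(meets_preset_condition({"has_motion": True}))    # → True
print(meets_preset_condition({"pure_lecture": True}))  # → False
```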
In step S102, the user equipment determines a first play time point and a second play time point in the target video information. For example, the user equipment may automatically determine the first playing time point and the second playing time point according to the video content of the target video information, or determine the first playing time point and the second playing time point according to the user's custom mark when uploading the target video information.
In some embodiments, determining the first play time point and the second play time point in the target video information includes: determining the first play time point and the second play time point according to manual annotation information in the target video information. For example, when uploading the target video information, the user manually adjusts the progress bar of the target video information to configure two time points. After the target video information is uploaded to the target application, the user equipment determines the first and second play time points from the configuration information uploaded by the user: for example, the user drags the progress bar from the starting time point to 0:10 and manually marks this point as the first play time point, continues to drag the progress bar to 0:23 and manually marks this point as the second play time point, and then uploads the target video together with these two settings as configuration information. Based on the configuration information, the user equipment determines 0:10 as the first play time point and 0:23 as the second play time point. Letting the user define the first and second play time points gives the user an independent creative experience and improves the overall user experience.
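Reading the user's manual annotation out of the uploaded configuration can be sketched as follows (the configuration keys and the `m:ss` timestamp format are assumptions for illustration):

```python
def to_seconds(stamp):
    """Convert a progress-bar timestamp like "0:23" into seconds."""
    minutes, seconds = stamp.split(":")
    return int(minutes) * 60 + int(seconds)

def parse_annotation(config):
    """Read the two user-annotated play time points from the
    configuration uploaded together with the target video."""
    first = to_seconds(config["first_play_point"])
    second = to_seconds(config["second_play_point"])
    if first == second:
        raise ValueError("the two play time points must differ")
    return first, second

print(parse_annotation({"first_play_point": "0:10",
                        "second_play_point": "0:23"}))  # → (10, 23)
```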
In some embodiments, the determining the first play time point and the second play time point in the target video information includes: acquiring one or more video frames in the target video information, selecting a video frame, and determining a time point corresponding to the video frame as a first playing time point if the distance between the biological position and the bottom in the video frame is greater than a first distance threshold; and selecting a video frame, and determining a time point corresponding to the video frame as a second playing time point if the distance between the biological position and the bottom in the video frame is smaller than a second distance threshold, wherein the distance between the biological position and the bottom in the video frame corresponding to any time point from the first playing time point to the second playing time point is smaller than the first distance threshold and larger than the second distance threshold. For example, the target video information includes content of motion of a living being (for example, an animal or a person), one or more video frames of the target video information each include the living being, and in each video frame, position information of the living being is inconsistent with position information in other video frames. 
The user equipment arranges the video frames in the time order of the progress bar, takes the bottom horizontal edge of each video frame as a reference, and obtains the vertical distance between the biological position in each frame and that bottom edge. If the vertical distance of the biological position in a video frame is greater than the first distance threshold, the user equipment takes the time point corresponding to that frame as the first play time point (for example, selecting the time point corresponding to the highest position during the creature's motion); if the vertical distance of the biological position in a video frame is smaller than the second distance threshold, the user equipment takes the time point corresponding to that frame as the second play time point (for example, selecting a time point corresponding to a lower position during the creature's motion, where the lower position need not be the lowest point of the trajectory). In some embodiments, in the video frames from the first play time point to the second play time point, the vertical distance between the biological position and the bottom edge at each time point lies between the first distance threshold and the second distance threshold; that is, from the first play time point to the second play time point, the vertical distance of the biological position gradually decreases. This provides a basis for generating a floating video effect after the first and second play modes are subsequently triggered.
Or in some embodiments, the user equipment may use the vertical direction of each video frame as a reference, obtain a horizontal distance between the biological position in each video frame and the vertical direction, and if the horizontal distance between the biological position in a video frame is greater than a first horizontal distance threshold, the user equipment uses a time point corresponding to the video frame as a first playing time point; if the horizontal distance of the biological position in the video frame is smaller than the second horizontal distance threshold, the user equipment takes the time point corresponding to the video frame as a second playing time point.
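The vertical-distance variant can be sketched as follows, assuming each frame has already been reduced to a `(timestamp, height)` pair where `height` is the subject's vertical distance from the frame's bottom edge (the detection step producing these pairs is outside this sketch):

```python
def find_play_points(frames, first_threshold, second_threshold):
    """frames: list of (timestamp, height) pairs in time order.
    Returns (first_play_point, second_play_point): the first timestamp
    whose subject height exceeds first_threshold, and the first later
    timestamp whose height drops below second_threshold."""
    first_point = second_point = None
    for t, height in frames:
        if first_point is None and height > first_threshold:
            first_point = t                      # near the peak of the motion
        elif first_point is not None and height < second_threshold:
            second_point = t                     # a lower (not lowest) position
            break
    return first_point, second_point

# A jump: the subject rises, peaks around t=48, and falls back down.
frames = [(46, 0.2), (47, 0.6), (48, 0.9), (49, 0.5), (50, 0.3)]
print(find_play_points(frames, first_threshold=0.8, second_threshold=0.4))  # → (48, 50)
```

The frame at t=49 (height 0.5) lies between the two thresholds, matching the requirement that heights between the two play points stay between the thresholds.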
In some embodiments, determining the first play time point and the second play time point in the target video information includes: performing a matching query in a first streaming voice library on the audio information corresponding to each time point in the target video information, and, if the audio information corresponding to a time point is successfully matched, determining that time point as the second play time point; and determining the initial sounding time point of the person corresponding to the audio information as the first play time point. For example, if the video content of the target video information mainly includes speaking-class content (for example, also including a moving picture of the speaker or other creatures), the user device splits the total progress of the target video information into a plurality of test time periods (for example, for a total duration of 0-60 s: 0-10 s, 5-10 s, 10-20 s, and so on). For these test periods, the user device takes each period as a time point, sequentially obtains the audio information in each period, and performs a matching query on each piece of audio information in the first streaming voice library to detect whether the audio information is a popular network phrase (for example, "true", "is no pain of your mind").
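The audio-matching variant can be sketched as follows, assuming each test period has already been transcribed to text (the names and the substring match against the phrase library are simplifying assumptions; an actual streaming voice library would match audio features, not strings):

```python
def find_audio_play_points(segments, phrase_library):
    """segments: list of (start, end, transcript) test periods in time
    order. Returns (first_play_point, second_play_point): the matched
    period's end marks the second play point, and its start approximates
    the initial sounding time of the phrase (the first play point)."""
    for start, end, transcript in segments:
        if any(phrase in transcript for phrase in phrase_library):
            return start, end
    return None, None

segments = [(0, 10, "hello everyone"), (10, 20, "that is so true")]
print(find_audio_play_points(segments, {"true"}))  # → (10, 20)
```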
In step S103, during the process of playing the target video information, the user equipment continues to play the target video information according to a first play mode when playback reaches the first play time point, and continues to play it according to a second play mode when playback then reaches the second play time point, where the first play mode includes a play sequence from the first play time point to the second play time point, and the second play mode includes a play sequence from the second play time point to the first play time point. For example, the user equipment plays the target video information along its progress bar from 0:00. In some embodiments, the method further includes step S104 (not shown): in the process of playing the target video information, if the duration difference between the time point corresponding to the current playing progress and the first play time point is smaller than a third duration threshold, the user equipment plays the target video information at a first play speed until the first play time point is reached, where the first play speed is smaller than the default play speed of the target video information.
For example, when the playing of the target video information is about to reach the first playing time point, that is, the video frame corresponding to the first playing time point has not yet been played and the time difference between the time point corresponding to the current playing progress and the first playing time point is smaller than the third duration threshold, the user equipment determines that playing is about to reach the first playing time point. In the period from the time point corresponding to the current playing progress to the first playing time point, the user equipment gradually slows down the playing speed of the target video information (for example, the playing speed gradually diverges from the default playing speed, with the difference increasing over time). When the target video information has been played at this progressively slower rate to the first playing time point, the user equipment continues to play the target video information according to the first play mode.
In some embodiments, the first play mode further includes playing at a second play speed, wherein the second play speed is generated by at least one play multiplier. For example, when playing to the first playing time point, the user equipment plays the target video information in the order from the first playing time point to the second playing time point, whether the first playing time point is before the second playing time point (for example, the first playing time point is 0:22 and the second playing time point is 0:28) or after it (for example, the first playing time point is 0:22 and the second playing time point is 0:19, that is, reverse-order playing in this case). In the process from the first playing time point to the second playing time point, the playing speed of the target video information is the second playing speed. The second speed is not a fixed speed: it is generated from a playing multiple preset by the user equipment (for example, 0.25 times, 0.5 times, 0.75 times, 1.25 times, 1.5 times, 2 times, etc.) combined with the playing speed (for example, V1) at which the target video information is played at the first playing time point, and in this process the playing multiple may be any one preset by the user equipment. For example, in the process from the first playing time point to the second playing time point, the playing speed may change from slow to fast: V1 is first combined with low playing multiples (for example, 0.15 times and then 0.25 times), and the resulting speed is then gradually combined with high playing multiples (for example, 0.5 times and then 1.2 times, sequentially increasing). When the second playing time point is reached, the target video information is further played according to the second play mode.
In some embodiments, the second play mode further includes playing at a third play speed, wherein the third play speed is generated by at least one play multiplier. For example, if the first play time point is before the second play time point (for example, the first play time point is 0:22 and the second play time point is 0:28), the second play mode includes playing in reverse order from the second play time point to the first play time point; if the first play time point is after the second play time point (for example, the first play time point is 0:22 and the second play time point is 0:19), the second play mode includes playing in positive order from the second play time point to the first play time point. In the process of playing from the second play time point to the first play time point, the playing speed of the target video information is the third playing speed. The third speed is likewise not a fixed speed: it is generated from a playing multiple preset by the user equipment (for example, 0.25 times, 0.5 times, 0.75 times, 1.25 times, 1.5 times, 2 times, etc.) combined with the playing speed (for example, V2) at which the target video information is played at the second play time point, and the playing multiple may be any one preset by the user equipment. For example, in this process, the playing speed may also change from slow to fast: V2 is first combined with low playing multiples (for example, 0.15 times and then 0.25 times), and the resulting speed is then gradually combined with high playing multiples (for example, 0.5 times and then 1.2 times, sequentially increasing).
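The slow-to-fast speed generation from a boundary speed (V1 or V2) and a sequence of preset play multipliers can be sketched as below. This is a hypothetical illustration: the function name and the particular multiplier sequence are assumptions, not patent values.

```python
def speed_schedule(boundary_speed, multipliers):
    """Combine the speed at the segment boundary (e.g. V1 or V2) with each
    preset play multiplier in turn, yielding a non-fixed, slow-to-fast
    speed over the segment."""
    return [boundary_speed * m for m in multipliers]

# e.g. V1 = 1.0: first low multipliers (0.15x, 0.25x), then higher ones.
schedule = speed_schedule(1.0, [0.15, 0.25, 0.5, 1.2])
# schedule == [0.15, 0.25, 0.5, 1.2]
```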
Then, when the target video information is played again to the first playing time point, the user equipment is triggered again to continue playing the target video information according to the first playing mode; when the target video information is played to the second playing time point according to the first playing mode, the user equipment is triggered again to continue playing according to the second playing mode, and so on, so that the video content between the first playing time point and the second playing time point is continuously played in a loop.
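The alternating loop between the two play modes can be sketched as a sequence of playback segments. This is a minimal illustrative sketch; the function name, the time points, and the cycle count are hypothetical.

```python
def play_segments(first_point, second_point, cycles):
    """Return the sequence of (start, end) playback segments produced by
    alternating the first play mode (first -> second) and the second play
    mode (second -> first) for `cycles` round trips."""
    segments = []
    for _ in range(cycles):
        segments.append((first_point, second_point))  # first play mode
        segments.append((second_point, first_point))  # second play mode
    return segments

segments = play_segments(22, 28, cycles=2)
# segments == [(22, 28), (28, 22), (22, 28), (28, 22)]
```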
In some embodiments, the time difference information between the second play time point and the first play time point is greater than a second duration threshold. For example, a playing period exists between the second playing time point and the first playing time point whose duration is greater than the second duration threshold, so that there is enough time to fully realize the first playing mode and the second playing mode within that period. The second playing time point may be later or earlier than the first playing time point. When the second playing time point is later than the first playing time point and the user equipment starts to play the target video information, plays it to the first playing time point, and then plays it to the second playing time point according to the first playing mode, the video clip between the first playing time point and the second playing time point has not previously been played. When the second playing time point is earlier than the first playing time point and the user equipment starts to play the target video information, plays it to the first playing time point, and then plays it to the second playing time point according to the first playing mode, the video clip between the first playing time point and the second playing time point has already been played.
In some embodiments, the method further includes step S105 (not shown): in step S105, when the number of times the target video information has been played according to the second play mode is greater than a first number threshold and playing continues to the second play time point, the user equipment continues to play the target video information in the order from the second play time point to the termination play point. For example, when the target video information has been played multiple times in the first playing mode and the second playing mode between the first playing time point and the second playing time point, if the number of times of playing in the second playing mode is greater than the first number threshold, the user equipment determines that the special effect of the target video information has already been achieved and then plays the subsequent video content immediately, that is, plays from the second playing time point to the ending playing point of the target video information following the progress bar.
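The exit condition of step S105 can be sketched as a small decision function. This is a hypothetical sketch: the function name, the mode labels, and the threshold value are illustrative assumptions.

```python
FIRST_NUMBER_THRESHOLD = 3  # illustrative value for the first number threshold

def next_action(second_mode_count):
    """Decide what to do when playback reaches the second play time point,
    given how many times the second play mode has already run."""
    if second_mode_count > FIRST_NUMBER_THRESHOLD:
        return "continue_to_end"   # play on to the termination play point
    return "second_play_mode"      # loop back toward the first play point

action_mid_loop = next_action(2)   # threshold not yet exceeded -> keep looping
action_done = next_action(4)       # threshold exceeded -> exit the loop
# action_mid_loop == "second_play_mode", action_done == "continue_to_end"
```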
Fig. 3 is a flowchart of a method for playing video information according to another embodiment of the present application. The user equipment obtains a video shot by a user (for example, an action of a person jumping from a place) and, according to a play high point (for example, the progress point corresponding to the highest height the person jumps to) and a play low point (for example, a progress point corresponding to a certain height, randomly selected by the user equipment), presets the play speed from the play high point to the play low point (for example, a play speed curve controls the play speed). After the configuration is completed, the user equipment starts to play the video from the starting point; when the video is played to the high point, it is played in a loop between the high point and the low point at the preset play speed, so that a floating effect can be achieved.
Fig. 4 shows a user device for playing video information according to one embodiment of the present application, the user device comprising a first module 101, a second module 102, and a third module 103.
Specifically, the first module 101 is configured to obtain target video information, where the target video information includes a start play point and an end play point. For example, the target video information may be a short video (for example, the duration of the video is less than 5 min), or may be a normal video (for example, the duration of the video is not limited).
A second module 102, configured to determine a first play time point and a second play time point in the target video information. For example, the user equipment may automatically determine the first playing time point and the second playing time point according to the video content of the target video information, or determine the first playing time point and the second playing time point according to the user's custom mark when uploading the target video information.
And the third module 103 is configured to, in the process of playing the target video information, continue playing the target video information according to a first playing mode when playing to the first playing time point, and continue playing the target video information according to a second playing mode when playing to the second playing time point, where the first playing mode includes a playing sequence from the first playing time point to the second playing time point, and the second playing mode includes a playing sequence from the second playing time point to the first playing time point. For example, the user equipment starts playing from 0:00 following the progress bar of the target video information.
In some embodiments, the target video information has a duration greater than a first duration threshold. The related operations are the same as or similar to those of the embodiment shown in fig. 2, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the first module 101 is configured to acquire one or more candidate video information uploaded by a user, select one candidate video information from the one or more candidate video information, detect whether the candidate video information meets a preset condition, and if so, determine the candidate video information as the target video information. The related operations are the same as or similar to those of the embodiment shown in fig. 2, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the obtaining one or more candidate video information uploaded by the user, selecting a candidate video information from the one or more candidate video information, detecting whether the candidate video information meets a preset condition, and if yes, determining that the candidate video information is target video information includes:
and acquiring one or more candidate video information uploaded by a user, acquiring the arrangement sequence of the one or more candidate video information, traversing the one or more candidate video information according to the arrangement sequence in turn, detecting whether the candidate video information meets a preset condition in turn, until detecting that one candidate video information meets the preset condition, and determining the candidate video information as target video information. The related operations are the same as or similar to those of the embodiment shown in fig. 2, and thus are not described in detail herein, and are incorporated by reference.
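The in-order traversal of candidates until one meets the preset condition can be sketched as follows. This is an illustrative sketch: the function name and the tag-based condition are hypothetical stand-ins for the biological/motion/object checks named below.

```python
def select_target(candidates, meets_condition):
    """Traverse candidates in their arrangement order and return the first
    one satisfying the preset condition, or None if none qualifies."""
    for candidate in candidates:
        if meets_condition(candidate):
            return candidate
    return None

videos = [
    {"id": 1, "tags": ["pure_lecture"]},       # fails the condition
    {"id": 2, "tags": ["person", "motion"]},   # first qualifying candidate
]
target = select_target(videos, lambda v: "pure_lecture" not in v["tags"])
# target["id"] == 2
```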
In some embodiments, the preset conditions include at least any one of:
the video content comprises biological information;
the video content comprises motion trail change information;
the video content comprises object information;
the video content is non-pure lecture content.
the related operations are the same as or similar to those of the embodiment shown in fig. 2, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the time difference information between the second play time point and the first play time point is greater than a second duration threshold. The related operations are the same as or similar to those of the embodiment shown in fig. 2, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the determining the first play time point and the second play time point in the target video information includes:
and determining a first playing time point and a second playing time point according to the manual annotation information in the target video information. The related operations are the same as or similar to those of the embodiment shown in fig. 2, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the determining the first play time point and the second play time point in the target video information includes:
Acquiring one or more video frames in the target video information, selecting a video frame, and determining a time point corresponding to the video frame as a first playing time point if the distance between the biological position and the bottom in the video frame is greater than a first distance threshold;
and selecting a video frame, and determining a time point corresponding to the video frame as a second playing time point if the distance between the biological position and the bottom in the video frame is smaller than a second distance threshold, wherein the distance between the biological position and the bottom in the video frame corresponding to any time point from the first playing time point to the second playing time point is smaller than the first distance threshold and larger than the second distance threshold. The related operations are the same as or similar to those of the embodiment shown in fig. 2, and thus are not described in detail herein, and are incorporated by reference.
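The frame-based rule above can be sketched as below. This is a hypothetical sketch: the thresholds, distances, and function name are illustrative assumptions (distances stand in for a detected subject's height above the bottom of the frame).

```python
FIRST_DISTANCE_THRESHOLD = 200   # e.g. near the top of a jump (illustrative)
SECOND_DISTANCE_THRESHOLD = 50   # e.g. near the ground (illustrative)

def classify_frames(frames):
    """frames: list of (time_s, distance_from_bottom). The first frame whose
    distance exceeds the first threshold marks the first play time point;
    a later frame whose distance falls below the second threshold marks
    the second. Returns (first_point, second_point), either may be None."""
    first_point = second_point = None
    for t, distance in frames:
        if first_point is None and distance > FIRST_DISTANCE_THRESHOLD:
            first_point = t
        elif first_point is not None and distance < SECOND_DISTANCE_THRESHOLD:
            second_point = t
            break
    return first_point, second_point

result = classify_frames([(1.0, 80), (2.0, 230), (3.0, 120), (4.0, 40)])
# result == (2.0, 4.0)
```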
In some embodiments, the determining the first play time point and the second play time point in the target video information includes:
matching and inquiring the audio information corresponding to each time point in the target video information in a first streaming voice library, and determining the time point as a second playing time point if the audio information corresponding to the time point is successfully matched;
And determining the initial sounding time point of the person corresponding to the audio information as a first playing time point. The related operations are the same as or similar to those of the embodiment shown in fig. 2, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the user equipment further includes a fourth module 104 (not shown), the fourth module 104 being configured to: in the process of playing the target video information, if the time difference between the time point corresponding to the current playing progress of the target video information and the first playing time point is smaller than a third time threshold, play the target video information at a first playing speed until the first playing time point is reached, wherein the first playing speed is smaller than the default playing speed of the target video information. The related operations are the same as or similar to those of the embodiment shown in fig. 2, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the first play mode further includes playing at a second play speed, wherein the second play speed is generated by at least one play multiplier. The related operations are the same as or similar to those of the embodiment shown in fig. 2, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the second play mode further includes playing at a third play speed, wherein the third play speed is generated by at least one play multiplier. The related operations are the same as or similar to those of the embodiment shown in fig. 2, and thus are not described in detail herein, and are incorporated by reference.
In some embodiments, the user equipment further includes a fifth module 105 (not shown), the fifth module 105 being configured to: when the number of times the target video information has been played according to the second play mode is greater than a first number threshold and playing continues to the second play time point, continue playing the target video information in the order from the second play time point to the termination play point.
The present application also provides a computer readable storage medium storing computer code which, when executed, performs a method as claimed in any preceding claim.
The present application also provides a computer program product which, when executed by a computer device, performs a method as claimed in any preceding claim.
The present application also provides a computer device comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any preceding claim.
FIG. 5 illustrates an exemplary system that may be used to implement various embodiments described herein;
in some embodiments, as shown in fig. 5, the system 300 can function as any of the devices of the various described embodiments. In some embodiments, system 300 can include one or more computer-readable media (e.g., system memory or NVM/storage 320) having instructions and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement the modules to perform the actions described herein.
For one embodiment, the system control module 310 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 305 and/or any suitable device or component in communication with the system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
The system memory 315 may be used, for example, to load and store data and/or instructions for the system 300. For one embodiment, system memory 315 may include any suitable volatile memory, such as, for example, a suitable DRAM. In some embodiments, the system memory 315 may comprise a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 320 and communication interface(s) 325.
For example, NVM/storage 320 may be used to store data and/or instructions. NVM/storage 320 may include any suitable nonvolatile memory (e.g., flash memory) and/or may include any suitable nonvolatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 320 may include storage resources that are physically part of the device on which system 300 is installed or which may be accessed by the device without being part of the device. For example, NVM/storage 320 may be accessed over a network via communication interface(s) 325.
Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. The system 300 may wirelessly communicate with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with logic of one or more controllers (e.g., memory controller module 330) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be packaged together with logic of one or more controllers of the system control module 310 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die as logic of one or more controllers of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic of one or more controllers of the system control module 310 to form a system on chip (SoC).
In various embodiments, the system 300 may be, but is not limited to: a server, workstation, desktop computing device, or mobile computing device (e.g., laptop computing device, handheld computing device, tablet, netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, keyboards, Liquid Crystal Display (LCD) screens (including touch screen displays), non-volatile memory ports, multiple antennas, graphics chips, Application Specific Integrated Circuits (ASICs), and speakers.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, using Application Specific Integrated Circuits (ASIC), a general purpose computer or any other similar hardware device. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions as described above. Likewise, the software programs of the present application (including associated data structures) may be stored on a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. In addition, some steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
Furthermore, portions of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application by way of operation of the computer. Those skilled in the art will appreciate that the form of computer program instructions present in a computer readable medium includes, but is not limited to, source files, executable files, installation package files, etc., and accordingly, the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. Herein, a computer-readable medium may be any available computer-readable storage medium or communication medium that can be accessed by a computer.
Communication media includes media whereby a communication signal containing, for example, computer readable instructions, data structures, program modules, or other data, is transferred from one system to another. Communication media may include conductive transmission media such as electrical cables and wires (e.g., optical fibers, coaxial, etc.) and wireless (non-conductive transmission) media capable of transmitting energy waves, such as acoustic, electromagnetic, RF, microwave, and infrared. Computer readable instructions, data structures, program modules, or other data may be embodied as a modulated data signal, for example, in a wireless medium, such as a carrier wave or similar mechanism, such as that embodied as part of spread spectrum technology. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital or hybrid modulation techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media include, but are not limited to, volatile memory, such as random access memory (RAM, DRAM, SRAM); and nonvolatile memory such as flash memory, various read only memory (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memory (MRAM, feRAM); and magnetic and optical storage devices (hard disk, tape, CD, DVD); or other now known media or later developed computer-readable information/data that can be stored for use by a computer system.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to operate a method and/or a solution according to the embodiments of the present application as described above.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the apparatus claims can also be implemented by means of one unit or means in software or hardware. The terms first, second, etc. are used to denote a name, but not any particular order.

Claims (10)

1. A method for playing video information, applied to a user equipment, wherein the method comprises:
acquiring target video information, wherein the target video information comprises a starting play point and a stopping play point, the duration of the target video information is longer than a first time threshold, and if the duration of the video acquired by the current user equipment does not meet the first time threshold, the video cannot be used as the target video information;
determining a first playing time point and a second playing time point in the target video information, wherein the time difference information between the second playing time point and the first playing time point is larger than a second duration threshold value, and the second duration threshold value is used for realizing a first playing mode and a second playing mode;
continuing to play the target video information according to a first play mode when the target video information is played to the first play time point, and continuing to play the target video information according to a second play mode when the target video information is played to the second play time point, wherein the first play mode comprises a play sequence from the first play time point to the second play time point, and the second play mode comprises a play sequence from the second play time point to the first play time point;
Wherein the determining the first playing time point and the second playing time point in the target video information includes:
acquiring one or more video frames in the target video information, selecting a video frame, and determining a time point corresponding to the video frame as a first playing time point if the distance between the biological position and the bottom in the video frame is greater than a first distance threshold; selecting a video frame, and determining a time point corresponding to the video frame as a second playing time point if the distance between the biological position and the bottom in the video frame is smaller than a second distance threshold, wherein the distance between the biological position and the bottom in the video frame corresponding to any time point from the first playing time point to the second playing time point is smaller than the first distance threshold and larger than the second distance threshold; or,
matching and inquiring the audio information corresponding to each time point in the target video information in a first streaming voice library, and determining the time point as a second playing time point if the audio information corresponding to the time point is successfully matched; and determining the initial sounding time point of the person corresponding to the audio information as a first playing time point.
2. The method of claim 1, wherein the obtaining a target video information, wherein the target video information includes a start play point and an end play point includes:
and acquiring one or more candidate video information uploaded by a user, selecting one candidate video information from the one or more candidate video information, detecting whether the candidate video information meets a preset condition, and if so, determining the candidate video information as target video information.
3. The method of claim 2, wherein the obtaining one or more candidate video information uploaded by the user, selecting a candidate video information from the one or more candidate video information, detecting whether the candidate video information meets a preset condition, and if yes, determining that the candidate video information is target video information, includes:
and acquiring one or more candidate video information uploaded by a user, acquiring the arrangement sequence of the one or more candidate video information, traversing the one or more candidate video information according to the arrangement sequence in turn, detecting whether the candidate video information meets a preset condition in turn, until detecting that one candidate video information meets the preset condition, and determining the candidate video information as target video information.
4. A method according to claim 3, wherein the preset conditions comprise at least any one of:
the video content comprises biological information;
the video content comprises motion trail change information;
the video content comprises object information;
the video content is not purely lecture content.
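Since the preset condition of claim 4 holds when at least any one of the listed properties is present, it can be sketched as a simple disjunction. The dict keys below are hypothetical flags standing in for the content-analysis results:

```python
def meets_preset_condition(video):
    """video: dict of hypothetical content-analysis flags.

    True if the content contains biological information, motion trail
    change information, or object information, or is not purely
    lecture content — i.e. at least any one of the listed conditions.
    """
    return bool(
        video.get("has_biological_info", False)
        or video.get("has_motion_trail_change", False)
        or video.get("has_object_info", False)
        or not video.get("is_pure_lecture", True)
    )
```

A candidate carrying only `{"has_object_info": True}` passes, while a purely-lecture candidate with no other flags does not.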
5. The method of claim 1, wherein the method further comprises:
in the process of playing the target video information, if the time difference between the time point corresponding to the current playing progress of the target video information and the first playing time point is smaller than a third time threshold, playing the target video information at a first playing speed until the first playing time point is reached, wherein the first playing speed is lower than the default playing speed of the target video information.
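The slow-down of claim 5 can be expressed as a speed-selection rule evaluated against the current progress. The concrete speeds (1.0x default, 0.5x first playing speed) are illustrative assumptions, not values from the claim:

```python
def playback_speed(current_time, first_play_time, third_time_threshold,
                   default_speed=1.0, first_play_speed=0.5):
    """Return the playing speed for the current progress.

    Within third_time_threshold before the first playing time point,
    ease in at the slower first playing speed; otherwise play at the
    default speed.
    """
    if 0 <= first_play_time - current_time < third_time_threshold:
        return first_play_speed
    return default_speed
```

With a threshold of 3 seconds and the first playing time point at 10 s, progress at 8 s yields the slow speed, while progress at 2 s (or past the point, at 11 s) yields the default.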
6. The method of claim 1, wherein the first play mode further comprises playing at a second play speed, wherein the second play speed is generated by at least one play multiplier.
7. The method of claim 1, wherein the second play mode further comprises playing at a third play speed, wherein the third play speed is generated by at least one play multiplier.
8. The method of claim 1, wherein the method further comprises:
when the number of times the target video information has been played in the second play mode is greater than a first number threshold, upon the target video information being played to the second playing time point, continuing to play the target video information in sequence from the second playing time point to the ending play point.
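Claim 8 can be read as a loop-exit rule: the segment keeps looping in the second play mode until its play count exceeds the first number threshold, after which playback continues in sequence toward the ending play point. A sketch under that reading, with a hypothetical position-decision function:

```python
def next_position(current_time, first_play_time, second_play_time,
                  loop_count, first_number_threshold):
    """Decide where playback goes when it reaches the second playing
    time point: jump back and loop while the count has not yet exceeded
    the threshold, otherwise continue in sequence."""
    if current_time >= second_play_time and loop_count <= first_number_threshold:
        return first_play_time  # loop the segment again
    return current_time  # continue toward the ending play point
```

For example, at the second playing time point with 2 completed loops against a threshold of 3, playback jumps back; with 4 completed loops it continues on.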
9. An apparatus for playing video information, wherein the apparatus comprises:
a processor; and
a memory arranged to store computer executable instructions which, when executed, cause the processor to perform the method of any one of claims 1 to 8.
10. A computer readable medium storing instructions that, when executed, cause a computer to perform the operations of the method of any one of claims 1 to 8.
CN202011582963.9A 2020-12-28 2020-12-28 Method and equipment for playing video information Active CN112738635B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011582963.9A CN112738635B (en) 2020-12-28 2020-12-28 Method and equipment for playing video information
PCT/CN2021/126321 WO2022142642A1 (en) 2020-12-28 2021-10-26 Method and device for playing back video information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011582963.9A CN112738635B (en) 2020-12-28 2020-12-28 Method and equipment for playing video information

Publications (2)

Publication Number Publication Date
CN112738635A CN112738635A (en) 2021-04-30
CN112738635B true CN112738635B (en) 2023-05-05

Family

ID=75606960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011582963.9A Active CN112738635B (en) 2020-12-28 2020-12-28 Method and equipment for playing video information

Country Status (2)

Country Link
CN (1) CN112738635B (en)
WO (1) WO2022142642A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112738635B (en) * 2020-12-28 2023-05-05 上海掌门科技有限公司 Method and equipment for playing video information

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005062614A1 (en) * 2003-12-19 2005-07-07 Mitsubishi Denki Kabushiki Kaisha Video data processing method and video data processing device
CN111615002A (en) * 2020-04-30 2020-09-01 腾讯科技(深圳)有限公司 Video background playing control method, device and system and electronic equipment
CN111866549A (en) * 2019-04-29 2020-10-30 腾讯科技(深圳)有限公司 Video processing method and device, terminal and storage medium
CN111901615A (en) * 2020-06-28 2020-11-06 北京百度网讯科技有限公司 Live video playing method and device
CN112040280A (en) * 2020-08-20 2020-12-04 连尚(新昌)网络科技有限公司 Method and equipment for providing video information

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI482486B (en) * 2013-01-30 2015-04-21 Wistron Corp Preview and playback method of video streams and system thereof
CN104144318A (en) * 2013-05-08 2014-11-12 北京航天长峰科技工业集团有限公司 Video inverted-order playback and rapid positioning method based on HIKVISION DVR
US9792957B2 (en) * 2014-10-08 2017-10-17 JBF Interlude 2009 LTD Systems and methods for dynamic video bookmarking
US10582265B2 (en) * 2015-04-30 2020-03-03 JBF Interlude 2009 LTD Systems and methods for nonlinear video playback using linear real-time video players
CN107438204B (en) * 2017-07-26 2019-12-17 维沃移动通信有限公司 Method for circularly playing media file and mobile terminal
CN107509110A (en) * 2017-08-03 2017-12-22 乐蜜有限公司 A kind of loop play method and device of video file
CN107801100A (en) * 2017-09-27 2018-03-13 北京潘达互娱科技有限公司 A kind of video location player method and device
CN110475148A (en) * 2019-08-13 2019-11-19 北京奇艺世纪科技有限公司 Video broadcasting method, device and electronic equipment
CN111010619A (en) * 2019-12-05 2020-04-14 北京奇艺世纪科技有限公司 Method, apparatus, computer device and storage medium for processing short video data
CN111031393A (en) * 2019-12-26 2020-04-17 广州酷狗计算机科技有限公司 Video playing method, device, terminal and storage medium
CN111414147A (en) * 2020-03-26 2020-07-14 广州酷狗计算机科技有限公司 Song playing method, device, terminal and storage medium
CN111526426A (en) * 2020-04-24 2020-08-11 Oppo广东移动通信有限公司 Video playing method, terminal equipment and storage medium
CN111859009A (en) * 2020-08-20 2020-10-30 连尚(新昌)网络科技有限公司 Method and equipment for providing audio information
CN112738635B (en) * 2020-12-28 2023-05-05 上海掌门科技有限公司 Method and equipment for playing video information

Also Published As

Publication number Publication date
CN112738635A (en) 2021-04-30
WO2022142642A1 (en) 2022-07-07

Similar Documents

Publication Publication Date Title
CN108462895A (en) Sound effect treatment method, device and machine readable media
CN112822431B (en) Method and equipment for private audio and video call
US20170195614A1 (en) Method and electronic device for playing video
CN105141587A (en) Virtual doll interaction method and device
CN112685121B (en) Method and equipment for presenting session entry
CN112040280A (en) Method and equipment for providing video information
CN112214678A (en) Method and device for recommending short video information
US20230307004A1 (en) Audio data processing method and apparatus, and device and storage medium
CN112738635B (en) Method and equipment for playing video information
CN104464743B (en) Method for playing background music in voice chat room and mobile terminal
WO2018014518A1 (en) Photographing processing method and apparatus
CN113965665B (en) Method and equipment for determining virtual live image
CN113490063B (en) Method, device, medium and program product for live interaction
CN114143568A (en) Method and equipment for determining augmented reality live image
CN112818719A (en) Method and device for identifying two-dimensional code
CN113655927A (en) Interface interaction method and device
CN109547830B (en) Method and device for synchronous playing of multiple virtual reality devices
CN112822419A (en) Method and equipment for generating video information
JP6865259B2 (en) Interactive music request method, device, terminal, storage medium and program
CN112261337A (en) Method and equipment for playing voice information in multi-person voice
CN112261236B (en) Method and equipment for mute processing in multi-person voice
WO2017087641A1 (en) Recognition of interesting events in immersive video
CN113329237B (en) Method and equipment for presenting event label information
WO2022179530A1 (en) Video dubbing method, related device, and computer readable storage medium
CN115278333B (en) Method, device, medium and program product for playing video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant