CN113139090A - Interaction method, interaction device, electronic equipment and computer-readable storage medium - Google Patents
- Publication number: CN113139090A
- Application number: CN202110412699.2A
- Authority: CN (China)
- Prior art keywords: media content; video media; video; time; playing
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F16/739 — Information retrieval of video data; presentation of query results in the form of a video summary, e.g. a video sequence, a composite still image or synthesized frames
- G06F16/7328 — Information retrieval of video data; query by example, e.g. a complete video frame or video sequence
- G06F16/7847 — Information retrieval of video data; retrieval characterised by metadata automatically derived from the content, using low-level visual features of the video content
- G06F3/0484 — Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F9/451 — Arrangements for program control; execution arrangements for user interfaces
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Computational Linguistics (AREA)
- Human Computer Interaction (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Library & Information Science (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Embodiments of the present disclosure provide an interaction method, an interaction apparatus, an electronic device, and a computer-readable storage medium. The interaction method includes: playing first video media content; in response to detecting a picture capture operation at a first time, acquiring a play screenshot corresponding to the first time; and obtaining second video media content corresponding to the first time, where the second video media content includes a portion of the first video media content. Through the technical solution of the embodiments of the present disclosure, rich and diverse multimedia content can be obtained in a flexible manner while video media content is being played.
Description
Technical Field
The present disclosure relates to the field of multimedia content processing, and in particular, to an interaction method, an interaction apparatus, an electronic device, and a computer-readable storage medium.
Background
With advances in network technology and codec technology, the market for distributing video media content has grown rapidly, and users can browse a wide variety of video media content through terminal devices anytime and anywhere.
Users obtain rich and varied video media content through network distribution and can take a play screenshot via a picture capture operation while browsing that content on a terminal device. If a user also wants a clip of the browsed video media content, however, it can currently be obtained only in a very cumbersome way, for example by saving the video media content and then importing it into a dedicated application to cut out the clip. This workflow is cumbersome even for professionals, so the industry faces the technical problem that users cannot conveniently obtain clips of the video media content they browse.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In order to solve the above technical problem, the embodiments of the present disclosure propose the following technical solutions.
In a first aspect, an embodiment of the present disclosure provides an interaction method, including: playing first video media content; in response to detecting a picture capture operation at a first time, acquiring a play screenshot corresponding to the first time; and obtaining second video media content corresponding to the first time, where the second video media content includes a portion of the first video media content.
Further, in response to detecting the picture capture operation at the first time, the method further includes: displaying a video content acquisition control; and the obtaining of the second video media content corresponding to the first time includes: in response to detecting a trigger operation on the video content acquisition control, acquiring the second video media content corresponding to the first time.
Further, the video content acquisition control includes prompt information indicating that the second video media content can be obtained after the video content acquisition control is triggered.
Further, the second video media content corresponding to the first time includes: video media content before the first time and having a first duration, video media content with the first time as an end point and having the first duration, video media content after the first time and having the first duration, or video media content with the first time as a start point and having the first duration.
Further, the second video media content comprises silent video media content.
Further, after obtaining the second video media content corresponding to the first time, the method further comprises: storing the second video media content in a preset storage position; and/or playing the second video media content.
Further, after obtaining the second video media content corresponding to the first time, the method further comprises: and generating third video media content according to the playing screenshot and the second video media content.
Further, the generating a third video media content according to the play screenshot and the second video media content includes: applying a first effect to the play screen shot to generate the third video media content, and/or applying a second effect to the second video media content to generate the third video media content.
Further, after generating the third video media content, the method further comprises: storing the third video media content in a preset storage position; and/or playing the third video media content.
In a second aspect, an embodiment of the present disclosure provides an interactive apparatus, including a playing module and a processing module, where: the playing module is used for playing the first video media content; the processing module is used for responding to the detection of the picture capturing operation at the first moment and acquiring a playing screenshot corresponding to the first moment; the processing module is further configured to obtain a second video media content corresponding to the first time, where the second video media content includes a portion of the first video media content.
Further, the interaction device further comprises a display module, and the display module is used for displaying the video content acquisition control in response to detecting the picture capturing operation at the first moment; the processing module is used for responding to the detection of the triggering operation of the video content acquisition control, and acquiring second video media content corresponding to the first moment.
Further, the video content acquisition control includes prompt information indicating that the second video media content can be obtained after the video content acquisition control is triggered.
Further, the second video media content corresponding to the first time includes: video media content before the first time and having a first duration, video media content with the first time as an end point and having the first duration, video media content after the first time and having the first duration, or video media content with the first time as a start point and having the first duration.
Further, the second video media content comprises silent video media content.
Further, the interaction device further comprises a storage module, and after second video media content corresponding to the first moment is acquired, the storage module is used for storing the second video media content in a preset storage position; and/or the playing module is further configured to play the second video media content.
Further, after obtaining a second video media content corresponding to the first moment, the processing module is further configured to generate a third video media content according to the play screenshot and the second video media content.
Further, the generating a third video media content according to the play screenshot and the second video media content includes: applying a first effect to the play screen shot to generate the third video media content, and/or applying a second effect to the second video media content to generate the third video media content.
Further, the interactive device further comprises a storage module, and after the third video media content is generated, the storage module is further configured to store the third video media content in a preset storage location; and/or the playing module is further configured to play the third video media content.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a memory for storing computer readable instructions; and a processor configured to execute the computer readable instructions to cause the electronic device to implement the method according to any of the above first aspects.
In a fourth aspect, the disclosed embodiments provide a non-transitory computer-readable storage medium storing computer-readable instructions which, when executed by a computer, cause the computer to implement the method of any one of the above first aspects.
The foregoing is a summary of the present disclosure. To promote a clear understanding of its technical means, note that the present disclosure may be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flowchart of an interaction method provided in an embodiment of the present disclosure;
Fig. 2 is a schematic structural diagram of an embodiment of an interaction apparatus provided in an embodiment of the present disclosure;
Fig. 3 is a schematic view of an interactive interface displayed by an interaction apparatus according to an embodiment of the present disclosure;
Fig. 4 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand that they mean "one or more" unless the context clearly indicates otherwise.
Fig. 1 is a flowchart of an embodiment of an interaction method provided in an embodiment of the present disclosure. The interaction method provided in this embodiment may be executed by an interaction apparatus, which may be implemented as software or as a combination of software and hardware, and which may be integrated in a device in an interaction system, such as a terminal device. As shown in Fig. 1, the method includes the following steps:
step S101: playing the first video media content;
in step S101, a first video media content, which includes, for example, a sound video or a silent video, may be played through the interactive device. As an example, for example, the interactive apparatus is installed with an Application (APP), and when the APP is executed, the first video media content may be played in an operation interface of the APP. One exemplary scenario includes rendering a live room scenario through the application, through which a user operating the interactive apparatus may enter the live room as a host user or a watch user, and watch a video stream of the live room (as will be understood by those skilled in the art, a video stream in the live room may be considered as video media content); another exemplary scenario includes presenting a video media content playback scenario through the application program, through which a user operating the interactive apparatus may select video media content of interest to himself and view the selected video media content.
Step S102: responding to the detection of a picture capturing operation at a first moment, and acquiring a play screenshot corresponding to the first moment;
in step S102, the interaction apparatus may obtain, in response to detecting a screen capture operation at a first time during playing of the first video media content, a play screenshot corresponding to the first time. As described above, for example, in the process that the user plays the first video media content through the interaction device, the user may trigger a screen capture operation in a preset manner, and the interaction device may detect the screen capture operation triggered by the user at a first time, and then obtain a play screenshot in response to the screen capture operation, where the play screenshot corresponds to the first time. As will be understood by those skilled in the art, the captured screenshot is a screenshot obtained based on the first video media content played by the interactive device, for example, the first video media content played by the interactive device includes a series of consecutive video frames, and the screenshot may include one of the consecutive video frames or a part of the one of the consecutive video frames (for example, an upper half and a lower half of the one of the consecutive video frames or a part obtained by cutting the one of the consecutive video frames according to a preset length and width). Those skilled in the art can also understand that the video screenshot is a played screenshot obtained by the interaction device performing a screen capture operation on the played first video media content in response to detecting the screen capture operation at a first time, so that the played screenshot and the first time have a corresponding relationship. 
Consider the following scenario: the interaction apparatus plays a certain video frame of the first video media content at the first time, and the play screenshot includes that frame or a part of it; the play screenshot then has the above-mentioned correspondence with the first time. The frame played at the first time may be the frame being displayed at that moment, or the first frame to be played after it. Moreover, from an implementation perspective, the play screenshot may be determined based on one or more frames played before the first time or one or more frames played after it; the embodiments of the present disclosure place no particular limitation on this. As an example, if the interaction apparatus plays or displays a first video frame of the first video media content at the first time, the play screenshot may include that frame or a part of it. As another example, after detecting a screen capture operation at the first time, the apparatus may take as the play screenshot the first frame (or a part of the first frame) to be played or displayed after the first time. Note that the embodiments of the present disclosure do not limit how the screen capture operation is triggered, how it is detected, or how the play screenshot is obtained in response during playback of the first video media content; the obtained play screenshot may also be one that has undergone special-effect processing such as beautification.
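As a sketch of the timing logic above — resolving which frame corresponds to the first time — the following Python function selects either the frame being played at the capture time or the first frame to be played after it. The function name, the sorted-timestamp representation of the frame sequence, and the `mode` parameter are illustrative assumptions, not part of the disclosure.

```python
from bisect import bisect_right

def frame_index_at(frame_times, capture_time, mode="current"):
    """Return the index of the frame used for the play screenshot.

    frame_times: sorted presentation timestamps (seconds) of the frames.
    mode="current": the frame being played at capture_time;
    mode="next": the first frame to be played after capture_time.
    """
    i = bisect_right(frame_times, capture_time)
    if mode == "current":
        # last frame whose timestamp is <= capture_time
        return max(i - 1, 0)
    # first frame strictly after capture_time, clamped to the last frame
    return min(i, len(frame_times) - 1)
```

Either choice preserves the correspondence between the play screenshot and the first time; which one an implementation uses is, as the text notes, not limited by the disclosure.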
Step S103: obtaining second video media content corresponding to the first time, wherein the second video media content comprises a part of the first video media content.
In step S103, the interaction apparatus obtains second video media content corresponding to the first time, where the second video media content includes a portion of the first video media content; the second video media content may be, for example, a video with sound or a silent video. As mentioned above, the first video media content played by the interaction apparatus comprises a series of consecutive video frames, and the second video media content includes a portion of them. For example, if the first video media content includes the 1st through Qth video frames, the second video media content may include the Mth through Nth frames, or only the Mth and Nth frames; in either case it may instead include a part of each such frame, for example the upper half of each frame, the lower half of each frame, or a part of each frame cropped to a preset length and width. Here Q, M, and N are natural numbers with M < N < Q.
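A minimal sketch of the frame-range selection just described, assuming the frames are held in a list and indexed 1..Q as in the text (the function name and the error handling are illustrative):

```python
def clip_frames(frames, m, n):
    """Return the Mth through Nth video frames of the first content
    (1-indexed, with M < N < Q as in the description above)."""
    q = len(frames)
    if not (1 <= m < n < q):
        raise ValueError("expected 1 <= M < N < Q")
    # convert the 1-indexed inclusive range [M, N] to a Python slice
    return frames[m - 1:n]
```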
As an alternative embodiment, the second video media content corresponding to the first time includes: video media content before the first time and having a first duration; video media content with the first time as an end point and having the first duration; video media content after the first time and having the first duration; video media content with the first time as a start point and having the first duration; or video media content having the first duration, with a time point before the first time as a start point and a time point after the first time as an end point. For example, if the second video media content is the content before the first time with the first duration: referring to the foregoing example, the interaction apparatus plays the Qth frame of the first video media content at the first time, and the second media content includes the Mth through Nth frames and has the first duration. If it ends at the first time with the first duration: the apparatus plays the Qth frame at the first time, and the second media content includes the Nth through Qth frames and has the first duration. If it is after the first time with the first duration: the apparatus plays up to the Mth frame at the first time, and the second media content includes the Nth through Qth frames and has the first duration. If it starts at the first time with the first duration: the apparatus plays the Mth frame at the first time, and the second media content includes the Mth through Nth frames and has the first duration. If it has the first duration with a time point before the first time as a start point and a time point after the first time as an end point: the apparatus plays the Nth frame at the first time, and the second media content includes the Mth through Qth frames and has the first duration.
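Three of the variants above can be sketched as a simple time-window computation. The variant names below are illustrative, and the symmetric split in the "spanning" case is an assumption — the disclosure only requires that the window start before the first time and end after it.

```python
def clip_interval(first_time, duration, variant):
    """Compute the [start, end] window (seconds) of the second video
    media content relative to the capture time (illustrative names)."""
    if variant == "ending_at_first_time":    # first time as end point
        return (first_time - duration, first_time)
    if variant == "starting_at_first_time":  # first time as start point
        return (first_time, first_time + duration)
    if variant == "spanning_first_time":     # starts before, ends after
        half = duration / 2.0                # assumed symmetric split
        return (first_time - half, first_time + half)
    raise ValueError(f"unknown variant: {variant}")
```

The later "acquisition is just 10 seconds" prompt example corresponds to `clip_interval(t, 10.0, "ending_at_first_time")`.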
Through this implementation, a user can obtain both the play screenshot and the second video media content from a picture capture operation while the first video media content is playing on the interaction apparatus, thereby flexibly obtaining rich and diverse multimedia content and achieving a better user experience.
In an optional embodiment, in response to detecting the screen capture operation at the first time, the method further includes: displaying a video content acquisition control; and the obtaining of the second video media content corresponding to the first time includes: in response to detecting a trigger operation on the video content acquisition control, acquiring the second video media content corresponding to the first time. For example, when the user operating the interaction apparatus triggers the video content acquisition control, the apparatus detects the trigger operation and, in response, acquires the second video media content corresponding to the first time. As an optional implementation, in response to detecting a screen capture operation at the first time, the interaction apparatus may display a first interface that includes the play screenshot as well as the video content acquisition control; for example, the first interface occupies a lower area of the apparatus's entire display interface and may further include other controls, such as a sharing control for sharing the play screenshot.
Optionally, the video content obtaining control includes prompt information, and the prompt information prompts that the second video media content can be obtained after the video content obtaining control is triggered. For example, on a display screen of the interactive device, the prompt information is displayed on the video content acquisition control, where the prompt information may be, for example, a text message "acquisition is just 10 seconds", which means that after the play screenshot corresponding to the first time is acquired in step S102, a user may perform a trigger operation on the video content acquisition control according to the prompt information, so that the interactive device acquires, in response to detecting the trigger operation on the video content acquisition control, second video media content with a total length of 10 seconds corresponding to the first time (for example, the acquired second media content includes video media content with a time 10 seconds before the first time as a starting point, the first time as an end point, and the total length of 10 seconds in the first video media content).
In yet another alternative embodiment, after obtaining the second video media content corresponding to the first time, the method further comprises: storing the second video media content in a preset storage position; and/or playing the second video media content. For example, after acquiring the second video media content corresponding to the first moment, the interactive device may store the second media content in a preset location, such as in an album, in a draft box, or in a network location; the interactive device may play the second media content such that a user operating the interactive device views the obtained second media content. Optionally, after obtaining the second video media content, the interaction device may provide and display a control corresponding to a storage function, and/or provide and display a control corresponding to a play function, and in response to detecting a trigger operation on the control, perform the step of storing the second video media content and/or perform the step of playing the second video media content.
In another optional embodiment, after obtaining the second video media content corresponding to the first time, the method further comprises: and generating third video media content according to the playing screenshot and the second video media content. For example, after acquiring the second video media content corresponding to the first time, the interaction device may combine the second video media content with the play screenshot acquired in step S102 to generate a third video media content, for example, add the play screenshot as an image frame to the beginning of the second video media content (i.e., the play screenshot is the first image frame of the third video media content), the end (i.e., the play screenshot is the last image frame of the third video media content), or the middle part of the second video media content.
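The combination step just described — inserting the play screenshot as an image frame at the beginning, end, or middle of the second content — can be sketched as follows. The function and parameter names are illustrative, and a real implementation would also handle frame timing and re-encoding.

```python
def compose_third_content(clip, screenshot, position="start"):
    """Insert the play screenshot into the second content's frame list
    to form the third video media content (illustrative sketch)."""
    if position == "start":   # screenshot becomes the first image frame
        return [screenshot] + clip
    if position == "end":     # screenshot becomes the last image frame
        return clip + [screenshot]
    # otherwise insert into the middle part of the second content
    mid = len(clip) // 2
    return clip[:mid] + [screenshot] + clip[mid:]
```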
Optionally, generating the third video media content according to the play screenshot and the second video media content includes: applying a first effect to the play screenshot to generate the third video media content, and/or applying a second effect to the second video media content to generate the third video media content. For example, the interaction apparatus may generate, from the play screenshot, a series of video frames to which a beauty effect or a motion effect is applied, and add the series of video frames at the beginning of the second video media content (so that the play screenshot serves as the first image frame of the third video media content), at the end (so that it serves as the last image frame), or in the middle, to generate the third video media content. As another example, the interaction apparatus may apply a preset special effect to the second video media content, and combine the second video media content to which the preset special effect is applied with the play screenshot to generate the third video media content. The embodiments of the present disclosure do not limit the other multimedia content processing manners by which the third video media content may be generated from the play screenshot and the second video media content; various processing manners may be applied.
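One way to turn the single play screenshot into "a series of video frames to which an effect is applied" is a simple fade-in, sketched below. The brightness-ramp effect and the flat pixel-list representation are illustrative assumptions; any beauty or motion effect pipeline could be substituted.

```python
def fade_in_frames(screenshot_pixels, n_frames=5):
    """Generate n_frames versions of the screenshot whose brightness ramps
    linearly from 1/n_frames up to full brightness, approximating a simple
    motion effect built from a single still image."""
    frames = []
    for i in range(1, n_frames + 1):
        factor = i / n_frames
        # Scale every pixel value, clamping to the 8-bit maximum.
        frames.append([min(255, int(p * factor)) for p in screenshot_pixels])
    return frames
```

The resulting frame list could then be placed at the beginning, end, or middle of the clip's frames, as described above.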
Optionally, after generating the third video media content, the method further includes: storing the third video media content in a preset storage location; and/or playing the third video media content. For example, after generating the third video media content, the interaction apparatus may store it in a preset location, such as an album, a draft box, or a network location; the interaction apparatus may also play the third video media content so that a user operating the apparatus can view it. Optionally, after generating the third video media content, the interaction apparatus may display a control corresponding to a storage function and/or a control corresponding to a play function, and, in response to detecting a trigger operation on such a control, perform the step of storing the third video media content and/or the step of playing the third video media content.
Fig. 2 is a schematic structural diagram of an interaction apparatus according to an embodiment of the present disclosure. As shown in fig. 2, the interactive apparatus 200 includes a playing module 201 and a processing module 202, wherein:
the playing module 201 is configured to play a first video media content;
the processing module 202 is configured to, in response to detection of a screen capture operation at a first time, obtain a play screenshot corresponding to the first time;
the processing module 202 is further configured to obtain a second video media content corresponding to the first time, where the second video media content includes a part of the first video media content.
The apparatus shown in fig. 2 can perform the method of the embodiment shown in fig. 1; for any part of this embodiment not described in detail, reference may be made to the related description of the embodiment shown in fig. 1. For the implementation process and technical effect of this technical solution, refer to the description of the embodiment shown in fig. 1, which is not repeated here.
Fig. 3 is a schematic diagram of an embodiment of a display interface provided by the interaction apparatus shown in fig. 2. Referring to the foregoing example, the interaction apparatus may be applied in a live room scene: an application program running on the apparatus presents the live room, and a user operating the apparatus is, for example, a watching user of the live room. As shown in fig. 3, through the application program the watching user can watch, in a display interface 300 on the screen of the apparatus, a video stream 301 of the anchor user of the live room pushed by a server (i.e., the first video media content played in step S101). A comment area 302 associated with the live room is also displayed in fig. 3 and contains comment information sent by relevant users of the live room. While watching the anchor's video stream, the watching user triggers a screen capture operation on the video stream 301; in response to detecting the screen capture operation at a first time, the interaction apparatus acquires a play screenshot 303 corresponding to the first time, and optionally displays the acquired play screenshot 303 in the interactive interface shown in fig. 3. Those skilled in the art will appreciate that the acquired play screenshot may include only the image content of one or more image frames of the first video media content; in the live room scene described above, the play screenshot 303 may include the image of one image frame of the anchor user's video stream 301, without including the content of the comment area.
Optionally, in response to detecting the screen capture operation triggered by the watching user at the first time, the interaction apparatus further displays a video content acquisition control 304, which includes the text information "download last 10 seconds". The watching user may perform a trigger operation on the video content acquisition control 304; in response to detecting this trigger operation, the interaction apparatus acquires the second video media content corresponding to the first time, for example a video stream of the anchor user with a total length of 10 seconds immediately before the first time, and stores it in a draft box of the application program. In this way, while the first video media content is playing, a user can obtain both a play screenshot and second video media content based on a single screen capture operation, flexibly acquiring rich and varied multimedia content and improving the user experience.
It should be noted that the embodiments of the present disclosure do not limit the specific implementation manner of acquiring the second video media content; any implementation manner may be applied. As an optional embodiment provided by the present disclosure, in a live room scene, an anchor user or a watching user uses the interaction apparatus to play the video stream of the anchor user in the live room, that is, the first video media content. The interaction apparatus acquires a play screenshot corresponding to a first time in response to detecting a screen capture operation at the first time, and acquires the second video media content corresponding to the first time. The interaction apparatus may acquire the second video media content as follows: the interaction apparatus sends an acquisition request for the second video media content to a server, the acquisition request indicating the first time; in response to the acquisition request, the server generates download information for the second video media content according to the first time and the duration of the second video media content, the download information including, for example, a URL (uniform resource locator) corresponding to the second video media content; the interaction apparatus receives the download information from the server and downloads the second video media content from the server according to the download information.
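The request/response exchange described in this embodiment can be sketched as follows. The JSON wire format, field names, and URL layout are hypothetical; the disclosure only requires that the request indicate the first time and that the server answer with download information such as a URL.

```python
import json

def build_acquisition_request(first_time: float, duration: float = 10.0) -> str:
    # Client side: the acquisition request indicates the first time
    # (and, in this sketch, the desired clip duration).
    return json.dumps({"type": "clip_request",
                       "first_time": first_time,
                       "duration": duration})

def handle_acquisition_request(raw_request: str, base_url: str) -> dict:
    # Server side: derive the clip's time window from the indicated first
    # time and the clip duration, then answer with download information
    # (here, a URL pointing at the clip).
    request = json.loads(raw_request)
    start = max(0.0, request["first_time"] - request["duration"])
    url = f"{base_url}/clips?start={start}&end={request['first_time']}"
    return {"download_url": url}
```

The client would then fetch the returned URL to download the second video media content.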
It can be understood that, in a live room scene, the server maintains a correspondence between the image frames of the anchor user's video stream (that is, the first video media content) and their playing times. After receiving the acquisition request from the interaction apparatus, the server determines a start time and an end time of the second video media content according to the first time indicated by the acquisition request and the first duration of the second video media content, and then, according to the correspondence, determines the download information corresponding to the image frames of the first video media content between the start time and the end time, that is, the download information of the second video media content.
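Given the maintained frame-to-playing-time correspondence, the server-side lookup can be sketched as a window query over sorted timestamps. This is a hypothetical illustration; how the correspondence is actually stored and indexed is not specified by the disclosure.

```python
import bisect

def frames_in_window(frame_times, first_time, first_duration):
    """Return indices of the image frames whose playing times fall between
    the clip's start time (first_time - first_duration, clamped to 0) and
    its end time (first_time). frame_times must be sorted ascending."""
    start = max(0.0, first_time - first_duration)
    lo = bisect.bisect_left(frame_times, start)   # first frame at/after start
    hi = bisect.bisect_right(frame_times, first_time)  # first frame after end
    return list(range(lo, hi))
```

The binary-search lookup keeps the query logarithmic in the number of buffered frames, which matters for long-running live streams.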
Referring now to FIG. 4, a block diagram of an electronic device 400 suitable for implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), and a vehicle terminal (e.g., a car navigation terminal), as well as stationary terminals such as a digital TV and a desktop computer. The electronic device shown in fig. 4 is only an example, and should not limit the functions and scope of use of the embodiments of the present disclosure in any way.
As shown in fig. 4, the electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a Read-Only Memory (ROM) 402 or a program loaded from a storage device 408 into a Random Access Memory (RAM) 403. The RAM 403 also stores various programs and data necessary for the operation of the electronic device 400. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a communication line 404. An input/output (I/O) interface 405 is also connected to the communication line 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or from the storage device 408, or from the ROM 402. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 401.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: the interaction method in the above embodiment is performed.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the interaction method of any one of the preceding first aspects.
According to one or more embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the interaction method of any one of the preceding first aspects.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with features having similar functions disclosed in (but not limited to) the present disclosure.
Claims (12)
1. An interaction method, comprising:
playing the first video media content;
in response to detecting a screen capture operation at a first time, acquiring a play screenshot corresponding to the first time;
obtaining second video media content corresponding to the first time, wherein the second video media content comprises a part of the first video media content.
2. The interaction method of claim 1, wherein in response to detecting the screen capture operation at the first time, the method further comprises:
displaying a video content acquisition control;
the obtaining of the second video media content corresponding to the first time includes:
acquiring the second video media content corresponding to the first time in response to detecting a trigger operation on the video content acquisition control.
3. The interaction method according to claim 2, wherein the video content acquisition control includes a prompt message, and the prompt message prompts that the second video media content can be acquired after the trigger operation is performed on the video content acquisition control.
4. The interaction method according to any one of claims 1 to 3, wherein the second video media content corresponding to the first time comprises: the video media content before the first time and having the first duration, the video media content with the first time as an end point and having the first duration, the video media content after the first time and having the first duration, the video media content with the first time as a start point and having the first duration, or the video media content with the first time as a start point and a time point after the first time as an end point.
5. The interaction method according to any one of claims 1-3, wherein the second video media content comprises silent video media content.
6. The interaction method according to any one of claims 1-3, wherein after obtaining the second video media content corresponding to the first time, the method further comprises:
storing the second video media content in a preset storage position; and/or
playing the second video media content.
7. The interaction method according to any one of claims 1-3, wherein after obtaining the second video media content corresponding to the first time, the method further comprises:
generating third video media content according to the play screenshot and the second video media content.
8. The interaction method according to claim 7, wherein the generating of the third video media content from the play screenshot and the second video media content comprises:
applying a first effect to the play screenshot to generate the third video media content, and/or applying a second effect to the second video media content to generate the third video media content.
9. The interaction method of claim 8, wherein after generating the third video media content, the method further comprises:
storing the third video media content in a preset storage position; and/or
playing the third video media content.
10. An interaction apparatus, comprising a playing module and a processing module, wherein:
the playing module is used for playing the first video media content;
the processing module is used for responding to the detection of the picture capturing operation at the first moment and acquiring a playing screenshot corresponding to the first moment;
the processing module is further configured to obtain a second video media content corresponding to the first time, where the second video media content includes a portion of the first video media content.
11. An electronic device, comprising:
a memory for storing computer readable instructions; and
a processor configured to execute the computer-readable instructions to cause the electronic device to implement the method according to any one of claims 1-9.
12. A computer-readable storage medium storing computer-readable instructions that, when executed by a computer, cause the computer to implement the method of any one of claims 1-9.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110412699.2A CN113139090A (en) | 2021-04-16 | 2021-04-16 | Interaction method, interaction device, electronic equipment and computer-readable storage medium |
US18/555,427 US20240193206A1 (en) | 2021-04-16 | 2022-03-22 | Interaction method and apparatus, electronic device, and computer-readable storage medium |
PCT/CN2022/082134 WO2022218109A1 (en) | 2021-04-16 | 2022-03-22 | Interaction method and apparatus, electronic device, and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110412699.2A CN113139090A (en) | 2021-04-16 | 2021-04-16 | Interaction method, interaction device, electronic equipment and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113139090A true CN113139090A (en) | 2021-07-20 |
Family
ID=76812899
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110412699.2A Pending CN113139090A (en) | 2021-04-16 | 2021-04-16 | Interaction method, interaction device, electronic equipment and computer-readable storage medium |
Country Status (3)
Country | Link |
---|---|
US (1) | US20240193206A1 (en) |
CN (1) | CN113139090A (en) |
WO (1) | WO2022218109A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022218109A1 (en) * | 2021-04-16 | 2022-10-20 | 北京字节跳动网络技术有限公司 | Interaction method and apparatus, electronic device, and computer readable storage medium |
CN116755597A (en) * | 2023-05-05 | 2023-09-15 | 维沃移动通信有限公司 | Screenshot file control method and device, electronic equipment and storage medium |
WO2024193540A1 (en) * | 2023-03-21 | 2024-09-26 | 北京字跳网络技术有限公司 | Interaction method and apparatus, device and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160378269A1 (en) * | 2015-06-24 | 2016-12-29 | Spotify Ab | Method and an electronic device for performing playback of streamed media including related media content |
CN106303723A (en) * | 2016-08-11 | 2017-01-04 | 网易(杭州)网络有限公司 | Method for processing video frequency and device |
US20170353705A1 (en) * | 2016-06-06 | 2017-12-07 | Samsung Electronics Co., Ltd. | Method for processing signals with operating state-dependent handling of multimedia attributes and electronic device thereof |
CN108924610A (en) * | 2018-07-20 | 2018-11-30 | 网易(杭州)网络有限公司 | Multimedia file processing method, device, medium and calculating equipment |
CN111836112A (en) * | 2020-06-28 | 2020-10-27 | 腾讯科技(深圳)有限公司 | Multimedia file output method, device, medium and electronic equipment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113139090A (en) * | 2021-04-16 | 2021-07-20 | 北京字节跳动网络技术有限公司 | Interaction method, interaction device, electronic equipment and computer-readable storage medium |
- 2021-04-16: CN CN202110412699.2A, patent CN113139090A/en, active, Pending
- 2022-03-22: US US18/555,427, patent US20240193206A1/en, active, Pending
- 2022-03-22: WO PCT/CN2022/082134, patent WO2022218109A1/en, active, Application Filing
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022218109A1 (en) * | 2021-04-16 | 2022-10-20 | 北京字节跳动网络技术有限公司 | Interaction method and apparatus, electronic device, and computer readable storage medium |
WO2024193540A1 (en) * | 2023-03-21 | 2024-09-26 | 北京字跳网络技术有限公司 | Interaction method and apparatus, device and storage medium |
CN116755597A (en) * | 2023-05-05 | 2023-09-15 | 维沃移动通信有限公司 | Screenshot file control method and device, electronic equipment and storage medium |
WO2024230571A1 (en) * | 2023-05-05 | 2024-11-14 | 维沃移动通信有限公司 | Screenshot file control method and apparatus, and electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
US20240193206A1 (en) | 2024-06-13 |
WO2022218109A1 (en) | 2022-10-20 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||