CN116668760A - Method, apparatus, device and storage medium for interface interaction and editing video - Google Patents


Info

Publication number
CN116668760A
CN116668760A (Application No. CN202310771942.9A)
Authority
CN
China
Prior art keywords
commodity
merchandise
target
video
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310771942.9A
Other languages
Chinese (zh)
Inventor
宋洁
Current Assignee
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd
Priority to CN202310771942.9A
Publication of CN116668760A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4316 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47205 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/47815 Electronic shopping

Abstract

Embodiments of the present disclosure provide methods, apparatuses, devices, and storage media for interface interaction and for editing video. The interface interaction method comprises: presenting a play interface of video content; in the play interface, displaying, in association with a merchandise object in the video content, a merchandise tag associated with that merchandise object; and presenting merchandise information associated with the merchandise object in response to a selection of the merchandise tag. A merchandise tag can thus be displayed in direct association with a merchandise object in a video playing scene, without requiring active operation by the viewer; because merchandise sharing is performed via tags, the impact of merchandise information on the video content is reduced. In this way, the viewing experience of the viewer may be improved.

Description

Method, apparatus, device and storage medium for interface interaction and editing video
Technical Field
Example embodiments of the present disclosure relate generally to the field of computers and, more particularly, to methods, apparatuses, devices, and computer-readable storage media for interface interaction and for editing video.
Background
With the development of computer technology, multimedia content has become one of the main ways in which people obtain information. Multimedia content may include, for example, pictures, video, text, audio, and so forth. Some creators share specific goods by producing multimedia content. How a creator can simply and conveniently insert recommended or related merchandise information into multimedia content (especially video), without affecting the viewer's viewing experience, is therefore a current focus of attention.
Disclosure of Invention
In a first aspect of the present disclosure, a method for interface interaction is provided. The method comprises: presenting a play interface of video content; in the play interface, displaying, in association with a merchandise object in the video content, a merchandise tag associated with the merchandise object; and presenting merchandise information associated with the merchandise object in response to a selection of the merchandise tag.
In a second aspect of the present disclosure, a method for editing video is provided. The method comprises: presenting an editing interface for a target video, the editing interface comprising a merchandise marking control; determining a target merchandise to be associated with the target video based on a selection of the merchandise marking control; and adding a merchandise tag associated with the target merchandise to the target video based on a tag adding operation, the tag adding operation indicating at least a presentation position of the merchandise tag in the target video.
In a third aspect of the present disclosure, an apparatus for interface interaction is provided. The apparatus comprises: a play interface presentation module configured to present a play interface of video content; a merchandise tag display module configured to display, in the play interface and in association with a merchandise object in the video content, a merchandise tag associated with the merchandise object; and a merchandise information presentation module configured to present merchandise information associated with the merchandise object in response to a selection of the merchandise tag.
In a fourth aspect of the present disclosure, an apparatus for editing video is provided. The apparatus comprises: an editing interface presentation module configured to present an editing interface for a target video, the editing interface comprising a merchandise marking control; a target merchandise determination module configured to determine a target merchandise to be associated with the target video based on a selection of the merchandise marking control; and a merchandise tag adding module configured to add a merchandise tag associated with the target merchandise to the target video based on a tag adding operation, the tag adding operation indicating at least a presentation position of the merchandise tag in the target video.
In a fifth aspect of the present disclosure, an electronic device is provided. The device comprises at least one processing unit, and at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit. The instructions, when executed by the at least one processing unit, cause the electronic device to perform the method of the first or second aspect.
In a sixth aspect of the present disclosure, a computer-readable storage medium is provided. The medium has stored thereon a computer program which, when executed by a processor, implements the method of the first or second aspect.
It should be understood that what is described in this section is not intended to limit the key features or essential features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which like or similar reference numerals denote like or similar elements:
FIG. 1 illustrates a schematic diagram of an example environment in which embodiments of the present disclosure may be implemented;
FIG. 2A illustrates a schematic diagram of an example of a play interface according to some embodiments of the present disclosure;
FIG. 2B illustrates a schematic diagram of an example of a merchandise interface according to some embodiments of the present disclosure;
FIGS. 3A-3C illustrate schematic diagrams of examples of editing interfaces according to some embodiments of the present disclosure;
FIG. 4 illustrates a flow chart of a process for interface interaction according to some embodiments of the present disclosure;
FIG. 5 illustrates a flow chart of a process for editing video in accordance with some embodiments of the present disclosure;
FIG. 6 illustrates a schematic block diagram of an apparatus for interface interaction in accordance with certain embodiments of the present disclosure;
FIG. 7 illustrates a schematic block diagram of an apparatus for editing video according to some embodiments of the present disclosure; and
FIG. 8 illustrates a block diagram of an electronic device capable of implementing various embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure have been illustrated in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather, these embodiments are provided so that this disclosure will be more thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that any section/subsection headings provided herein are not limiting. Various embodiments are described throughout this document, and any type of embodiment may be included under any section/subsection. Furthermore, the embodiments described in any section/subsection may be combined in any manner with any other embodiment described in the same section/subsection and/or in a different section/subsection.
In describing embodiments of the present disclosure, the term "comprising" and its variants should be understood as open-ended, i.e., "including but not limited to." The term "based on" should be understood as "based at least in part on." The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment." The term "some embodiments" should be understood as "at least some embodiments." The terms "first," "second," and the like may refer to different objects or to the same object. Other explicit and implicit definitions may also be given below.
Embodiments of the present disclosure may involve user data and the acquisition and/or use of data, all of which complies with applicable laws and regulations. In embodiments of the present disclosure, all data collection, acquisition, processing, forwarding, use, and the like is performed with the knowledge and confirmation of the user. Accordingly, in implementing the embodiments of the present disclosure, the user should be informed, in an appropriate manner and in accordance with the relevant laws and regulations, of the types of data or information that may be involved, the scope of use, and the usage scenarios, and the user's authorization should be obtained. The particular manner of notification and/or authorization may vary depending on the actual situation and application scenario, and the scope of the present disclosure is not limited in this respect.
In the present description and embodiments, any processing of personal information is performed on the premise of a valid legal basis (for example, consent of the personal information subject, or necessity for the performance of a contract) and only within the prescribed or agreed scope. If a user refuses the processing of personal information other than that necessary for a basic function, the user's use of that basic function is not affected.
As mentioned briefly above, multimedia content has become an important medium for merchandise sharing, and with the development of computer technology it is now most common to share merchandise through video content in particular. In some scenarios, the creator of a video may share merchandise by adding a merchandise link or anchor point to the video, but this manner of sharing gives the merchandise a strong marketing feel, may affect the creator's persona and the flow of the video, and may also affect the viewer's viewing experience. In other scenarios, the creator of the video may introduce the merchandise by adding subtitles, but this manner of adding subtitles results in a higher authoring cost for the video, and the merchandise information conveyed in this way may become invalid over time.
To this end, one existing solution provides merchandise tags linking content and merchandise after a video is paused. In this case, when the video is paused, merchandise information contained in the paused frame may be automatically recognized and provided in the form of a tag; in response to receiving a trigger operation on the tag, the interface jumps to a merchandise interface displaying the merchandise information. Although this approach can reduce the creator's authoring cost, it depends on the viewer actively pausing the video, which imposes a certain operation threshold, affects the viewer's video viewing experience, and reduces the exposure of the video. In addition, such a tag can only display a merchandise category (e.g., short sleeves, jeans), cannot display richer merchandise information, and the resulting lack of information makes the merchandise unattractive and lowers the efficiency of the user's purchase decision.
To this end, embodiments of the present disclosure propose an improved solution for interface interaction and for editing video. According to the solution, during interface interaction, a play interface of video content may be presented, a merchandise tag associated with a merchandise object in the video content may be displayed in the play interface in association with that object, and merchandise information associated with the object may be presented in response to a selection of the merchandise tag. A merchandise tag can thus be displayed in direct association with a merchandise object in a video playing scene, without requiring active operation by the viewer; because merchandise sharing is performed via tags, the impact of merchandise information on the video content is reduced. In this way, the viewing experience of the viewer may be improved.
When editing video, an editing interface for a target video including a merchandise marking control may be presented; a target merchandise to be associated with the target video may be determined based on a selection of the merchandise marking control; and a merchandise tag associated with the target merchandise may be added to the target video based on a tag adding operation that indicates at least a presentation position of the merchandise tag in the target video. In this way, a creator is supported in tagging video content while editing the video, so that recommended or related merchandise information can be inserted into the video simply and conveniently, improving both merchandise sharing efficiency and the creator's video authoring experience.
Various example implementations of the scheme are described in further detail below in conjunction with the accompanying drawings.
Example Environment
FIG. 1 illustrates a schematic diagram of an example environment 100 in which embodiments of the present disclosure may be implemented. As shown in FIG. 1, an example environment 100 may include a terminal device 110.
In this example environment 100, an application 120 is installed in a terminal device 110. The user 140 may interact with the application 120 via the terminal device 110 and/or its attached device. The application 120 may be a social application, a content sharing application, a shopping application, etc., or any other suitable application.
In the environment 100 of fig. 1, if the application 120 is in an active state, the application 120 may provide services such as creation or playback of multimedia content and/or merchandise search or purchase to the user 140.
In addition, terminal device 110 may present interface 150 of application 120. The content presented by the interface 150 also changes depending on the particular service provided, the user's interaction/preset actions, etc.
In some embodiments, terminal device 110 communicates with server 130 to enable provisioning of services for the application 120. The terminal device 110 may be any type of mobile, fixed, or portable terminal, including a mobile handset, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, media computer, multimedia tablet, Personal Communication System (PCS) device, personal navigation device, Personal Digital Assistant (PDA), audio/video player, digital camera/camcorder, positioning device, television receiver, radio broadcast receiver, electronic book device, or game device, including accessories and peripherals of these devices, or any combination thereof. In some embodiments, terminal device 110 is also capable of supporting any type of user interface (such as "wearable" circuitry).
The server 130 may be an independent physical server, a server cluster or distributed system formed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content distribution networks, big data, and artificial intelligence platforms. Server 130 may include, for example, a computing system/server such as a mainframe, an edge computing node, or a computing device in a cloud environment. Server 130 may provide background services supporting content presentation for the application 120 in the terminal device 110.
A communication connection may be established between the server 130 and the terminal device 110. The communication connection may be established by wired means or wireless means. The communication connection may include, but is not limited to, a bluetooth connection, a mobile network connection, a universal serial bus connection, a wireless fidelity connection, etc., as embodiments of the disclosure are not limited in this respect. In an embodiment of the present disclosure, the server 130 and the terminal device 110 may implement signaling interaction through a communication connection therebetween.
It should be understood that the structure and function of the various elements in environment 100 are described for illustrative purposes only and are not meant to suggest any limitation as to the scope of the disclosure.
Example interfaces for interface interactions
FIG. 2A illustrates a schematic diagram of an example of a play interface 200A, according to some embodiments of the present disclosure. As shown in FIG. 2A, while the application 120 is in a running state, the terminal device 110 may present a play interface 200A corresponding to the video content 210. In some embodiments, responsive to the application 120 being in a running state, the terminal device 110 may present the play interface 200A of the video content 210 by default. In some embodiments, terminal device 110 may also provide the play interface 200A shown in FIG. 2A upon receiving an access request for video content 210 while the application 120 is in a running state. It should be appreciated that the video content 210 and the play interface 200A may be of any suitable style, and the present disclosure is not intended to be limited to a particular style of video content and play interface.
The terminal device 110 may display, in the play interface 200A and in association with a merchandise object in the video content 210, a merchandise tag associated with that object. As shown in FIG. 2A, the terminal device 110 may display, in association with a shirt 215 in the play interface 200A, a tag 220 associated with the shirt 215. It should be noted that although FIG. 2A shows the shirt 215 and its associated tag 220, where the video content 210 also contains other merchandise objects (e.g., pants), the terminal device 110 may likewise determine those objects and display their associated merchandise tags in association with them; the present disclosure does not limit the style, number, or kind of merchandise objects and merchandise tags in the play interface 200A.
In some embodiments, the merchandise tag associated with a merchandise object may indicate merchandise description information associated with that object. For example, a merchandise object in a video frame may be detected using an appropriate object detection technique and its corresponding merchandise description information determined. The merchandise description information may include, for example but without limitation, the merchandise type, price, store, style, whether shipping is free, whether freight insurance is provided, suitable seasons, and the like. As shown in FIG. 2A, the merchandise description information indicated by the tag 220 includes "casual shirt, spring/summer, free shipping, short sleeve, discount price XX."
Regarding the manner in which merchandise tags are generated: in some embodiments, a merchandise tag may be generated based on recognition of a merchandise object in the video content. Specifically, the terminal device 110 may analyze the video content 210 to determine whether it contains a merchandise object, for example by means of a pre-trained recognition model: the terminal device 110 provides the video, or image frames of the video, to the recognition model and obtains the model's recognition result. When a merchandise object is recognized, the terminal device 110 may further generate a merchandise tag for the object and display the tag in association with the object in the play interface 200A. The tag may be generated, for example, by a pre-trained tag generation model: the terminal device 110 inputs the merchandise object into the tag generation model and obtains, as output, the merchandise tag matching that object.
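The recognize-then-tag pipeline above can be illustrated with a minimal sketch. The patent gives no implementation; the `Detection` record, the confidence threshold, and the placeholder description text below are all hypothetical, standing in for the recognition model's output and the tag generation model respectively.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    category: str      # recognized merchandise category, e.g. "shirt"
    confidence: float  # recognition-model score in [0, 1]
    bbox: tuple        # (x, y, w, h) of the object in the frame

def generate_tags(detections, min_confidence=0.8):
    """Keep detections the model is confident about and build a tag
    record for each; a real system would call a tag-generation model
    here instead of formatting a placeholder description."""
    tags = []
    for d in detections:
        if d.confidence >= min_confidence:
            tags.append({"category": d.category,
                         "anchor": d.bbox,
                         "description": f"{d.category} (tap for details)"})
    return tags
```

Low-confidence detections are dropped so that spurious tags never reach the play interface.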
In some embodiments, the merchandise tag may also be generated based on an editing operation by the publisher of the video content 210. Specifically, the video content 210 may be video content captured in real time by the publisher (also referred to as the creator) through an image capture device (e.g., a camera), or video content uploaded by the publisher (e.g., content previously stored locally, or content published by other publishers and forwarded by this publisher). Before the video is published, the terminal device 110 may determine a merchandise tag for a merchandise object in response to receiving the publisher's editing operation on that object. The terminal device 110 then adds the merchandise tag to the video content 210 so that, when the video content 210 is played, the tag is displayed in association with the merchandise object in the play interface 200A.
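The publisher-side flow of attaching a tag to the video before publication can be sketched as appending a record to the video's metadata, which the player later reads when rendering the play interface. The record fields below (relative position, time range) are assumptions for illustration, not the patent's format.

```python
def add_merchandise_tag(video_meta, merchandise_id, description,
                        position, start_s=0.0, end_s=None):
    """Append a tag record to the video's metadata dictionary. On
    playback the player would render the tag at `position` (relative
    interface coordinates) while the playhead is in [start_s, end_s]."""
    tag = {"merchandise_id": merchandise_id,
           "description": description,
           "position": position,
           "start_s": start_s,
           "end_s": end_s}
    video_meta.setdefault("tags", []).append(tag)
    return video_meta
```

Storing tags as metadata rather than burning them into the frames keeps the video itself untouched, which is what lets the tag be repositioned or removed later without re-encoding.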
Regarding the display position of the merchandise tag in the play interface 200A: in some embodiments, this position may be fixed, i.e., the tag is displayed at a certain position of the play interface 200A, which may be preset by the user or determined by the terminal device 110. In other embodiments, the display position of the merchandise tag may vary with the position of the merchandise object in the video content 210: if the object moves (e.g., from position A to position B), the tag follows the object from position A to position B. In that case, the tag may be anchored at a fixed point on the merchandise object, or its anchor point on the object may itself change over time. For example, taking the merchandise object to be a shirt, the tag may initially be displayed at the sleeve of the shirt and, as the position of the shirt in the video content 210 changes, the tag may move to be displayed at the collar.
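The follow-the-object behavior amounts to recomputing the tag's display position from the object's bounding box on every frame. A minimal sketch, with the anchor offset (just above the top-left corner) chosen arbitrarily for illustration:

```python
def tag_position(object_bbox, offset=(0, -12)):
    """Derive the tag anchor from the object's (x, y, w, h) bounding
    box; here the tag sits `offset` pixels from the top-left corner."""
    x, y, w, h = object_bbox
    return (x + offset[0], y + offset[1])

def tag_track(bboxes_per_frame, offset=(0, -12)):
    """Recompute the tag position for every frame, so the tag follows
    the merchandise object as it moves through the video."""
    return [tag_position(b, offset) for b in bboxes_per_frame]
```

A time-varying anchor (sleeve, then collar) would simply vary `offset` per frame instead of keeping it constant.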
In some embodiments, the play interface 200A also includes at least one operation control, including but not limited to a user identification control (e.g., user avatar, user name), a like control, a favorites control, a share control, a comment control, and the like. To avoid the occlusion caused by overlap between these controls and the merchandise tag, in some embodiments the tag may be displayed at a position that does not overlap a target control, where the target control may be at least a portion of the at least one operation control. Illustratively, as shown in FIG. 2A, the tag 220 does not overlap the operation controls displayed on the right side of the play interface 200A. In some embodiments, the display position of the merchandise tag also does not overlap a text display area in the video content 210; for example, the tag 220 shown in FIG. 2A does not overlap the video description information displayed at the lower left of the play interface 200A.
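One simple way to satisfy this non-overlap constraint is to treat the operation controls and text areas as reserved rectangles and nudge the tag until it clears all of them. The sliding search below is an assumed strategy for illustration, not one the patent prescribes.

```python
def rects_overlap(a, b):
    """Axis-aligned overlap test for (x, y, w, h) rectangles."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_tag(preferred, tag_size, reserved_rects, step=10, max_tries=50):
    """Starting from the preferred anchor, slide the tag left until it
    no longer overlaps any reserved region (controls, text areas)."""
    x, y = preferred
    for _ in range(max_tries):
        rect = (x, y, *tag_size)
        if not any(rects_overlap(rect, r) for r in reserved_rects):
            return (x, y)
        x -= step
    return preferred  # give up and accept the preferred position
```

Sliding only along one axis keeps the tag visually near its object; a production layout engine would likely try several candidate directions.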
Further, the terminal device 110 may present merchandise information associated with the target object in response to a selection of the merchandise tag, where the target object is the merchandise object associated with that tag. As shown in FIG. 2A, if the terminal device receives a selection of the tag 220, merchandise information associated with the shirt 215 may be presented. In some embodiments, terminal device 110 may determine that a selection of a merchandise tag has been received in response to detecting a trigger operation on the tag, including but not limited to a click operation, a long-press operation, a double-click operation, a slide operation, and the like.
Regarding the manner of presentation of the merchandise information: in some embodiments, the terminal device 110 may present the merchandise information by presenting a merchandise interface associated with the target object. FIG. 2B illustrates a schematic diagram of an example of a merchandise interface 200B according to some embodiments of the present disclosure. As shown in FIGS. 2A and 2B, the terminal device 110 may present the merchandise interface 200B shown in FIG. 2B in response to receiving a selection of the tag 220 in the play interface 200A.
In some embodiments, the merchandise information includes merchandise search results that match the merchandise object. Regarding how a match is determined: in some embodiments, the matching relationship between merchandise and merchandise objects may be preset by the publisher. For example, if the publisher presets that merchandise A matches merchandise object A, then in response to receiving a selection of the merchandise tag associated with merchandise object A, merchandise A is presented in the merchandise interface as the merchandise search result matching that object.
In some embodiments, the terminal device 110 may also determine which merchandise matches the merchandise object by computing a degree of matching between candidate merchandise and the object, and treating merchandise whose degree of matching exceeds a threshold as the merchandise search results. Note that in this case the search results presented by the terminal device 110 may include a plurality of merchandise items; the terminal device 110 may preferentially present the merchandise with the highest degree of matching.
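The threshold-and-rank logic of this paragraph can be sketched as follows; the threshold value and the `(id, score)` pair representation are assumptions for illustration.

```python
def matching_results(candidates, threshold=0.7):
    """candidates: (merchandise_id, match_degree) pairs. Keep those at
    or above the threshold and order them best-first, so the interface
    can present the highest-scoring merchandise preferentially."""
    above = [c for c in candidates if c[1] >= threshold]
    return sorted(above, key=lambda c: c[1], reverse=True)
```

The first element of the returned list is the item a merchandise portal would feature most prominently.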
As shown in FIG. 2B, the terminal device 110 may present the merchandise search results by presenting a merchandise portal 230 in the merchandise interface 200B; the portal 230 corresponds to the shirt 215 of FIG. 2A. The merchandise portal 230 may lead to a corresponding merchandise detail interface for viewing or purchasing the corresponding merchandise. The portal 230 may preferentially present the merchandise 231 with the highest degree of matching with the merchandise object. In some embodiments, the user may also access other potentially matching merchandise search results by selecting the "View all merchandise" option 235.
In some embodiments, the merchandise information also includes media content associated with the merchandise object, such as video, pictures, audio, or text content about the object. As shown in FIG. 2B, in some embodiments the terminal device 110 may present such media content by presenting a merchandise card in the merchandise portal 230.
In summary, embodiments of the present disclosure can display a merchandise tag directly in association with a merchandise object in a video playback scene, so that no active operation by the viewer is required; and because the merchandise is shared in tag form, the intrusion of merchandise information into the video content is reduced. In this way, the viewing experience of the viewer may be improved.
Example interface for editing video
Fig. 3A-3C illustrate schematic diagrams of example editing interfaces according to some embodiments of the present disclosure. As shown in fig. 3A, while the application 120 is running, the terminal device 110 may present an editing interface 300A for editing a video. In some embodiments, the terminal device 110 provides the editing interface 300A upon receiving an editing request for a target video. The editing request may be, for example, an authoring request for an unpublished target video or an editing request for a published target video. It should be appreciated that editing interface 300A may adopt any suitable style, and the present disclosure is not limited to a particular style of editing interface.
As shown in fig. 3A, editing interface 300A includes a merchandise tagging control 301. It is to be appreciated that the editing interface 300A presented by the terminal device 110 can also include a number of other operational controls for editing video, such as a cropping control, a background-setting control, and so forth.
Terminal device 110 can determine a target commodity to be associated with the target video based on selection of merchandise tagging control 301. In particular, in response to selection of merchandise tagging control 301, terminal device 110 may present a set of candidate merchandise associated with the target video. The set of candidate merchandise is determined by terminal device 110 based on the target video. For example, a suitable object detection technique may be utilized to detect merchandise objects in frames of the target video, and the set of commodities whose degree of matching with those objects is above a predetermined threshold is then taken as the set of candidate merchandise associated with the target video. As shown in fig. 3A and 3B, terminal device 110 can determine that a selection of merchandise tagging control 301 was received in response to detecting a trigger operation on merchandise tagging control 301, and thereby present merchandise selection portal 310. A set of candidate merchandise 315 (including candidate merchandise 315-1, 315-2, and 315-3) associated with the target video is presented in the merchandise selection portal 310. In some embodiments, terminal device 110 may also present a corresponding merchandise category, such as "shoes/bags/apparel", with each candidate merchandise 315.
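A minimal sketch of how such a candidate set might be assembled, assuming a hypothetical detector interface `detect_objects(frame)` and scoring function `match_degree(obj, good)` (neither is specified in the disclosure, and a real implementation would use an actual object-detection model):

```python
def candidate_goods_for_video(frames, detect_objects, match_degree,
                              catalog, threshold=0.6):
    """Collect the set of candidate goods associated with a target video.

    For each video frame, detect merchandise objects, then keep every
    catalog commodity whose degree of matching with a detected object
    exceeds the predetermined threshold.
    """
    candidates = set()
    for frame in frames:
        for obj in detect_objects(frame):
            for good in catalog:
                if match_degree(obj, good) > threshold:
                    candidates.add(good)
    return candidates


# Hypothetical stand-ins for a detector and a scorer.
frames = [1, 2]
detect = lambda f: ["shirt"] if f == 1 else []
degree = lambda obj, g: 0.9 if g == "shirt-sku" else 0.1
print(candidate_goods_for_video(frames, detect, degree,
                                ["shirt-sku", "shoe-sku"]))  # → {'shirt-sku'}
```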
In some embodiments, the terminal device 110 may determine a target merchandise to be associated with the target video in response to receiving a selection of a target merchandise from the set of candidate merchandise 315. For example, if the terminal device 110 receives a selection for the candidate good 315-1, it determines that the candidate good 315-1 is the target good.
As shown in fig. 3B, in some embodiments a search portal 311 is also presented in the merchandise selection portal 310. Terminal device 110 may obtain a merchandise search term in response to user input received at search portal 311, and may in turn determine a target commodity to be associated with the target video based on the merchandise search term. For example, if the search term input by the user is "casual shirt", the terminal device 110 may identify, from among a plurality of commodities, a commodity whose associated search terms match the input, and determine it as the target commodity.
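The search-term matching step might be sketched as below; the catalog layout and the case-insensitive containment test are illustrative assumptions, since the disclosure does not prescribe a matching algorithm:

```python
def find_target_goods(search_term, catalog):
    """Return the commodities whose associated search terms match the input.

    `catalog` maps each commodity identifier to its associated search
    terms; matching here is simple case-insensitive containment in
    either direction, purely for illustration.
    """
    term = search_term.strip().lower()
    return [good for good, keywords in catalog.items()
            if any(term in kw.lower() or kw.lower() in term
                   for kw in keywords)]


# Hypothetical catalog keyed by commodity identifier.
catalog = {"sku-1": ["casual shirt", "shirt"], "sku-2": ["sneaker"]}
print(find_target_goods("Casual Shirt", catalog))  # → ['sku-1']
```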
The terminal device 110 may in turn add the merchandise tag associated with the target commodity to the target video based on a tag addition operation. The tag addition operation may include, for example, a series of operations on the merchandise tag, such as a position setting operation, a time setting operation, and the like. As shown in fig. 3B and 3C, in response to receiving the selection of candidate merchandise 315-1 in fig. 3B, candidate merchandise 315-1 is determined to be the target commodity, and it may be determined that a tag addition operation was received. The terminal device 110 then presents an editing interface 300C of the target video 320 as shown in fig. 3C, in which a merchandise tag 325 corresponding to the target commodity may be presented in association with the target video 320.
The tag addition operation may at least indicate a presentation location of the merchandise tag 325 in the target video 320. In some embodiments, the presentation location of the merchandise tag 325 in the target video 320 may be a default. In some embodiments, terminal device 110 may also alter the presentation location of merchandise tag 325 in response to receiving a position setting operation in the tag addition operation. For example, the terminal device 110 may determine that a position setting operation for the merchandise tag 325 is received in response to receiving a drag operation on the merchandise tag 325, and then alter the presentation location of the merchandise tag 325 in the target video 320 based on the position indicated by the drag operation.
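One plausible detail of such a position setting operation, sketched here as an assumption (the disclosure does not state how out-of-bounds drags are handled), is clamping the dragged tag so it stays inside the video frame:

```python
def set_tag_position(drag_point, video_size, tag_size):
    """Clamp a dragged tag position so the whole tag stays inside the video.

    All coordinates are pixels with the origin at the top-left corner;
    `drag_point` is the tag's requested top-left corner after the drag.
    """
    x, y = drag_point
    vw, vh = video_size
    tw, th = tag_size
    # Keep the tag's top-left corner within [0, video - tag] on each axis.
    return (min(max(x, 0), vw - tw), min(max(y, 0), vh - th))


# A drag past the left edge is pulled back to x = 0.
print(set_tag_position((-10, 500), (1080, 1920), (200, 80)))  # → (0, 500)
```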
The tag addition operation may also indicate a corresponding period of time for which the merchandise tag 325 is to be presented. In some embodiments, the corresponding time period for which the merchandise tag 325 is to be presented in the target video 320 may be a default, e.g., a default presentation of 10 seconds. In some embodiments, terminal device 110 may also alter the corresponding time period for which merchandise tag 325 is to be presented in response to receiving a time setting operation in the tag addition operation. As shown in fig. 3C, a tag setting area 330 is also presented in the editing interface 300C, and individual video frames of the target video 320 may be presented in time sequence in the tag setting area 330. The tag setting area 330 further presents a duration selection control 332, and the terminal device 110 may adjust the plurality of consecutive video frames covered by the duration selection control 332 in response to a sliding operation on either edge of the duration selection control 332, or on the control as a whole. Terminal device 110 may in turn determine to present merchandise tag 325 in the selected plurality of consecutive video frames. In some embodiments, terminal device 110 may also present the presentation duration corresponding to the selected consecutive video frames in tag setting area 330, e.g., "selected tag duration 5.0 seconds", to prompt the user as to the currently set time period during which the tag is to be presented.
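The mapping from a selected run of consecutive frames to the displayed duration can be sketched as follows; the frame rate and the inclusive end-frame convention are assumptions made for illustration:

```python
def selected_tag_period(start_frame, end_frame, fps):
    """Convert an inclusive range of consecutive video frames, selected
    with the duration control, into (start time, duration) in seconds."""
    start = start_frame / fps
    end = (end_frame + 1) / fps  # the end frame is shown in full
    return start, round(end - start, 1)


# 150 frames at 30 fps, starting one second in → "selected tag duration 5.0 seconds"
start, duration = selected_tag_period(30, 179, 30)
print(f"selected tag duration {duration} seconds")  # → selected tag duration 5.0 seconds
```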
The tag addition operation may also indicate the text content of the merchandise tag 325. The text content may include, for example, first text content determined based on the target commodity and/or second text content determined based on a second input by the user. The first text content here may be commodity description information that the terminal device 110 determines directly from the target commodity, indicating its association with the target commodity. For example, after determining the target commodity, the terminal device 110 may directly determine the commodity description information matching the target commodity and use it as the first text content of the merchandise tag 325.
The terminal device 110 may by default present the first text content as the text content of the merchandise tag 325 and, upon receiving user input, add second text content to it. In some embodiments, if a supplement operation by the user on the first text content is received, the second text content supplements the first text content. For example, if the first text content determined by the terminal device 110 is "casual shirt", then in response to receiving the user's operation of adding to the text content of the merchandise tag 325, the terminal device 110 may receive the second text content "XX store" input by the user and determine that the text content of the merchandise tag 325 is "casual shirt XX store".
In some embodiments, if a modification operation by the user on the first text content is received, the second text content input by the user serves as a modification of the first text content. For example, if the first text content determined by the terminal device 110 is "casual shirt", then in response to receiving the user's editing operation on the text content of the merchandise tag 325, the terminal device 110 may receive the second text content "short sleeved shirt" input by the user and determine that the text content of the merchandise tag 325 is "short sleeved shirt".
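The two ways of combining the first and second text content described above (supplement versus modification) can be sketched in one small helper; the function name and the `mode` parameter are assumptions, not taken from the disclosure:

```python
def tag_text(first_text, second_text=None, mode="supplement"):
    """Combine the auto-determined and user-entered tag text.

    A supplement appends the user's text after the commodity description
    information; a modification replaces the description outright.
    """
    if not second_text:
        return first_text
    if mode == "supplement":
        return f"{first_text} {second_text}"
    return second_text  # mode == "modify"


print(tag_text("casual shirt", "XX store"))                        # → casual shirt XX store
print(tag_text("casual shirt", "short sleeved shirt", mode="modify"))  # → short sleeved shirt
```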
Further, the terminal device 110 may determine that the tag addition operation is completed in response to receiving a selection operation of the add tag control 331 in the editing interface 300C shown in fig. 3C.
Therefore, embodiments of the present disclosure can display a merchandise tag directly in association with a merchandise object in a video playback scene, requiring no active operation by the viewer, and sharing merchandise in tag form reduces the intrusion of merchandise information into the video content. In addition, a creator can be supported in tagging video content while editing it. In this way, the viewing experience of the viewer can be improved, and the creator can simply and conveniently insert recommended or related merchandise information into a video.

In summary, embodiments of the present disclosure support a creator in tagging video content while editing a video, so that the creator can simply and conveniently insert recommended or related merchandise information into the video, improving merchandise sharing efficiency and the creator's video authoring experience.
Example procedure
Fig. 4 illustrates a flow chart of a process 400 for interface interaction according to some embodiments of the present disclosure. Process 400 may be implemented at terminal device 110. Process 400 is described below with reference to fig. 1.
As shown in fig. 4, at block 410, terminal device 110 presents a playback interface for video content.
At block 420, the terminal device 110 displays, in the playback interface, the merchandise tag associated with the merchandise object in the video content in association with that merchandise object.
At block 430, terminal device 110 presents merchandise information associated with the target object in response to the selection of the merchandise tag.
In some embodiments, the merchandise tags are generated based on an identification of merchandise objects in the video content.
In some embodiments, the merchandise tags are generated based on editing operations of the publisher of the video content.
In some embodiments, the merchandise tag indicates merchandise description information associated with the merchandise object.
In some embodiments, the display position of the merchandise tag varies based on the position of the merchandise object in the video content.
In some embodiments, the playback interface further includes a target control, wherein the display location of the merchandise tag does not overlap the target control.
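A possible sketch of this non-overlap constraint, offered purely as an illustration (the disclosure does not specify a collision-resolution strategy; nudging the tag downward is an assumption):

```python
def rects_overlap(a, b):
    """Axis-aligned overlap test; each rect is (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah


def place_tag(tag_rect, control_rect, shift=10):
    """Nudge the merchandise tag downward until it no longer overlaps
    the target control."""
    x, y, w, h = tag_rect
    while rects_overlap((x, y, w, h), control_rect):
        y += shift
    return (x, y, w, h)


# A tag that would cover the control is pushed below it.
tag = (0, 0, 100, 40)
control = (0, 20, 100, 40)
print(place_tag(tag, control))  # → (0, 60, 100, 40)
```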
In some embodiments, the merchandise information includes: commodity search results matched with commodity objects; and/or media content associated with merchandise objects.
In some embodiments, the video content includes uploaded video content.
Fig. 5 illustrates a flow chart of a process 500 for editing video according to some embodiments of the present disclosure. Process 500 may be implemented at terminal device 110. Process 500 is described below with reference to fig. 1.
As shown in fig. 5, at block 510, terminal device 110 presents an editing interface for the target video, the editing interface including a merchandise marker control.
At block 520, the terminal device 110 determines a target merchandise to be associated with the target video based on the selection of the merchandise marker control.
At block 530, terminal device 110 adds an item tag associated with the target item to the target video based on the tag addition operation, the tag addition operation indicating at least a presentation location of the item tag in the target video.
In some embodiments, determining a target commodity to be associated to a target video includes: presenting a set of candidate merchandise associated with the target video; and receiving a selection for a target commodity in the set of candidate commodities.
In some embodiments, a set of candidate merchandise is determined based on the target video.
In some embodiments, determining a target commodity to be associated to a target video includes: acquiring commodity search words based on first input of a user; and determining a target commodity to be associated to the target video based on the commodity search term.
In some embodiments, the tag-adding operation also indicates a corresponding period of time for which the merchandise tag is to be presented.
In some embodiments, the tag-adding operation further indicates text content of the merchandise tag, the text content including: a first text content determined based on the target commodity; and/or a second text content determined based on a second input by the user.
In some embodiments, the first text content indicates merchandise description information associated with the target merchandise.
Example apparatus and apparatus
Embodiments of the present disclosure also provide corresponding apparatus for implementing the above-described methods or processes. Fig. 6 illustrates a schematic block diagram of an apparatus 600 for interface interaction according to some embodiments of the present disclosure. The apparatus 600 may be implemented as or included in the terminal device 110. The various modules/components in apparatus 600 may be implemented in hardware, software, firmware, or any combination thereof.
As shown, the apparatus 600 includes a playback interface presentation module 610 configured to present a playback interface for video content. The apparatus 600 further includes a merchandise tag display module 620 configured to display, in the play interface, merchandise tags associated with the merchandise objects in the video content in association with the merchandise objects. The apparatus 600 further includes a merchandise information presentation module 630 configured to present merchandise information associated with the target object in response to a selection of the merchandise tag.
In some embodiments, the merchandise tags are generated based on an identification of merchandise objects in the video content.
In some embodiments, the merchandise tags are generated based on editing operations of the publisher of the video content.
In some embodiments, the merchandise tag indicates merchandise description information associated with the merchandise object.
In some embodiments, the display position of the merchandise tag varies based on the position of the merchandise object in the video content.
In some embodiments, the playback interface further includes a target control, wherein the display location of the merchandise tag does not overlap the target control.
In some embodiments, the merchandise information includes: commodity search results matched with commodity objects; and/or media content associated with merchandise objects.
In some embodiments, the video content includes uploaded video content.
Embodiments of the present disclosure also provide corresponding apparatus for implementing the above-described methods or processes. Fig. 7 illustrates a schematic block diagram of an apparatus 700 for editing video according to some embodiments of the present disclosure. The apparatus 700 may be implemented as or included in the terminal device 110. The various modules/components in apparatus 700 may be implemented in hardware, software, firmware, or any combination thereof.
As shown, the apparatus 700 includes an editing interface presentation module 710 configured to present an editing interface for a target video, the editing interface including a merchandise marker control. The apparatus 700 further includes a target merchandise determination module 720 configured to determine a target merchandise to be associated with the target video based on the selection of the merchandise marker control. The apparatus 700 further includes an item tag adding module 730 configured to add an item tag associated with the target item to the target video based on the tag adding operation, the tag adding operation indicating at least a presentation location of the item tag in the target video.
In some embodiments, the target commodity determination module 720 includes: a candidate merchandise presentation module configured to present a set of candidate merchandise associated with a target video; and a selection receiving module configured to receive a selection of a target commodity from a set of candidate commodities.
In some embodiments, a set of candidate merchandise is determined based on the target video.
In some embodiments, the target commodity determination module 720 includes: the search term acquisition module is configured to acquire commodity search terms based on first input of a user; and a determining module configured to determine a target commodity to be associated to the target video based on the commodity search term.
In some embodiments, the tag-adding operation also indicates a corresponding period of time for which the merchandise tag is to be presented.
In some embodiments, the tag-adding operation further indicates text content of the merchandise tag, the text content including: a first text content determined based on the target commodity; and/or a second text content determined based on a second input by the user.
In some embodiments, the first text content indicates merchandise description information associated with the target merchandise.
The modules and/or units included in apparatus 600 and apparatus 700 may be implemented in various ways, including software, hardware, firmware, or any combination thereof. In some embodiments, one or more modules and/or units may be implemented using software and/or firmware, such as machine-executable instructions stored on a storage medium. In addition to or in lieu of machine-executable instructions, some or all of the modules and/or units in apparatus 600 and apparatus 700 may be implemented at least in part by one or more hardware logic components. By way of example and not limitation, exemplary types of hardware logic components that can be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
Fig. 8 illustrates a block diagram of an electronic device 800 in which one or more embodiments of the disclosure may be implemented. It should be understood that the electronic device 800 illustrated in fig. 8 is merely exemplary and should not be construed as limiting the functionality and scope of the embodiments described herein. The electronic device 800 illustrated in fig. 8 may be used to implement the terminal device 110 and/or the server 130 of fig. 1.
As shown in fig. 8, the electronic device 800 is in the form of a general purpose computing device. Components of electronic device 800 may include, but are not limited to, one or more processors or processing units 810, memory 820, storage device 830, one or more communication units 840, one or more input devices 850, and one or more output devices 860. The processing unit 810 may be a real or virtual processor and is capable of performing various processes according to programs stored in the memory 820. In a multiprocessor system, multiple processing units execute computer-executable instructions in parallel to increase the parallel processing capabilities of electronic device 800.
Electronic device 800 typically includes multiple computer storage media. Such media may be any available media accessible by electronic device 800, including, but not limited to, volatile and non-volatile media, and removable and non-removable media. The memory 820 may be volatile memory (e.g., registers, cache, Random Access Memory (RAM)), non-volatile memory (e.g., Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory), or some combination thereof. Storage device 830 may be a removable or non-removable medium, and may include a machine-readable medium, such as a flash drive, a magnetic disk, or any other medium capable of storing information and/or data (e.g., training data for training) that can be accessed within electronic device 800.
The electronic device 800 may further include additional removable/non-removable, volatile/nonvolatile storage media. Although not shown in fig. 8, a magnetic disk drive for reading from or writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data medium interfaces. Memory 820 may include a computer program product 825 having one or more program modules configured to perform the various methods or acts of the various embodiments of the present disclosure.
The communication unit 840 enables communication with other electronic devices through a communication medium. Additionally, the functionality of the components of the electronic device 800 may be implemented in a single computing cluster or in multiple computing machines capable of communicating over a communications connection. Thus, the electronic device 800 may operate in a networked environment using logical connections to one or more other servers, a network Personal Computer (PC), or another network node.
The input device 850 may be one or more input devices such as a mouse, keyboard, trackball, etc. The output device 860 may be one or more output devices such as a display, speakers, printer, etc. The electronic device 800 may also communicate with one or more external devices (not shown), such as storage devices, display devices, etc., with one or more devices that enable a user to interact with the electronic device 800, or with any device (e.g., network card, modem, etc.) that enables the electronic device 800 to communicate with one or more other electronic devices, as desired, via the communication unit 840. Such communication may be performed via an input/output (I/O) interface (not shown).
According to an exemplary embodiment of the present disclosure, a computer-readable storage medium having stored thereon computer-executable instructions, wherein the computer-executable instructions are executed by a processor to implement the method described above, is provided. According to an exemplary embodiment of the present disclosure, there is also provided a computer program product tangibly stored on a non-transitory computer-readable medium and comprising computer-executable instructions that are executed by a processor to implement the method described above.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus, devices, and computer program products implemented according to the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of implementations of the present disclosure has been provided for illustrative purposes, is not exhaustive, and is not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various implementations described. The terminology used herein was chosen in order to best explain the principles of each implementation, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the various embodiments disclosed herein.

Claims (19)

1. A method for interface interaction, comprising:
presenting a play interface of video content;
displaying, in the play interface, a commodity label associated with a commodity object in the video content in association with the commodity object; and
presenting, in response to a selection of the commodity label, commodity information associated with the target object.
2. The method of claim 1, wherein the merchandise tag is generated based on an identification of the merchandise object in the video content.
3. The method of claim 1, wherein the merchandise tag is generated based on an editing operation of a publisher of the video content.
4. The method of claim 1, wherein the merchandise tag indicates merchandise description information associated with the merchandise object.
5. The method of claim 1, wherein a display position of the merchandise tag varies based on a position of the merchandise object in the video content.
6. The method of claim 5, wherein the playback interface further comprises a target control, wherein the display location of the merchandise tag does not overlap with the target control.
7. The method of claim 1, wherein the merchandise information comprises:
a commodity search result matched with the commodity object; and/or
media content associated with the merchandise object.
8. The method of claim 1, wherein the video content comprises uploaded video content.
9. A method for editing video, comprising:
presenting an editing interface for a target video, wherein the editing interface comprises a commodity marking control;
determining a target commodity to be associated to the target video based on the selection of the commodity marking control; and
adding a commodity label associated with the target commodity to the target video based on a label adding operation, wherein the label adding operation at least indicates a presentation position of the commodity label in the target video.
10. The method of claim 9, wherein determining a target commodity to be associated to the target video comprises:
presenting a set of candidate merchandise associated with the target video; and
a selection is received for the target commodity in the set of candidate commodities.
11. The method of claim 10, wherein the set of candidate merchandise is determined based on the target video.
12. The method of claim 9, wherein determining a target commodity to be associated to the target video comprises:
acquiring commodity search words based on first input of a user; and
determining a target commodity to be associated to the target video based on the commodity search term.
13. The method of claim 9, wherein the label adding operation further indicates a corresponding period of time for which the merchandise label is to be presented.
14. The method of claim 9, wherein the tag addition operation further indicates text content of the merchandise tag, the text content comprising:
a first text content determined based on the target commodity; and/or
a second text content determined based on a second input by the user.
15. The method of claim 14, wherein the first text content indicates merchandise description information associated with the target merchandise.
16. An apparatus for interface interaction, comprising:
a playback interface presentation module configured to present a playback interface of video content;
the commodity label display module is configured to display commodity labels associated with commodity objects in the video content in association with the commodity objects in the playing interface; and
a commodity information presentation module configured to present commodity information associated with the target object in response to a selection of the commodity label.
17. An apparatus for editing video, comprising:
an editing interface presentation module configured to present an editing interface for a target video, the editing interface comprising a merchandise marker control;
a target commodity determination module configured to determine a target commodity to be associated with the target video based on selection of the commodity marking control; and
a commodity label adding module configured to add a commodity label associated with the target commodity to the target video based on a label adding operation, the label adding operation indicating at least a presentation position of the commodity label in the target video.
18. An electronic device, comprising:
at least one processing unit; and
at least one memory coupled to the at least one processing unit and storing instructions which, when executed by the at least one processing unit, cause the electronic device to perform the method of any one of claims 1 to 8 or 9 to 15.
19. A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of claims 1 to 8 or 9 to 15.
CN202310771942.9A 2023-06-27 2023-06-27 Method, apparatus, device and storage medium for interface interaction and editing video Pending CN116668760A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310771942.9A CN116668760A (en) 2023-06-27 2023-06-27 Method, apparatus, device and storage medium for interface interaction and editing video


Publications (1)

Publication Number Publication Date
CN116668760A (en) 2023-08-29

Family

ID=87717121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310771942.9A Pending CN116668760A (en) 2023-06-27 2023-06-27 Method, apparatus, device and storage medium for interface interaction and editing video

Country Status (1)

Country Link
CN (1) CN116668760A (en)

Similar Documents

Publication Publication Date Title
US11372608B2 (en) Gallery of messages from individuals with a shared interest
KR102315474B1 (en) A computer-implemented method and non-transitory computer-readable storage medium for presentation of a content item synchronized with a media display
CN110378732B (en) Information display method, information association method, device, equipment and storage medium
CN112055225B (en) Live broadcast video interception, commodity information generation and object information generation methods and devices
US10678852B2 (en) Content reaction annotations
KR102033189B1 (en) Gesture-based tagging to view related content
US9123061B2 (en) System and method for personalized dynamic web content based on photographic data
CN108959558B (en) Information pushing method and device, computer equipment and storage medium
US9589296B1 (en) Managing information for items referenced in media content
CN110889076B (en) Comment information publishing method, device, client, server, system and medium
CN108573391B (en) Method, device and system for processing promotion content
US10091556B1 (en) Relating items to objects detected in media
WO2019183061A1 (en) Object identification in social media post
CN105611049A (en) Selectable styles for text messaging system publishers
US20170013309A1 (en) System and method for product placement
US20130100296A1 (en) Media content distribution
CN110781388A (en) Information recommendation method and device for image information
CN116668760A (en) Method, apparatus, device and storage medium for interface interaction and editing video
US11468675B1 (en) Techniques for identifying objects from video content
US11216867B2 (en) Arranging information describing items within a page maintained in an online system based on an interaction with a link to the page
US20240144677A1 (en) Mobile application camera activation and de-activation based on physical object location
US11586691B2 (en) Updating a profile of an online system user to include an affinity for an item based on an image of the item included in content received from the user and/or content with which the user interacted
US10810277B1 (en) System and method for determination of a digital destination based on a multi-part identifier
CN116756370A (en) Method, apparatus, device and storage medium for searching
CN116560531A (en) Commodity searching method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination