CN115334346A - Interface display method, video publishing method, video editing method and device


Info

Publication number
CN115334346A
Authority
CN
China
Prior art keywords
video
target
interface
target area
target object
Prior art date
Legal status
Pending
Application number
CN202210945497.9A
Other languages
Chinese (zh)
Inventor
夏磊
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202210945497.9A
Publication of CN115334346A

Classifications

    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/4312: Generation of visual interfaces for content selection or interaction, involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/47205: End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N 21/4728: End-user interface for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
    • H04N 21/858: Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot

Abstract

The disclosure relates to an interface display method, a video publishing method, a video editing method, and corresponding apparatuses, and belongs to the field of internet technology. The method includes: highlighting a target object of a video in the video's playing interface; and, in response to a touch operation on the target object, displaying a detail interface corresponding to the target object, the detail interface including object information corresponding to the target object. In this scheme, while the video is played, the target object in the video is highlighted to attract the user to touch it, and when a touch operation on the target object is detected, the detail interface corresponding to the target object is displayed, so that the user can learn more about the target object through the detail interface.

Description

Interface display method, video publishing method, video editing method and device
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to an interface display method, a video publishing method, a video editing method, and an apparatus.
Background
With the rapid development of computer technology and the mobile internet, network information of all kinds spreads widely, so that people can obtain information quickly and in a timely manner, which greatly facilitates their life and work.
In the related art, when an item is recommended to a user, a card for the item is generally displayed in the video playing interface, and item information corresponding to the item is displayed in response to a click operation on the card, so that the user can learn about the item.
Disclosure of Invention
The disclosure provides an interface display method, a video publishing method, a video editing method and a device, which can improve the display effect. The technical scheme of the disclosure is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided an interface display method, including:
highlighting a target object in a video in a playing interface of the video;
and responding to the touch operation of the target object, and displaying a detail interface corresponding to the target object, wherein the detail interface comprises object information corresponding to the target object.
In some embodiments, the displaying, in response to the touch operation on the target object, of the detail interface corresponding to the target object includes:
in response to a touch operation in the playing interface, displaying the detail interface corresponding to the target object in a case that the touch operation falls within the target area where the target object is located.
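As an illustration of the hit test this embodiment describes, the following minimal Kotlin sketch opens the detail interface only when a touch in the playing interface falls inside the target area; all names, such as `TargetArea` and `onPlayerTouch`, are assumptions for illustration, not identifiers from the patent:

```kotlin
// Hypothetical model of a target area as an axis-aligned rectangle in the frame.
data class TargetArea(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    fun contains(x: Float, y: Float): Boolean = x in left..right && y in top..bottom
}

// Open the detail interface only when the touch lands inside the target area;
// any touch outside it is treated as an ordinary touch on the playing interface.
fun onPlayerTouch(x: Float, y: Float, area: TargetArea, showDetail: () -> Unit) {
    if (area.contains(x, y)) showDetail()
}
```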
In some embodiments, before highlighting the target item in the video in the playing interface of the video, the method further includes:
acquiring video information, wherein the video information comprises the video and target area information, and the target area information represents a target area where the target object in the video is located;
determining the target region in the video based on the target region information.
In some embodiments, before highlighting the target item in the video in the playing interface of the video, the method further includes:
acquiring video information, wherein the video information comprises the video and a detail interface identifier, and the detail interface identifier represents the detail interface;
the interface for displaying the details corresponding to the target object comprises:
and displaying the detail interface based on the detail interface identifier.
In some embodiments, the highlighting the target item in the video in the playing interface of the video includes at least one of:
displaying a contour line along the outline of the target object in the playing interface;
displaying a special effect in a target area where the target object is located in the playing interface;
displaying a special effect on the outline of the target object in the playing interface;
displaying, outside the target area where the target object is located in the playing interface, a prompt mark pointing to the target area;
and displaying a prompt text in the playing interface, wherein the prompt text is used for prompting the touch operation on the target object.
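The listed highlighting options can be pictured as a simple style dispatcher. The sketch below reuses the `TargetArea` type from the earlier sketch, stubs out the actual drawing with print statements, and uses invented names throughout:

```kotlin
// Illustrative enumeration of the five highlighting options listed above.
enum class HighlightStyle { CONTOUR_LINE, AREA_EFFECT, CONTOUR_EFFECT, POINTER_MARK, PROMPT_TEXT }

// A real renderer would draw on each frame; here each branch only describes the effect.
fun highlight(area: TargetArea, styles: Set<HighlightStyle>) {
    for (style in styles) when (style) {
        HighlightStyle.CONTOUR_LINE   -> println("draw a contour line along the outline of $area")
        HighlightStyle.AREA_EFFECT    -> println("play a special effect inside $area")
        HighlightStyle.CONTOUR_EFFECT -> println("play a special effect on the outline of $area")
        HighlightStyle.POINTER_MARK   -> println("draw a prompt mark outside $area pointing at it")
        HighlightStyle.PROMPT_TEXT    -> println("show prompt text suggesting a touch on the object")
    }
}
```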
In some embodiments, the displaying, in response to the touch operation on the target item, a detail interface corresponding to the target item includes:
displaying the detail interface corresponding to the target object in response to a click operation on the target object; or,
displaying the detail interface corresponding to the target object in response to a long-press operation on the target object; or,
displaying the detail interface corresponding to the target object in response to a slide operation on the target object.
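One way to distinguish the three triggering gestures is by touch duration and movement; the sketch below is a hedged illustration, and the 24 px slop and 500 ms hold thresholds are assumed values, not figures from the patent:

```kotlin
enum class TouchKind { CLICK, LONG_PRESS, SLIDE }

// Classify a completed touch from its duration and total movement.
fun classifyTouch(downTimeMs: Long, upTimeMs: Long, movedPx: Float): TouchKind = when {
    movedPx > 24f                -> TouchKind.SLIDE       // moved beyond the slop threshold
    upTimeMs - downTimeMs > 500L -> TouchKind.LONG_PRESS  // held longer than the hold threshold
    else                         -> TouchKind.CLICK
}
```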
According to a second aspect of the embodiments of the present disclosure, there is provided a video distribution method, including:
determining target area information based on a video to be released, wherein the target area information represents a target area where a target object in the video is located;
determining a detail interface identifier of the video, wherein the detail interface identifier represents a detail interface corresponding to the target object;
and issuing video information, wherein the video information comprises the video, the target area information and the detail interface identification, and the video information indicates that the detail interface is displayed under the condition that the touch operation is detected in the target area.
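The published "video information" bundles the video, the target area information, and the detail interface identifier. One possible payload shape is sketched below, reusing the `TargetArea` type from the first sketch; all field names are assumptions:

```kotlin
// Hypothetical shape of the published video information. The detail interface
// identifier is assumed here to be a link the player can open; the patent only
// requires some identifier that represents the detail interface.
data class VideoInfo(
    val videoUrl: String,                  // the video to be played
    val targetAreas: Map<Int, TargetArea>, // frame index -> target area in that frame
    val detailInterfaceId: String          // e.g. a link to the item's detail page
)
```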
In some embodiments, the determining target area information based on the video to be published includes:
determining the target area annotated in at least one video frame of the video based on an annotation operation detected in the at least one video frame;
determining the target area information based on a location of the target area in the at least one video frame.
In some embodiments, the determining the target region annotated in at least one video frame of the video based on the annotation operation detected in the at least one video frame comprises:
in response to a sliding track detected in at least one video frame of the video, determining an area surrounded by the sliding track as the target area.
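Treating the recorded sliding track as a closed polygon gives "the area surrounded by the sliding track" a concrete meaning. The standard ray-casting point-in-polygon test below is one plausible realization, not necessarily the patent's method:

```kotlin
data class Point(val x: Float, val y: Float)

// Standard ray-casting test over the recorded sliding track, with the track
// treated as a closed polygon (the last point connects back to the first).
fun insideTrack(track: List<Point>, x: Float, y: Float): Boolean {
    var inside = false
    var j = track.lastIndex
    for (i in track.indices) {
        val a = track[i]
        val b = track[j]
        if ((a.y > y) != (b.y > y) &&
            x < (b.x - a.x) * (y - a.y) / (b.y - a.y) + a.x) {
            inside = !inside
        }
        j = i
    }
    return inside
}
```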
In some embodiments, the determining target area information based on the video to be published includes:
acquiring a target article category corresponding to the video;
and identifying the video based on the target article type to obtain the target area information, wherein the target area information represents the area of the article belonging to the target article type in the video.
In some embodiments, the obtaining the target item category corresponding to the video includes:
acquiring a reference image including the target item, and identifying the reference image to obtain the target item category; or,
determining an item category selected from a plurality of preset item categories as the target item category; or,
in response to a sliding track detected in any video frame of the video, identifying an area enclosed by the sliding track to obtain the target item category.
In some embodiments, the determining target area information based on the video to be published includes:
and determining the target area information based on the selected video segments in the video.
In some embodiments, after determining the target area information based on the video to be published, the method further includes:
determining a highlighting style of the target region;
and editing the target area by adopting the highlighting style so as to display the target area in the video according to the highlighting style.
According to a third aspect of the embodiments of the present disclosure, there is provided a video editing method, including:
receiving a video identification request sent by a terminal, wherein the video identification request carries a video to be identified;
identifying the video based on the target article type corresponding to the video to obtain target area information, wherein the target area information represents an area where an article belonging to the target article type is located in the video;
and after the target area in the video is marked based on the target area information, sending the marked video to the terminal, or sending the target area information to the terminal.
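A hedged sketch of the recognition step this aspect describes follows; the request and result shapes are assumptions, `detect` stands in for whatever recognition model the server actually runs, and `TargetArea` is the type from the first sketch:

```kotlin
// Assumed request shape: the video's frames plus the target item category.
data class Frame(val index: Int)  // pixel data omitted for brevity
data class RecognitionRequest(val frames: List<Frame>, val targetCategory: String)
data class RecognitionResult(val frameIndex: Int, val area: TargetArea)

// Identify, frame by frame, the area where an item of the target category
// appears; frames without such an item contribute no result.
fun recognize(req: RecognitionRequest, detect: (Frame, String) -> TargetArea?): List<RecognitionResult> =
    req.frames.mapNotNull { frame ->
        detect(frame, req.targetCategory)?.let { RecognitionResult(frame.index, it) }
    }
```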
In some embodiments, the method further comprises:
acquiring a reference image carried by the video identification request, and identifying the reference image to obtain the target item category; or,
acquiring the target item category carried by the video identification request; or,
where the video includes a video frame marked with the target area, identifying the marked target area to obtain the target item category.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an interface display apparatus, the apparatus including:
the video playing unit is configured to perform highlighting on a target object in a video in a playing interface of the video;
The detail interface display unit is configured to, in response to a touch operation on the target object, display a detail interface corresponding to the target object, wherein the detail interface comprises object information corresponding to the target object.
In some embodiments, the detail interface display unit is configured to perform, in response to a touch operation in the play interface, a display of a detail interface corresponding to the target item when the touch operation is located in a target area where the target item is located.
In some embodiments, the apparatus further comprises:
an information acquisition unit configured to perform acquisition of video information, the video information including the video and target area information, the target area information indicating a target area in the video where the target item is located;
a region determining unit configured to perform determining the target region in the video based on the target region information.
In some embodiments, the apparatus further comprises:
an information acquisition unit configured to perform acquisition of video information, the video information including the video and a detail interface identifier, the detail interface identifier representing the detail interface;
the detail interface display unit is configured to display the detail interface based on the detail interface identifier.
In some embodiments, the video playback unit is configured to perform at least one of:
displaying a contour line along the outline of the target object in the playing interface;
displaying a special effect in a target area where the target object is located in the playing interface;
displaying a special effect on the outline of the target object in the playing interface;
displaying, outside the target area where the target object is located in the playing interface, a prompt mark pointing to the target area;
and displaying a prompt text in the playing interface, wherein the prompt text is used for prompting the touch operation on the target object.
In some embodiments, the details interface display unit is configured to perform:
displaying a detail interface corresponding to the target object in response to a click operation on the target object; or,
displaying a detail interface corresponding to the target object in response to a long-press operation on the target object; or,
displaying a detail interface corresponding to the target object in response to a slide operation on the target object.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a video distribution apparatus including:
the information determining unit is configured to determine target area information based on a video to be published, wherein the target area information represents a target area where a target object in the video is located;
an identification determining unit configured to determine a detail interface identification of the video, wherein the detail interface identification represents a detail interface corresponding to the target item;
an information publishing unit configured to perform publishing video information, the video information including the video, the target area information, and the detail interface identification, the video information indicating that the detail interface is displayed if a touch operation is detected in the target area.
In some embodiments, the information determining unit includes:
a region determination subunit configured to perform a determination of the target region annotated in at least one video frame of the video based on an annotation operation detected in the at least one video frame;
an information determination subunit configured to perform determining the target area information based on a position of the target area in the at least one video frame.
In some embodiments, the region determining subunit is configured to perform:
in response to a sliding track detected in at least one video frame of the video, determining an area surrounded by the sliding track as the target area.
In some embodiments, the information determining unit includes:
the category acquisition subunit is configured to execute acquisition of a target item category corresponding to the video;
and the information determining subunit is configured to perform identification on the video based on the target item category to obtain the target area information, wherein the target area information represents an area where an item belonging to the target item category is located in the video.
In some embodiments, the category obtaining subunit is configured to perform:
acquiring a reference image including the target item, and identifying the reference image to obtain the target item category; or,
determining an item category selected from a plurality of preset item categories as the target item category; or,
in response to a sliding track detected in any video frame of the video, identifying an area enclosed by the sliding track to obtain the target item category.
In some embodiments, the information determining unit is configured to perform determining the target area information based on a selected video segment of the video.
In some embodiments, the apparatus further comprises:
an editing unit configured to perform determining a highlighting style of the target region; and editing the target area by adopting the highlighting style so as to display the target area in the video according to the highlighting style.
According to a sixth aspect of the embodiments of the present disclosure, there is provided a video editing apparatus comprising:
The request receiving unit is configured to receive a video identification request sent by a terminal, the video identification request carrying a video to be identified;
the area identification unit is configured to identify the video based on a target object type corresponding to the video to obtain target area information, and the target area information represents an area where an object belonging to the target object type in the video is located;
an information sending unit configured to send the marked video to the terminal or send the target area information to the terminal after marking the target area in the video based on the target area information.
In some embodiments, the apparatus further comprises a category acquisition unit configured to perform:
acquiring a reference image carried by the video identification request, and identifying the reference image to obtain the target item category; or,
acquiring the target item category carried by the video identification request; or,
where the video includes a video frame marked with the target area, identifying the marked target area to obtain the target item category.
According to a seventh aspect of an embodiment of the present disclosure, there is provided a terminal, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the interface display method according to the first aspect or the video distribution method according to the second aspect.
According to an eighth aspect of embodiments of the present disclosure, there is provided a server including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video editing method as described in the third aspect above.
According to a ninth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, instructions in which, when executed by a processor, implement the interface display method according to the first aspect, the video distribution method according to the second aspect, or the video editing method according to the third aspect.
According to a tenth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the interface display method according to the first aspect, the video distribution method according to the second aspect, or the video editing method according to the third aspect.
The embodiment of the disclosure provides an interface display scheme, which provides a new man-machine interaction mode, wherein when a video is played, a target object in the video is highlighted to attract a user to touch the target object, and when touch operation on the target object is detected, a detail interface corresponding to the target object is displayed, so that the user can further know the target object through the detail interface.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a schematic diagram of an implementation environment, shown in accordance with an exemplary embodiment;
FIG. 2 is a flow diagram illustrating a method of video distribution in accordance with an exemplary embodiment;
FIG. 3 is a flow diagram illustrating another video distribution method in accordance with an illustrative embodiment;
FIG. 4 is a schematic diagram illustrating a video editing interface in accordance with an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating another video editing interface in accordance with an illustrative embodiment;
FIG. 6 is a schematic diagram illustrating a video publication interface in accordance with an illustrative embodiment;
FIG. 7 is a flow diagram illustrating a method of video editing in accordance with an exemplary embodiment;
FIG. 8 is a flow diagram illustrating another method of video editing in accordance with an illustrative embodiment;
FIG. 9 is a flowchart illustrating a method of displaying an interface in accordance with an exemplary embodiment;
FIG. 10 is a flow chart illustrating another method of interface display in accordance with an exemplary embodiment;
FIG. 11 is a schematic diagram illustrating a playback interface in accordance with an illustrative embodiment;
FIG. 12 is a schematic diagram illustrating another playback interface in accordance with an illustrative embodiment;
FIG. 13 is a schematic illustration of yet another playback interface shown in accordance with an exemplary embodiment;
FIG. 14 is a schematic diagram of yet another playback interface shown in accordance with an illustrative embodiment;
FIG. 15 is a schematic diagram illustrating yet another playback interface in accordance with an illustrative embodiment;
FIG. 16 is a block diagram illustrating the structure of an interface display apparatus according to an exemplary embodiment;
FIG. 17 is a block diagram illustrating the structure of a video distribution apparatus according to an exemplary embodiment;
FIG. 18 is a block diagram illustrating the structure of a video editing apparatus according to an exemplary embodiment;
FIG. 19 is a block diagram illustrating the structure of a terminal according to one exemplary embodiment;
FIG. 20 is a block diagram illustrating a configuration of a server in accordance with an illustrative embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the disclosure, as detailed in the appended claims.
The user information to which the present disclosure relates may be information authorized by the user or sufficiently authorized by each party.
FIG. 1 is a schematic diagram of an implementation environment, shown in accordance with an exemplary embodiment. Referring to fig. 1, the embodiment includes a first terminal 101, a server 102, and a second terminal 103. The first terminal 101, the second terminal 103 and the server 102 are connected through a wireless or wired network. Illustratively, the first terminal 101 and the second terminal 103 are laptops, mobile phones, tablet computers or other terminals. Illustratively, the server 102 is a background server of an application or a cloud server providing services such as cloud computing and cloud storage.
The first terminal 101 is used by a user who provides an item and publishes a video that includes the item. The second terminal 103 is a terminal that plays the video. In some embodiments, the first terminal 101 and the second terminal 103 have installed a target application served by the server 102, through which the first terminal 101 publishes videos and the second terminal 103 plays them. The application has functions such as video playing, video publishing, and video editing. For example, the target application is a video application.
In the embodiment of the present disclosure, the first terminal 101 edits a video to be distributed, uploads the edited video to the server 102, and the server 102 distributes the video, so that the second terminal 103 obtains the video distributed by the server 102 and plays the video for the user to watch.
After the implementation environment of the embodiment of the present disclosure is described, an application scenario of the embodiment of the present disclosure will be described below with reference to the implementation environment.
For example, a user who provides an item uses the first terminal to upload a video to be published that includes the item. The first terminal edits the video to be published with the video editing method provided by the embodiments of the present disclosure to obtain an edited video, and then publishes the edited video through the server. The second terminal acquires the published video and plays it for its user to watch; during playback, the second terminal displays a detail interface for the item with the interface display method provided by the embodiments of the present disclosure, so that the user can learn more about the item.
The method provided by the embodiment of the present disclosure can also be applied in other scenarios, which is not limited by the embodiment of the present disclosure.
Fig. 2 is a flowchart illustrating a video distribution method according to an exemplary embodiment, which is performed by the first terminal, as shown in fig. 2, and includes the following steps.
In step 201, the first terminal determines target area information based on a video to be published, where the target area information indicates a target area where a target article in the video is located.
The video to be released is the video uploaded by the account logged in by the first terminal. In some embodiments, the first terminal is installed with a target application having video editing, video publishing, and the like functions. When a user wants to release a video, the user triggers the first terminal to run the target application, the target application logs in an account, and the video to be released is uploaded in the target application through the account.
The account represents the user of the first terminal and is used to distinguish different users. The account is associated with at least one item, and accounts other than this one can trade with it for those items. At least one video frame of the video includes a target item; that is, the picture content of at least one video frame contains the target item, which is any item associated with the account, for example a mobile phone, a hat, or clothing. In the embodiments of the present disclosure, the video includes the target item, the video is a video recommending the target item, and the target item is the recommended item.
In the embodiment of the disclosure, after acquiring a video to be published, a first terminal determines target area information of the video. The target area where the target object is located is represented in the form of target area information, for example, the target area information is a coordinate area where the target object is located in the video.
In step 202, the first terminal determines a detail interface identifier of the video, where the detail interface identifier represents a detail interface corresponding to the target item.
Wherein the detail interface identifier comprises a link of the detail interface of the target item or other identifier capable of representing the detail interface. The details interface for the target item includes details of the target item describing the target item and displaying an entry for trading the target item. The content included in the detail interface may be set according to needs, which is not limited in the embodiments of the present disclosure, for example, the detail interface includes information such as a name, a price, a quantity, an image, or a transaction entry of the target item.
In step 203, the first terminal issues video information, where the video information includes a video, target area information, and a detail interface identifier, and the video information indicates that the detail interface is displayed when a touch operation is detected in the target area.
After the first terminal issues the video information, other terminals play the video after acquiring the video information, and when the video is played, the detail interface of the target object is displayed based on the detail interface identifier under the condition that the touch operation on the target area where the target object is located in the video is detected, so that a user watching the video can know the target object.
In the embodiment of the disclosure, based on a video to be published, target area information capable of representing a target area where a target object in the video is located is determined, and then a detail interface identifier representing a detail interface of the target object is acquired, so that sufficient data support is provided for publishing the video, a terminal of the acquired video information can display the video based on the target area information, and the detail interface of the target object can be displayed based on the detail interface identifier, thereby realizing recommendation of the target object, and providing a new man-machine interaction mode for a user watching the video.
Fig. 3 is a flow chart illustrating another video distribution method according to an exemplary embodiment, as shown in fig. 3, performed by a first terminal, including the following steps.
In step 301, a first terminal obtains a video to be distributed, where the video includes a target item.
The video to be published is any video uploaded by the account logged in by the first terminal. In some embodiments, the first terminal is installed with a target application having video editing, video publishing, and the like functions. When a user wants to release a video, the user triggers the first terminal to operate the target application, the target application logs in an account, and the video to be released is uploaded in the target application through the account.
The account represents the user of the first terminal and is used to distinguish different users. The account provides at least one item, and accounts other than this one can trade with it for those items. At least one video frame of the video includes a target item; that is, the picture content of at least one video frame contains the target item, which is any item provided by the account, for example a mobile phone, a hat, or clothing. In the embodiments of the present disclosure, the video includes the target item, the video is a video recommending the target item, and the target item is the recommended item.
In step 302, the first terminal determines target area information based on the video, where the target area information indicates a target area where a target item in the video is located.
In the embodiment of the disclosure, after acquiring a video to be published, a first terminal determines target area information of the video. The target area where the target object is located is represented in the form of target area information, for example, the target area information is a coordinate area where the target object is located in the video.
After acquiring the video to be published, the first terminal displays the video and an editing control used for editing the video. When the user triggers the editing control, the first terminal responds by displaying a video editing interface, which includes a first area in which the video is displayed.
In some embodiments, the method for determining the target area information by the first terminal includes that the first terminal marks an area where a target object in a video is located in a manual marking mode to obtain the target area information, and accordingly, the implementation mode for determining the target area information by the first terminal based on the video to be published includes: determining a target area marked in at least one video frame of the video based on the marking operation detected in the at least one video frame; target area information is determined based on a position of the target area in the at least one video frame.
The video editing interface further includes a second area, which displays the video frames of the video spliced in order of their appearance. A separation line is also displayed at the left edge of the first of these video frames, and the position of the separation line is fixed. The user may slide the video frames toward one side of the second area; in response to the sliding operation, the first terminal shows the frames sliding to that side and displays, in the first area, the video frame currently on the separation line. To annotate a particular video frame, the user slides the frames until that frame rests on the separation line and then marks the video frame displayed in the first area; the first terminal thus detects an annotation operation in the video frame and determines the annotated area as the target area. By the same procedure, the user annotates at least one video frame, and the first terminal determines a target area in each annotated video frame.
For each video frame in at least one video frame, the position of the target area in the video frame may be represented by coordinates of pixel points included in the target area, and the target area information includes coordinates of pixel points on an outline of the target area or coordinates of all pixel points included in the target area. The coordinate system may be set as needed, which is not limited in the embodiments of the present disclosure.
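Under the stated representation, the target area information could be encoded as a frame-indexed list of contour coordinates, for example as follows (a sketch with assumed names; `Point` is the type from the sliding-track sketch above):

```kotlin
// Hypothetical encoding: per annotated frame, the ordered pixel coordinates
// on the contour of the target area.
data class FrameArea(val frameIndex: Int, val outline: List<Point>)

// Target area information for the whole video: one entry per annotated frame.
typealias TargetAreaInfo = List<FrameArea>
```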
For example, referring to fig. 4, the video editing interface 401 includes a first area 402 and a second area 403, the second area 403 displays a plurality of video frames included in a video and a separation line 404, the video frame currently located on the separation line 404 is a second-ordered video frame, the first area 402 displays a video frame of the video frame, and a user can mark the video frame displayed in the first area 402.
The embodiment of the disclosure provides a relatively flexible method for determining target area information, when a user wants to recommend a target object, at least one video frame in a video can be labeled according to the requirement of the user, so that when a labeling operation on the video frame is detected, a labeled area is likely to be an area where the target object that the user wants to recommend is located, therefore, the target area information determined based on the position of the labeled area can represent the target area where the target object is located, the flexibility is relatively high, the determined target area information better meets the requirement of the user, and the user experience is relatively good.
In a possible implementation manner of this embodiment, the annotating operation is a sliding operation, and accordingly, determining the annotated target area in at least one video frame of the video based on the annotation operation detected in the at least one video frame includes: in response to a sliding track detected in at least one video frame of the video, determining an area surrounded by the sliding track as a target area. Wherein the first terminal may or may not keep the slide track in the video.
For example, referring to fig. 4, the video frame displayed in the first area 402 includes the target object 405, and the user can slide along the contour of the target object 405 until the sliding track surrounds the target object.
In the embodiment of the disclosure, for at least one video frame of a video, when a sliding track is detected in the video frame, a region surrounded by the sliding track is likely to be a region where a target article that a user wants to recommend is located, and then the region can be determined as a target region, a human-computer interaction mode is simple and convenient, the determined target region meets the user requirements, and the accuracy is high.
In other implementations, the marking operation may also be a smearing operation, and accordingly, in response to a smearing operation detected in at least one video frame of the video, a smeared area is determined as a target area. In this implementation, the user may paint the area where the target item is located, so that when the first terminal detects the painting operation, the painted area is likely to be the area where the target item recommended by the user is located, and then the area may be determined as the target area.
In other embodiments, the first terminal automatically identifies the video to obtain the target area information, so that user operation is reduced, and the determination efficiency is improved. Correspondingly, the implementation mode of the first terminal for determining the target area information based on the video to be released comprises the following steps: acquiring a target object category corresponding to the video; and identifying the video based on the target object category to obtain target area information, wherein the target area information represents an area where the object belonging to the target object category is located in the video.
The object type is an object type to which the object belongs, and the object type may be set according to needs, for example, an object type such as a mobile phone, a hat, or a glove. Optionally, the video editing interface further includes an identification control, and in response to the identification control being triggered, the first terminal acquires the target item category, so that the video is identified based on the target item category to obtain target area information. The display mode of the identification control may be set as needed, which is not limited in the embodiment of the present disclosure.
After the target object category is obtained, the first terminal identifies each video frame in the video to obtain an identification result of each video frame, and the identification result represents an area where an object belonging to the target object category is located in the video frame. The target area information includes a recognition result corresponding to each video frame. In some embodiments, the first terminal identifies the video through an identification model, which may be a machine learning model or other identification model.
For example, referring to fig. 4, the video editing interface 401 also includes an identification control "identify" that the user can trigger to trigger the first terminal to identify the video.
In the embodiment of the disclosure, the video is identified based on the target item category corresponding to the video, so that the area where the item belonging to the target item category is located can be automatically identified, and since the item belonging to the target item category is likely to be the target item that the user wants to recommend, the area is likely to be the target area, and the accuracy of the target area information is high. Compared with the manual labeling mode provided by the embodiment, the automatic identification mode simplifies the user operation and has higher determination efficiency.
In this embodiment, the process of the first terminal acquiring the target item category corresponding to the video includes any one of the following implementation manners:
the first implementation mode comprises the following steps: the first terminal obtains a reference image including the target object, and identifies the reference image to obtain the category of the target object. Wherein, the reference image is an image uploaded by an account. Optionally, the video editing interface further includes an identification control, the first terminal displays an image uploading control in response to a trigger operation on the identification control, the user triggers the image uploading control, and the first terminal acquires an uploaded reference image in response to the image uploading control being triggered. The display mode of the image upload control may be set as required, which is not limited in the embodiment of the present disclosure.
In the implementation mode, the reference image comprises the target object, so that the reference image is used as a reference to identify the reference image, the identified object type is also the object type to which the target object belongs, the target object type is obtained, and the determined target object type is accurate.
The second implementation mode comprises the following steps: and determining the selected item category in the plurality of preset item categories as a target item category. The plurality of preset item categories may be set as needed, which is not limited in the embodiment of the disclosure, for example, the plurality of preset item categories include item categories such as a mobile phone, a wallet, a computer, a mirror, or a cup. Optionally, the video editing interface further includes an identification control, the first terminal displays a category selection control in response to a trigger operation on the identification control, and displays a category selection interface in response to a trigger operation on the category selection control, where the category selection interface includes a plurality of preset item categories from which a user can select any item category, so that the first terminal determines the selected item category as the target item category. The display mode of the category selection control may be set as needed, which is not limited in the embodiment of the present disclosure.
In this implementation, the selected item category is likely to be the item category to which the target item belongs, and the selected item category can be directly determined as the target item category, so that the operation is simple and convenient.
The third implementation mode comprises the following steps: in response to a sliding track detected in any video frame of the video, identifying the area enclosed by the sliding track to obtain the target item category. Optionally, the video editing interface further includes an identification control. In response to a trigger operation on the identification control, the first terminal displays a user-defined area control; in response to a trigger operation on the user-defined area control, it displays a user-defined interface that includes the video, and detects a sliding track in any video frame of the video. The display mode of the user-defined area control may be set as required, which is not limited in the embodiments of the present disclosure. The user-defined interface is displayed in the same way as the video editing interface, which is not repeated here. This implementation is the same as the manual annotation mode provided in the foregoing embodiment and is not repeated here.
In this implementation manner, the area surrounded by the sliding track is likely to be the area where the target item is located, and the item type obtained through identification is likely to be the item type to which the target item belongs by identifying the area, so that the target item type is obtained, and the determined target item type is more accurate.
For example, referring to fig. 4, a user triggers an identification control "identification" in the video editing interface 401, and displays a selection card 406, where the selection card 406 includes an image upload control "upload image", a category selection control "select category", and a user-defined area control "user-defined", and if the user triggers the image upload control, the image upload interface is displayed, and the user can upload an image in the image upload interface; if the user triggers the category selection control, displaying a category selection interface, wherein the category selection interface comprises a plurality of preset article categories, and the user can select any one of the preset categories in the category selection interface; and if the user triggers the custom area control, displaying a custom interface, wherein the custom interface comprises a video, and the user can label the video in the custom interface.
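The three ways of obtaining the target item category can be modeled as one sealed type, as in the following sketch (all names are assumptions, and `Point` is the type defined earlier; the two recognizer parameters stand in for whatever image and region recognition the application actually performs):

```kotlin
// The three sources of the target item category, modeled as one sealed type.
sealed interface CategorySource
data class FromReferenceImage(val imageUrl: String) : CategorySource                       // first implementation
data class FromPresetSelection(val category: String) : CategorySource                      // second implementation
data class FromCustomRegion(val frameIndex: Int, val track: List<Point>) : CategorySource  // third implementation

fun resolveCategory(
    source: CategorySource,
    recognizeImage: (String) -> String,
    recognizeRegion: (Int, List<Point>) -> String
): String = when (source) {
    is FromReferenceImage  -> recognizeImage(source.imageUrl)
    is FromPresetSelection -> source.category
    is FromCustomRegion    -> recognizeRegion(source.frameIndex, source.track)
}
```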
The foregoing takes the first terminal identifying the video as an example; in other embodiments, the first terminal identifies the video by means of the server to obtain the target area information, for which refer to the embodiments shown in fig. 7 and fig. 8 below, not described again here.
In some embodiments, if not every video frame of the video includes the target item, the target area information need not be determined from the complete video. Accordingly, determining the target area information based on the video to be published includes: determining the target area information based on a selected video segment of the video. The video editing interface includes a first area and a second area; the first area displays the video, and the second area displays the video frames of the video, spliced in their order in the video. The second area also displays a first separation line and a second separation line, set a certain distance apart. The user may slide the video frames toward one side of the second area, and the first terminal displays the sliding effect in response to the sliding operation. The user slides whichever frame is to be the starting frame of the video segment onto the first separation line, and whichever frame is to be the ending frame onto the second separation line.
For example, referring to fig. 5, a first area 502 of the video editing interface 501 displays a video frame, a second area 503 displays a plurality of video frames included in a video, a first partition line 504 and a second partition line 505, the video frame currently located on the first partition line 504 is a video frame ranked first, the video frame located on the second partition line 505 is a video frame ranked fourth, and the selected video clip includes a plurality of video frames including the first video frame to the fourth video frame.
In the embodiment of the disclosure, one video clip is selected from the video, so that the target area information is determined only based on the selected video clip, and the workload is reduced.
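Restricting the work to the selected segment then amounts to filtering frames before recognition, as in this sketch (reusing the `Frame`, `TargetArea`, and `RecognitionResult` types from the earlier recognition sketch):

```kotlin
// Run recognition only over the selected segment rather than the whole video;
// `detect` again stands in for the recognition step.
fun recognizeSegment(
    frames: List<Frame>, startFrame: Int, endFrame: Int,
    detect: (Frame) -> TargetArea?
): List<RecognitionResult> =
    frames.filter { it.index in startFrame..endFrame }
          .mapNotNull { f -> detect(f)?.let { RecognitionResult(f.index, it) } }
```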
In the embodiment of the present disclosure, after step 301, the first terminal edits the determined target area, and accordingly, the first terminal continues to perform the operations of steps 303 to 304.
In step 303, the first terminal determines a highlighting style of the target region.
Optionally, after the target region information is determined, the first terminal displays a plurality of preset highlighting styles, the user selects any one of the preset highlighting styles, and the first terminal determines the selected preset highlighting style as the highlighting style of the target region. The plurality of preset highlighting patterns may be set as needed, which is not limited in the embodiments of the present disclosure.
In step 304, the first terminal edits the target region using the highlight style, so that the target region is displayed in the video according to the highlight style.
The first terminal edits a target area in the video by adopting a highlight style, and optionally, the first terminal displays the edited video in the first area, so that a user can conveniently view an edited effect.
In the embodiment of the disclosure, the target area is edited by adopting the highlight pattern, so that the released video is the video added with the highlight pattern, support is provided for highlight display of the target object during subsequent video playing, and due to the fact that the target area is edited before the video is released, other terminals acquiring the video do not need to edit the video, and preparation operation for playing the video is simplified.
In other embodiments, after determining the target area information, the first terminal does not perform steps 303 to 304, but performs step 305, that is, before playing the video, other terminals that have acquired the video determine the highlight pattern of the target area, and edit the target area by using the highlight pattern. For example, the terminal may determine a highlight style corresponding to the account according to the currently logged-in account, so as to implement personalized display.
In step 305, the first terminal determines a detail interface identifier of the video, where the detail interface identifier represents a detail interface corresponding to the target item.
Wherein the detail interface identifier comprises a link of the detail interface of the target item or other identifier capable of representing the detail interface. The details interface for the target item includes details of the target item describing the target item and displaying a portal for transacting the target item. The content included in the detail interface may be set as required, which is not limited in the embodiments of the present disclosure, for example, the detail interface includes information such as a name, a price, a stock quantity, an image or a transaction entrance of the target item.
In some embodiments, the video editing interface further includes an editing completion control. In response to a trigger operation on the editing completion control, the first terminal displays a video publishing interface that includes an identifier adding control, which is used to trigger the display of at least one item associated with the account currently logged in on the first terminal. When the user triggers the identifier adding control, the first terminal displays the at least one item associated with the account; the user selects one of them, and the first terminal determines the selected item as the target item and thereby obtains the detail interface identifier corresponding to the target item. Correspondingly, the first terminal stores a detail interface identifier for each of the at least one item and retrieves from these the detail interface identifier corresponding to the target item.
For example, referring to fig. 4, the video editing interface 401 further includes an editing completion control "complete". After editing the video, the user triggers this control, and the first terminal displays a video publishing interface; see the video publishing interface 601 shown in fig. 6, which includes an identifier adding control "add". When the user triggers it, the first terminal displays at least one item associated with the currently logged-in account (taking 3 as an example): item 1, item 2, and item 3. If the user selects item 2, the first terminal determines item 2 as the target item.
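A minimal sketch of storing and resolving these identifiers follows; the item names, the link format, and the AssociatedItems helper are hypothetical, introduced only for illustration.

```kotlin
// Assumed shape of a detail interface identifier: here, simply a link.
data class DetailInterfaceId(val link: String)

// Hypothetical store of the logged-in account's associated items and their identifiers.
class AssociatedItems(private val detailIdsByItem: Map<String, DetailInterfaceId>) {
    fun items(): Set<String> = detailIdsByItem.keys
    fun detailIdFor(itemName: String): DetailInterfaceId? = detailIdsByItem[itemName]
}

fun main() {
    val associated = AssociatedItems(
        mapOf(
            "item 1" to DetailInterfaceId("https://example.com/items/1"),
            "item 2" to DetailInterfaceId("https://example.com/items/2"),
            "item 3" to DetailInterfaceId("https://example.com/items/3"),
        )
    )
    // The user selects item 2; the first terminal resolves its detail interface identifier.
    println(associated.detailIdFor("item 2"))
}
```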
In step 306, the first terminal publishes video information, where the video information includes the video, the target area information, and the detail interface identifier, and the video information indicates that the detail interface is displayed when a touch operation is detected in the target area.
After determining the target area information and the detail interface identifier of the video, the first terminal publishes the video information, so that other terminals can acquire it and play the video based on it. One implementation of publishing the video information is as follows: the first terminal sends the video information to the server, and the server receives and distributes it. In some embodiments, the video publishing interface further includes a publishing control, which the user triggers when wanting to publish the video; the first terminal publishes the video information in response to the publishing control being triggered.
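The published payload thus bundles three pieces of data. A sketch of that payload and the hand-off to the server follows; the field names, the time-interval representation of the target area, and the PublishServer interface are assumptions of this sketch.

```kotlin
// Assumed representation of the target area: a normalized rectangle valid over a time interval.
data class TargetAreaInfo(
    val startMs: Long, val endMs: Long,    // when the area is on screen
    val left: Float, val top: Float,       // normalized coordinates (0..1)
    val right: Float, val bottom: Float
)

data class VideoInformation(
    val videoUri: String,                  // the video itself, here by reference
    val targetAreas: List<TargetAreaInfo>, // the target area information
    val detailInterfaceId: String          // e.g. a link to the detail interface
)

// Hypothetical server interface that receives and distributes the payload.
interface PublishServer { fun distribute(info: VideoInformation) }

fun main() {
    val info = VideoInformation(
        videoUri = "content://videos/draft-1",
        targetAreas = listOf(TargetAreaInfo(0, 5_000, 0.2f, 0.3f, 0.6f, 0.8f)),
        detailInterfaceId = "https://example.com/items/2"
    )
    val server = object : PublishServer {
        override fun distribute(info: VideoInformation) = println("distributing $info")
    }
    server.distribute(info)
}
```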
After the first terminal publishes the video information, other terminals acquire it and play the video. During playback, when a touch operation is detected on the target area where the target item is located, the detail interface of the target item is displayed based on the detail interface identifier, so that a user watching the video can learn about the target item.
In the embodiment of the disclosure, target area information representing the target area where the target item in the video is located is determined based on the video to be published, and a detail interface identifier representing the detail interface of the target item is then acquired. This provides sufficient data support for publishing the video: a terminal that acquires the video information can display the video based on the target area information and can display the detail interface of the target item based on the detail interface identifier, thereby recommending the target item and providing a new human-computer interaction mode for users watching the video.
In the embodiments shown in fig. 3, the first terminal itself identifies the video based on the target item category when determining the target area information. In other embodiments, the first terminal delegates this identification to the server, which saves the first terminal's computing resources and reduces its power consumption. Accordingly, the video editing process on the server is explained below.
Fig. 7 is a flowchart illustrating a video editing method according to an exemplary embodiment. As shown in fig. 7, the method is performed by a server and includes the following steps.
In step 701, a server receives a video identification request sent by a terminal, where the video identification request carries a video to be identified.
The terminal here is the first terminal that publishes the video. In some embodiments, after acquiring a video to be published, the first terminal sends a video identification request to the server to request that the server identify the video.
In step 702, the server identifies the video based on the target item category corresponding to the video to obtain target area information, where the target area information indicates the area where an item belonging to the target item category is located in the video.
The server first determines the target item category and then identifies the video based on that category.
In step 703, the server labels the target area in the video based on the target area information and sends the labeled video to the terminal, or sends the target area information to the terminal.
The server may directly send the target area information to the first terminal; the first terminal then labels the target area in the video based on that information and displays the labeled video, so that the user can conveniently check the labeling effect. Alternatively, the server may label the target area in the video itself and send the labeled video to the first terminal; the first terminal then only needs to display the labeled video, which simplifies its operation.
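A sketch of this server-side flow follows, assuming some object detector is available; the ItemDetector interface and the labeling step are hypothetical stand-ins for whatever recognition model and video pipeline an implementation actually uses.

```kotlin
// A detected target area in one frame of the video.
data class Box(val frameIndex: Int, val left: Float, val top: Float, val right: Float, val bottom: Float)

// Hypothetical detector: returns, per frame, boxes around items of the given category.
interface ItemDetector {
    fun detect(videoUri: String, itemCategory: String): List<Box>
}

class VideoIdentificationService(private val detector: ItemDetector) {
    // Step 702: identify the video based on the target item category.
    fun identify(videoUri: String, itemCategory: String): List<Box> =
        detector.detect(videoUri, itemCategory)

    // Step 703, first branch: label the target areas in the video. The actual
    // drawing and re-encoding are elided; this sketch only marks the step.
    fun labelVideo(videoUri: String, areas: List<Box>): String =
        "$videoUri#labeled(${areas.size} areas)"
}
```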
In the embodiment of the disclosure, the terminal sends a video identification request to the server, and the server identifies the video to obtain the target area information or the labeled video, which saves the computing resources of the terminal that publishes the video.
Fig. 8 is a flowchart illustrating another video editing method according to an exemplary embodiment. As shown in fig. 8, the method is implemented through interaction between a server and a first terminal, and includes the following steps.
In step 801, a first terminal acquires a video to be identified.
The implementation of this step is the same as that of step 301 and is not described herein again.
In step 802, a first terminal sends a video identification request to a server, where the video identification request carries a video.
After acquiring the video, the first terminal displays the video and an editing control used to edit it. When the user triggers the editing control, the first terminal displays a video editing interface in response; the video editing interface includes the video and an identification control, and in response to the identification control being triggered, the first terminal sends the video identification request to the server.
In step 803, the server receives a video identification request transmitted by the first terminal.
In step 804, the server identifies the video based on the target item category corresponding to the video to obtain target area information, where the target area information indicates an area where an item belonging to the target item category is located in the video.
The server first determines the target item category and then identifies the video based on that category.
In some embodiments, the server obtains the target item category in any one of the following implementations (a combined sketch follows the third implementation):
The first implementation: the server acquires a reference image carried by the video identification request and identifies the reference image to obtain the target item category. The reference image is an image uploaded by the account logged in on the first terminal and includes the target item. Optionally, for the process of the first terminal acquiring the reference image, refer to the embodiment shown in fig. 3, which is not described herein again.
In this implementation, the first terminal sends the video and the reference image to the server together. Because the reference image includes the target item, the server uses it as a reference: the item category identified from the reference image is the category to which the target item belongs, so the determined target item category is accurate.
The second implementation: the server acquires the target item category carried by the video identification request. That is, the video identification request carries the target item category already determined by the first terminal, and the server acquires it directly. Optionally, for the process of the first terminal acquiring the target item category, refer to the embodiment shown in fig. 3, which is not described herein again.
In this implementation, the video identification request carries the target item category determined by the first terminal, so the server acquires it directly, which is simple and convenient.
The third implementation: the video includes a video frame with a labeled target area, and the server identifies the labeled target area to obtain the target item category. The user can label one video frame in the video to obtain a frame with a labeled target area; the first terminal then sends the video containing this frame to the server, and the server uses the labeled frame as a reference. Optionally, for the process of the first terminal determining the labeled video frame, refer to the embodiment shown in fig. 3, which is not described herein again.
In this implementation, the video sent to the server by the first terminal includes a video frame with a labeled target area, so the server can identify the labeled target area in that frame. The identified item category is the category to which the target item belongs, so the determined target item category is more accurate.
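The three sources of the target item category can be summarized in one dispatch, sketched below; the type names and the classifier functions are illustrative, and the recognition models themselves are left abstract.

```kotlin
// The three ways a video identification request can convey the target item category.
sealed interface CategorySource {
    data class ReferenceImage(val imageBytes: ByteArray) : CategorySource // first implementation
    data class ExplicitCategory(val category: String) : CategorySource    // second implementation
    data class LabeledFrame(val frameIndex: Int) : CategorySource         // third implementation
}

// Hypothetical recognition models for the two image-based cases.
fun classifyImage(imageBytes: ByteArray): String = TODO("recognition model")
fun classifyLabeledFrame(videoUri: String, frameIndex: Int): String = TODO("recognition model")

fun targetItemCategory(videoUri: String, source: CategorySource): String = when (source) {
    is CategorySource.ReferenceImage -> classifyImage(source.imageBytes)
    is CategorySource.ExplicitCategory -> source.category
    is CategorySource.LabeledFrame -> classifyLabeledFrame(videoUri, source.frameIndex)
}
```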
In step 805, the server transmits target area information to the first terminal.
In step 806, the first terminal receives the target area information.
In step 807, the first terminal determines a detail interface identifier of the video, where the detail interface identifier represents a detail interface corresponding to the target item.
In step 808, the first terminal issues video information, where the video information includes a video, target area information, and a detail interface identifier, and the video information indicates that the detail interface is displayed when a touch operation is detected in the target area.
The implementation of steps 807 to 808 refer to the implementation of steps 305 to 306, and will not be described herein.
In the embodiment of the present disclosure, the server sends the target area information to the first terminal after determining it. In other embodiments, step 805 may be replaced with: the server labels the target area in the video based on the target area information and then sends the labeled video to the terminal; that is, the server marks the corresponding positions in the video frames to obtain the labeled video. Step 806 is then replaced with: the first terminal receives the labeled video. In this way, the first terminal displays the labeled video without labeling it itself, which simplifies the first terminal's operation.
In the embodiment of the disclosure, the first terminal sends a video identification request to the server, and the server identifies the video to obtain the target area information or the labeled video, which saves the computing resources of the terminal that publishes the video.
The above embodiments describe the video editing methods provided by the embodiments of the present disclosure. After the first terminal publishes the video, other terminals may acquire and play it. The following describes the interface display process during video playback, using the interface display method provided by the embodiments of the present disclosure.
Fig. 9 is a flowchart illustrating an interface display method according to an exemplary embodiment. As shown in fig. 9, the method is performed by the second terminal and includes the following steps.
In step 901, the second terminal highlights a target item in the video in a playing interface of the video.
The target item is any item associated with the account that published the video; for example, the target item is an item such as a mobile phone, a hat, or clothing. In the disclosed embodiment, the video includes the target item: the video recommends the target item, and the target item is the recommended item. When playing the video, the second terminal highlights the target item in it, so as to attract the user's attention.
In some embodiments, the second terminal is installed with a target application, the target application has a video playing function, and when a user wants to watch a video, the user triggers the second terminal to run the target application, so that the second terminal plays the video in the target application.
In step 902, the second terminal displays a detail interface corresponding to the target item in response to the touch operation on the target item, where the detail interface includes item information corresponding to the target item.
When the user wants to learn more about the target item, the user can touch it, thereby triggering the second terminal to display its detail interface. The detail interface of the target item describes the target item and displays an entry for trading it. The content of the detail interface may be set as required, which is not limited in the embodiments of the present disclosure; for example, the detail interface includes information such as the name, price, stock quantity, and images of the target item, or a transaction entry.
The embodiments of the disclosure provide an interface display scheme that offers a new human-computer interaction mode: when a video is played, the target item in it is highlighted to invite the user to touch it, and when a touch operation on the target item is detected, the corresponding detail interface is displayed, so that the user can learn more about the target item through that interface.
Fig. 10 is a flowchart illustrating an interface display method according to an exemplary embodiment. As shown in fig. 10, the method is performed by the second terminal and includes the following steps.
In step 1001, the second terminal highlights a target item in the video in a playing interface of the video.
The target item is any item associated with the account that published the video; for example, the target item is an item such as a mobile phone, a hat, or clothing. In the disclosed embodiment, the video includes the target item: the video recommends the target item, and the target item is the recommended item. When playing the video, the second terminal highlights the target item in it, so as to attract the user's attention.
In some embodiments, the second terminal is installed with a target application, the target application has a video playing function, and when a user wants to watch a video, the user triggers the second terminal to run the target application, so that the second terminal plays the video in the target application.
In some embodiments, the video included in the video information published by the first terminal has already been edited with the highlighting style, and the second terminal highlights the target item simply by playing the edited video. In other embodiments, the video has not been edited with the highlighting style, and before playing it the second terminal edits the target area in the video with the highlighting style to obtain an edited video. The highlighting style may be the style corresponding to the logged-in account, so as to implement a personalized display.
In some embodiments, the process of the second terminal determining the target area in the video includes: the second terminal acquires the video information, which includes the video and the target area information representing the target area where the target item in the video is located; and the second terminal determines the target area in the video based on the target area information.
After acquiring the video information, the second terminal stores it, which facilitates subsequent access.
In the embodiment of the disclosure, since the video information includes the target area information, the target area in the video, that is, the position of the target item, can be determined from it. The second terminal can therefore edit the target area with the highlighting style so that the target area is displayed in the video according to that style, which highlights the target item.
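One way the second terminal might model the received target area information, so that the area active at the current playback position can be looked up, is sketched below; the time-interval-plus-rectangle representation is an assumption of this sketch.

```kotlin
// A target area valid over a playback interval, as a normalized rectangle.
data class TargetArea(
    val startMs: Long, val endMs: Long,
    val left: Float, val top: Float,
    val right: Float, val bottom: Float
)

class TargetAreaTrack(private val areas: List<TargetArea>) {
    // The area where the target item sits at the given playback position, if any.
    fun areaAt(positionMs: Long): TargetArea? =
        areas.firstOrNull { positionMs in it.startMs..it.endMs }
}
```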
In some embodiments, the second terminal highlights the target item in the video in the playing interface in at least one of the following ways:
The first way: the second terminal displays a contour line on the contour of the target item in the playing interface. The display style of the contour line may be set as required, which is not limited in the embodiments of the present disclosure; for example, the contour line is a solid black line. For example, referring to the playback interface 1101 shown in fig. 11, a contour line (indicated by a dotted line) is displayed on the contour of the target item 1102.
This display mode is simple and reduces the impact of the highlighting effect on video playback.
The second way: a special effect is displayed over the target area where the target item is located in the playing interface. The display style of the special effect may be set as required, which is not limited in the embodiments of the present disclosure; note, however, that the special effect is transparent so as to reduce occlusion of the target item, letting the user see both the target item and the effect. For example, referring to the playback interface 1201 shown in fig. 12, the target area where the target item 1202 is located is shown with a special effect (shaded).
This display mode is intuitive; the highlighted area is large, and the highlighting effect is strong.
The third way: a special effect is displayed on the contour of the target item in the playing interface. The display style of the special effect may be set as required, which is not limited in the embodiments of the present disclosure; for example, the special effect is a highlight or a light ring. For example, referring to the playback interface 1301 shown in fig. 13, the contour of the target item 1302 is shown with a light-ring effect.
This display mode shows the special effect on the contour, so the target item is not occluded while a strong highlighting effect is still achieved.
The fourth way: a prompt mark pointing at the target area is displayed outside the target area where the target item is located in the playing interface. The display style of the prompt mark may be set as required, which is not limited in the embodiments of the present disclosure; for example, the prompt mark is an arrow. For example, referring to the playing interface 1401 shown in fig. 14, a prompt mark 1403 is displayed beside the target item 1402.
This display mode guides the user with the prompt mark without changing how the target item is originally displayed, making the highlight easy to understand.
The fifth way: a prompt text is displayed in the playing interface, prompting the user to touch the target item. The display style of the prompt text may be set as required, which is not limited in the embodiments of the present disclosure; for example, the prompt text is "click to know details", and it may be displayed near the target item. For example, referring to the playing interface 1501 shown in fig. 15, the playing interface 1501 displays the target item 1502 and the prompt text "click to know details".
This display mode guides the user with the prompt text without changing how the target item is originally displayed, making the highlight easy to understand.
The embodiments of the disclosure thus provide multiple ways to highlight the target item, with a variety of display styles; a combined sketch follows.
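The five ways can be seen as a dispatch over a highlight mode, as sketched below; the Overlay drawing interface is a hypothetical stand-in, since the disclosure does not fix a rendering API.

```kotlin
data class Area(val left: Float, val top: Float, val right: Float, val bottom: Float)

enum class HighlightMode { CONTOUR_LINE, AREA_EFFECT, CONTOUR_EFFECT, POINTER_MARK, PROMPT_TEXT }

// Hypothetical drawing surface; each call corresponds to one of the five ways above.
interface Overlay {
    fun drawContourLine(area: Area)              // way 1: contour line
    fun drawAreaEffect(area: Area)               // way 2: transparent effect over the area
    fun drawContourEffect(area: Area)            // way 3: e.g. light ring on the contour
    fun drawPointerMark(area: Area)              // way 4: mark outside, pointing at the area
    fun drawPromptText(area: Area, text: String) // way 5: prompt text near the item
}

fun highlight(overlay: Overlay, area: Area, mode: HighlightMode) = when (mode) {
    HighlightMode.CONTOUR_LINE -> overlay.drawContourLine(area)
    HighlightMode.AREA_EFFECT -> overlay.drawAreaEffect(area)
    HighlightMode.CONTOUR_EFFECT -> overlay.drawContourEffect(area)
    HighlightMode.POINTER_MARK -> overlay.drawPointerMark(area)
    HighlightMode.PROMPT_TEXT -> overlay.drawPromptText(area, "click to know details")
}
```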
In step 1002, in response to a touch operation in the playing interface, the second terminal displays the detail interface corresponding to the target item when the touch operation is located in the target area where the target item is located, where the detail interface includes item information corresponding to the target item.
When a user watching the video sees the highlighted target item, the user may want to learn about it and therefore touches it; the second terminal then displays the detail interface. In the embodiment of the present disclosure, when the touch operation is not located in the target area, it is not intended to trigger the detail interface, and the second terminal may perform other processing based on it, for example pausing playback or displaying the next video.
In the embodiment of the disclosure, considering that the video may include other content besides the target item, the terminal determines, when a touch operation is detected, whether the operation is located in the target area, and displays the detail interface only if it is. This prevents the detail interface from being displayed due to an accidental touch, so the display behavior better matches the user's intent.
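A sketch of this hit test follows: the detail interface is shown only when the touch falls inside an area that is active at the current playback position; the Region type and the callback parameters are illustrative.

```kotlin
// A target area valid over a playback interval, as a normalized rectangle.
data class Region(
    val startMs: Long, val endMs: Long,
    val left: Float, val top: Float,
    val right: Float, val bottom: Float
)

fun hits(region: Region, positionMs: Long, x: Float, y: Float): Boolean =
    positionMs in region.startMs..region.endMs &&
        x in region.left..region.right &&
        y in region.top..region.bottom

// On touch: show the detail interface only for in-area touches; otherwise fall
// back to the player's default handling (e.g. pause, or show the next video).
fun onTouch(
    regions: List<Region>, positionMs: Long, x: Float, y: Float,
    showDetail: () -> Unit, defaultAction: () -> Unit
) {
    if (regions.any { hits(it, positionMs, x, y) }) showDetail() else defaultAction()
}
```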
In some embodiments, for the implementation of the second terminal determining the target area in the video, refer to step 1001, which is not described herein again.
In some embodiments, the touch operation takes various forms; accordingly, displaying the detail interface corresponding to the target item in response to the touch operation on the target item includes any one of the following: displaying the detail interface in response to a click operation on the target item; or displaying the detail interface in response to a long-press operation on the target item; or displaying the detail interface in response to a sliding operation on the target item.
The click operation is a click operation of a preset number of times, which may be set as needed and is not limited in the embodiments of the present disclosure; for example, the preset number is 1 or 2. The long-press operation is a press lasting a preset duration, which may likewise be set as needed; for example, the preset duration is 0.5 second or 1 second. The sliding operation is an operation of sliding over the target item along a target movement track; the target movement track may also be set as needed, which is not limited in the embodiments of the present disclosure.
In the embodiment of the disclosure, the user can click, long-press, or slide on the target item in the video, thereby triggering the second terminal to display the detail interface of the target item; the touch operations are diverse and convenient to perform.
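The three touch forms could be distinguished by elapsed time and movement, roughly as below; the thresholds are assumptions for illustration (the disclosure only says they are configurable), and this simplified classifier ignores multi-click counting and trajectory matching.

```kotlin
enum class Gesture { CLICK, LONG_PRESS, SLIDE }

// Classify a completed touch from its duration and its total movement.
// 500 ms and 20 px are illustrative thresholds, not values from the disclosure.
fun classify(
    durationMs: Long, movedPx: Float,
    longPressMs: Long = 500, slideThresholdPx: Float = 20f
): Gesture = when {
    movedPx >= slideThresholdPx -> Gesture.SLIDE
    durationMs >= longPressMs -> Gesture.LONG_PRESS
    else -> Gesture.CLICK
}

fun main() {
    println(classify(durationMs = 120, movedPx = 3f))  // CLICK
    println(classify(durationMs = 800, movedPx = 3f))  // LONG_PRESS
    println(classify(durationMs = 300, movedPx = 45f)) // SLIDE
}
```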
In some embodiments, the video information acquired by the second terminal further includes the detail interface identifier, which represents the detail interface. In that case, the second terminal displays the detail interface corresponding to the target item based on the detail interface identifier. The detail interface identifier comprises a link to the detail interface of the target item, or another identifier capable of representing the detail interface.
In the embodiment of the disclosure, since the video information further includes the detail interface identifier, the detail interface can be displayed based on that identifier when a touch operation on the target item is detected, which provides data support for displaying the detail interface.
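Since the identifier can simply be a link, displaying the detail interface can reduce to resolving and opening that link; a sketch follows, where the Navigator and DetailResolver interfaces are stand-ins for whatever in-app routing the target application actually uses.

```kotlin
// Assumed identifier shapes: a direct link, or an opaque id resolved via a lookup.
sealed interface DetailId {
    data class Link(val url: String) : DetailId
    data class Opaque(val id: String) : DetailId
}

interface Navigator { fun openUrl(url: String) }
interface DetailResolver { fun resolveToUrl(id: String): String }

fun showDetailInterface(id: DetailId, navigator: Navigator, resolver: DetailResolver) {
    when (id) {
        is DetailId.Link -> navigator.openUrl(id.url)
        is DetailId.Opaque -> navigator.openUrl(resolver.resolveToUrl(id.id))
    }
}
```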
The embodiments of the disclosure provide an interface display scheme that offers a new human-computer interaction mode: when a video is played, the target item in it is highlighted to invite the user to touch it, and when a touch operation on the target item is detected, the corresponding detail interface is displayed, so that the user can learn more about the target item through that interface.
Fig. 16 is a block diagram illustrating the structure of an interface display apparatus according to an exemplary embodiment. Referring to fig. 16, the apparatus includes:
the video playing unit 1601 is configured to perform highlighting of a target item in a video in a playing interface of the video;
a detail interface display unit 1602, configured to perform, in response to a touch operation on the target item, displaying a detail interface corresponding to the target item, where the detail interface includes item information corresponding to the target item.
In some embodiments, the detail interface display unit 1602 is configured to perform, in response to the touch operation in the play interface, in a case where the touch operation is located in a target area where the target item is located, displaying a detail interface corresponding to the target item.
In some embodiments, the apparatus further comprises:
an information acquisition unit configured to acquire video information, where the video information includes the video and target area information, and the target area information represents the target area where the target item in the video is located;
a region determining unit configured to perform determining a target region in the video based on the target region information.
In some embodiments, the apparatus further comprises:
the information acquisition unit is configured to acquire video information, wherein the video information comprises a video and a detail interface identifier, and the detail interface identifier represents a detail interface;
a detail interface display unit 1602 configured to perform displaying the detail interface based on the detail interface identification.
In some embodiments, video playback unit 1601 is configured to perform at least one of:
displaying a contour line on the contour of the target object in the playing interface;
displaying a special effect in a target area where a target object is located in a playing interface;
displaying a special effect on the outline of the target object in the playing interface;
displaying a prompt mark pointing to a target area outside the target area where a target object is located in a playing interface;
and displaying a prompt text in the playing interface, wherein the prompt text is used for prompting the touch operation on the target object.
In some embodiments, the details interface display unit 1602 is configured to perform:
displaying the detail interface corresponding to the target item in response to a click operation on the target item; or,
displaying the detail interface corresponding to the target item in response to a long-press operation on the target item; or,
displaying the detail interface corresponding to the target item in response to a sliding operation on the target item.
The embodiment of the disclosure provides an interface display device, which provides a new man-machine interaction mode, and when a video is played, a target object in the video is highlighted to attract a user to touch the target object, and when touch operation on the target object is detected, a detail interface corresponding to the target object is displayed, so that the user further knows the target object through the detail interface.
Fig. 17 is a block diagram illustrating the structure of a video publishing apparatus according to an exemplary embodiment. Referring to fig. 17, the apparatus includes:
an information determination unit 1701, configured to determine, based on the video to be published, target area information indicating the target area in the video where the target item is located;
an identification determining unit 1702 configured to perform determining a detail interface identification of the video, where the detail interface identification represents a detail interface corresponding to the target item;
and an information publishing unit 1703 configured to perform publishing of video information, the video information including a video, target area information, and a detail interface identification, the video information indicating that the detail interface is displayed in a case where a touch operation is detected in the target area.
In some embodiments, the information determination unit 1701 includes:
a region determination subunit configured to determine the target region annotated in at least one video frame of the video based on an annotation operation detected in the at least one video frame;
an information determination subunit configured to perform determining target area information based on a position of the target area in the at least one video frame.
In some embodiments, the region determining subunit is configured to perform:
in response to a sliding track detected in at least one video frame of the video, determining an area surrounded by the sliding track as a target area.
In some embodiments, the information determination unit 1701 includes:
a category acquisition subunit configured to acquire the target item category corresponding to the video;
and an information determination subunit configured to identify the video based on the target item category to obtain target area information, where the target area information represents the area where an item belonging to the target item category is located in the video.
In some embodiments, the category acquisition subunit is configured to perform:
acquiring a reference image including the target item, and identifying the reference image to obtain the target item category; or,
determining a selected item category among a plurality of preset item categories as the target item category; or,
identifying, in response to a sliding track detected in any video frame of the video, the area enclosed by the sliding track to obtain the target item category.
In some embodiments, the information determining unit 1701 is configured to determine the target area information based on a selected video segment in the video.
In some embodiments, the apparatus further comprises:
an editing unit configured to perform determining a highlighting style of the target region; and editing the target area by adopting the highlighting style so that the target area is displayed in the video according to the highlighting style.
In the embodiment of the disclosure, target area information representing the target area where the target item in the video is located is determined based on the video to be published, and a detail interface identifier representing the detail interface of the target item is then acquired. This provides sufficient data support for publishing the video: a terminal that acquires the video information can display the video based on the target area information and can display the detail interface of the target item based on the detail interface identifier, thereby recommending the target item and providing a new human-computer interaction mode for users watching the video.
Fig. 18 is a block diagram showing a configuration of a video editing apparatus according to an exemplary embodiment. Referring to fig. 18, the apparatus includes:
a request receiving unit 1801, configured to receive a video identification request sent by a terminal, where the video identification request carries a video to be identified;
an area identification unit 1802, configured to identify the video based on the target item category corresponding to the video to obtain target area information, where the target area information indicates the area where an item belonging to the target item category is located in the video;
an information sending unit 1803, configured to label the target area in the video based on the target area information and then send the labeled video to the terminal, or to send the target area information to the terminal.
In some embodiments, the apparatus further comprises a category acquisition unit configured to perform any one of the following:
acquiring a reference image carried by the video identification request, and identifying the reference image to obtain the target item category; or,
acquiring the target item category carried by the video identification request; or,
identifying, where the video includes a video frame with a labeled target area, the labeled target area to obtain the target item category.
In the embodiment of the disclosure, the terminal sends a video identification request to the server, and the server identifies the video to obtain the target area information or the labeled video, which saves the computing resources of the terminal that publishes the video.
With regard to the apparatuses in the above embodiments, the specific manner in which each unit performs its operations has been described in detail in the embodiments of the related methods, and will not be elaborated here.
Fig. 19 is a block diagram illustrating the structure of a terminal according to an exemplary embodiment. In some embodiments, terminal 1900 is a desktop computer, a notebook computer, a tablet computer, a smartphone, or another terminal. Terminal 1900 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, terminal 1900 includes: a processor 1901 and a memory 1902.
In some embodiments, the processor 1901 includes one or more processing cores, such as a 4-core processor or an 8-core processor. In some embodiments, the processor 1901 is implemented in hardware as at least one of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). In some embodiments, the processor 1901 also includes a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1901 is integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1901 further includes an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
In some embodiments, memory 1902 includes one or more computer-readable storage media, which are non-transitory. In some embodiments, memory 1902 also includes high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1902 is used to store executable instructions for execution by the processor 1901 to implement the interface display method or the video publishing method provided by the method embodiments of the present disclosure.
In some embodiments, terminal 1900 may further optionally include: a peripheral device interface 1903 and at least one peripheral device. In some embodiments, processor 1901, memory 1902, and peripherals interface 1903 are connected via buses or signal lines. In some embodiments, various peripheral devices are connected to peripheral interface 1903 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1904, a display screen 1905, a camera assembly 1906, an audio circuit 1907, a positioning assembly 1908, and a power supply 1909.
The peripheral interface 1903 may be used to connect at least one peripheral associated with an I/O (Input/Output) to the processor 1901 and the memory 1902. In some embodiments, the processor 1901, memory 1902, and peripherals interface 1903 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 1901, the memory 1902 and the peripheral interface 1903 are implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1904 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1904 communicates with a communication network and other communication devices via electromagnetic signals. The rf circuit 1904 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. In some embodiments, the radio frequency circuitry 1904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. In some embodiments, the radio frequency circuitry 1904 communicates with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1904 further includes NFC (Near Field Communication) related circuits, which are not limited by this disclosure.
The display screen 1905 is used to display a UI (User Interface). In some embodiments, the UI includes graphics, text, icons, video, and any combination thereof. When the display screen 1905 is a touch display screen, the display screen 1905 also has the ability to capture touch signals on or above its surface. In some embodiments, the touch signal is input to the processor 1901 as a control signal for processing. In that case, the display screen 1905 is also used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there is one display screen 1905, disposed on the front panel of terminal 1900; in other embodiments, there are at least two display screens 1905, each disposed on a different surface of terminal 1900 or in a folded design; in still other embodiments, the display screen 1905 is a flexible display disposed on a curved or folded surface of terminal 1900. The display screen 1905 may even be arranged as a non-rectangular irregular figure, namely a shaped screen. In some embodiments, the display screen 1905 is made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1906 is used to capture images or video. In some embodiments, camera assembly 1906 includes a front camera and a rear camera. Generally, the front camera is disposed at the front panel of the terminal, and the rear camera is disposed at the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, camera assembly 1906 also includes a flash. In some embodiments, the flash is a single-color-temperature flash; in other embodiments, the flash is a dual-color-temperature flash. The dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and is used for light compensation at different color temperatures.
In some embodiments, the audio circuitry 1907 includes a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input them to the processor 1901 for processing or to the radio frequency circuit 1904 for voice communication. In some embodiments, multiple microphones are provided, each at a different location on terminal 1900, for stereo sound capture or noise reduction. In some embodiments, the microphone is an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1901 or the radio frequency circuit 1904 into sound waves. In some embodiments, the speaker is a conventional membrane speaker; in other embodiments, it is a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert the electrical signal not only into sound waves audible to humans but also into sound waves inaudible to humans for uses such as distance measurement. In some embodiments, the audio circuitry 1907 also includes a headphone jack.
The positioning component 1908 is configured to locate the current geographic location of the terminal 1900 for navigation or LBS (Location Based Service). In some embodiments, the positioning component 1908 is a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 1909 is used to provide power to the various components in terminal 1900. In some embodiments, power source 1909 is alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1909 includes a rechargeable battery, the rechargeable battery is a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery is also used to support fast charge technology.
In some embodiments, terminal 1900 also includes one or more sensors 1910. The one or more sensors 1910 include, but are not limited to: acceleration sensor 1911, gyro sensor 1912, pressure sensor 1913, optical sensor 1914, and proximity sensor 1915.
In some embodiments, acceleration sensor 1911 detects acceleration in three coordinate axes of a coordinate system established with terminal 1900. For example, the acceleration sensor 1911 is used to detect components of the gravitational acceleration in three coordinate axes. In some embodiments, the processor 1901 controls the display screen 1905 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1911. In some embodiments, the acceleration sensor 1911 is also used for collection of motion data of a game or user.
In some embodiments, gyroscope sensor 1912 detects the body orientation and rotation angle of terminal 1900, and gyroscope sensor 1912 cooperates with acceleration sensor 1911 to acquire the 3D movement of terminal 1900 by the user. The processor 1901 can implement the following functions according to the data collected by the gyro sensor 1912: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization while shooting, game control, and inertial navigation.
In some embodiments, pressure sensors 1913 are disposed on the side frame of terminal 1900 and/or beneath the display screen 1905. When the pressure sensor 1913 is disposed on the side frame of terminal 1900, the user's holding signal on the terminal 1900 can be detected, and the processor 1901 performs left- and right-hand recognition or shortcut operations based on the holding signal acquired by the pressure sensor 1913. When the pressure sensor 1913 is disposed at the lower layer of the display screen 1905, the processor 1901 controls the operability controls on the UI according to the user's pressure operation on the display screen 1905. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 1914 is used to collect the ambient light intensity. In one embodiment, the processor 1901 controls the display brightness of the display screen 1905 based on the ambient light intensity collected by the optical sensor 1914. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1905 is increased; when the ambient light intensity is low, the display brightness of the display screen 1905 is adjusted down. In another embodiment, the processor 1901 also dynamically adjusts the shooting parameters of the camera assembly 1906 based on the intensity of ambient light collected by the optical sensor 1914.
Proximity sensor 1915, also referred to as a distance sensor, is typically disposed on the front panel of terminal 1900. Proximity sensor 1915 is used to capture the distance between the user and the front face of terminal 1900. In one embodiment, when proximity sensor 1915 detects that the distance between the user and the front surface of terminal 1900 gradually decreases, the processor 1901 controls the display 1905 to switch from the bright-screen state to the screen-off state; when proximity sensor 1915 detects that the distance between the user and the front surface of terminal 1900 gradually increases, the processor 1901 controls the display 1905 to switch from the screen-off state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 19 is not limiting of terminal 1900 and can include more or fewer components than shown, or combine certain components, or employ a different arrangement of components.
Fig. 20 is a block diagram illustrating a structure of a server according to an exemplary embodiment, where the server 2000 may have a relatively large difference due to different configurations or performances, and may include one or more processors (CPUs) 2001 and one or more memories 2002, where the memory 2002 stores at least one executable instruction, and the at least one executable instruction is loaded and executed by the processor 2001 to implement the video editing method provided by the foregoing method embodiment. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface, so as to perform input/output, and the server may also include other components for implementing the functions of the device, which are not described herein again.
In an exemplary embodiment, there is also provided a computer-readable storage medium, such as a memory, comprising instructions executable by a processor to perform the interface display method, the video publishing method, or the video editing method in the above method embodiments. In some embodiments, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided, which includes a computer program that, when executed by a processor, implements the interface display method, the video publishing method, or the video editing method in the above method embodiments.
In some embodiments, a computer program according to embodiments of the present disclosure may be deployed to be executed on one electronic device, on a plurality of electronic devices located at one site, or on a plurality of electronic devices distributed across a plurality of sites and interconnected by a communication network; the electronic devices distributed across sites and interconnected by a communication network may constitute a blockchain system. The electronic device may be provided as a terminal or a server.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (15)

1. An interface display method, comprising:
highlighting a target item in a video in a playing interface of the video;
displaying, in response to a touch operation on the target item, a detail interface corresponding to the target item, wherein the detail interface comprises item information corresponding to the target item.
2. The method according to claim 1, wherein the displaying a detail interface corresponding to the target item in response to the touch operation on the target item comprises:
in response to the touch operation in the playing interface, displaying the detail interface corresponding to the target item when the touch operation is located in a target area where the target item is located.
3. The method according to claim 1, wherein before highlighting the target item in the video in the playing interface of the video, the method further comprises:
acquiring video information, wherein the video information comprises the video and a detail interface identifier, and the detail interface identifier represents the detail interface;
the interface for displaying the details corresponding to the target object comprises:
and displaying the detail interface based on the detail interface identification.
4. The method according to claim 1, wherein the highlighting the target item in the video in the playing interface of the video comprises at least one of the following:
displaying a contour line on the contour of the target object in the playing interface;
displaying a special effect in a target area where the target object is located in the playing interface;
displaying a special effect on the outline of the target object in the playing interface;
displaying a prompt mark pointing to the target area outside the target area where the target object is located in the playing interface;
and displaying a prompt text in the playing interface, wherein the prompt text is used for prompting the touch operation on the target object.
5. A video publishing method, comprising:
determining target area information based on a video to be released, wherein the target area information represents a target area where a target object in the video is located;
determining a detail interface identifier of the video, wherein the detail interface identifier represents a detail interface corresponding to the target object;
and publishing video information, wherein the video information comprises the video, the target area information, and the detail interface identification, and the video information indicates that the detail interface is displayed when a touch operation is detected in the target area.
6. The method according to claim 5, wherein the determining target area information based on the video to be distributed comprises:
determining the target area annotated in at least one video frame of the video based on an annotation operation detected in the at least one video frame;
determining the target area information based on a location of the target area in the at least one video frame.
7. The method of claim 6, wherein the determining the target region labeled in at least one video frame of the video based on the labeling operation detected in the at least one video frame comprises:
in response to a sliding track detected in at least one video frame of the video, determining an area surrounded by the sliding track as the target area.
8. A video editing method, comprising:
receiving a video identification request sent by a terminal, wherein the video identification request carries a video to be identified;
identifying the video based on the target article type corresponding to the video to obtain target area information, wherein the target area information represents an area where an article belonging to the target article type is located in the video;
and after the target area in the video is marked based on the target area information, sending the marked video to the terminal, or sending the target area information to the terminal.
9. An interface display apparatus, the apparatus comprising:
the video playing unit is configured to perform highlighting on a target object in a video in a playing interface of the video;
a detail interface display unit configured to display, in response to a touch operation on the target item, a detail interface corresponding to the target item, wherein the detail interface comprises item information corresponding to the target item.
10. A video publishing apparatus, characterized in that the apparatus comprises:
the information determining unit is configured to determine target area information based on a video to be published, wherein the target area information represents a target area where a target object in the video is located;
an identification determining unit configured to perform determination of a detail interface identification of the video, where the detail interface identification represents a detail interface corresponding to the target item;
an information publishing unit configured to perform publishing video information, the video information including the video, the target area information, and the detail interface identification, the video information indicating that the detail interface is displayed if a touch operation is detected in the target area.
11. A video editing apparatus, characterized in that the apparatus comprises:
a request receiving unit configured to receive a video identification request sent by a terminal, wherein the video identification request carries a video to be identified;
an area identification unit configured to identify the video based on a target object type corresponding to the video to obtain target area information, wherein the target area information represents an area where an object belonging to the target object type is located in the video;
an information sending unit configured to send the marked video to the terminal or send the target area information to the terminal after the target area in the video is marked based on the target area information.
12. A terminal, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the interface display method of any one of claims 1 to 4, or the processor is configured to execute the instructions to implement the video publishing method of any one of claims 5 to 7.
13. A server, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the video editing method of claim 8.
14. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor, implement the interface display method of any one of claims 1 to 4, or the video publishing method of any one of claims 5 to 7, or the video editing method of claim 8.
15. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the interface display method of any one of claims 1 to 4, or the video publishing method of any one of claims 5 to 7, or the video editing method of claim 8.
CN202210945497.9A 2022-08-08 2022-08-08 Interface display method, video publishing method, video editing method and device Pending CN115334346A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210945497.9A CN115334346A (en) 2022-08-08 2022-08-08 Interface display method, video publishing method, video editing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210945497.9A CN115334346A (en) 2022-08-08 2022-08-08 Interface display method, video publishing method, video editing method and device

Publications (1)

Publication Number Publication Date
CN115334346A 2022-11-11

Family

ID=83922783

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210945497.9A Pending CN115334346A (en) 2022-08-08 2022-08-08 Interface display method, video publishing method, video editing method and device

Country Status (1)

Country Link
CN (1) CN115334346A (en)

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101699863A (en) * 2009-10-29 2010-04-28 孙剑 Method for delivering advertisement in video
CN103402130A (en) * 2013-07-24 2013-11-20 Tcl集团股份有限公司 Method and system for displaying advertisement in video
CN105916050A (en) * 2016-05-03 2016-08-31 乐视控股(北京)有限公司 TV shopping information processing method and device
CN107995516A (en) * 2017-11-21 2018-05-04 霓螺(宁波)信息技术有限公司 The methods of exhibiting and device of article in a kind of interdynamic video
CN110213307A (en) * 2018-02-28 2019-09-06 腾讯科技(深圳)有限公司 Multi-medium data method for pushing, device, storage medium and equipment
CN110225387A (en) * 2019-05-20 2019-09-10 北京奇艺世纪科技有限公司 A kind of information search method, device and electronic equipment
CN110909616A (en) * 2019-10-28 2020-03-24 北京奇艺世纪科技有限公司 Method and device for acquiring commodity purchase information in video and electronic equipment
CN113129045A (en) * 2019-12-31 2021-07-16 阿里巴巴集团控股有限公司 Video data processing method, video data display method, video data processing device, video data display device, electronic equipment and storage medium
CN111314759A (en) * 2020-03-02 2020-06-19 腾讯科技(深圳)有限公司 Video processing method and device, electronic equipment and storage medium
CN111859158A (en) * 2020-08-05 2020-10-30 上海连尚网络科技有限公司 Information pushing method, video processing method and equipment
WO2022037307A1 (en) * 2020-08-18 2022-02-24 北京达佳互联信息技术有限公司 Information recommendation method and apparatus, and electronic device
CN112055179A (en) * 2020-09-11 2020-12-08 苏州科达科技股份有限公司 Video playing method and device
CN112929687A (en) * 2021-02-05 2021-06-08 腾竞体育文化发展(上海)有限公司 Interaction method, device and equipment based on live video and storage medium
CN113760158A (en) * 2021-04-30 2021-12-07 腾讯科技(深圳)有限公司 Target object display method, object association method, device, medium and equipment
CN113344663A (en) * 2021-05-31 2021-09-03 北京达佳互联信息技术有限公司 Article information display method and device
CN113468374A (en) * 2021-05-31 2021-10-01 北京达佳互联信息技术有限公司 Target display method and device, electronic equipment and storage medium
CN113610600A (en) * 2021-08-06 2021-11-05 上海哔哩哔哩科技有限公司 Method and device for displaying detailed information of commodities

Similar Documents

Publication Title
CN107885533B (en) Method and device for managing component codes
CN112162671B (en) Live broadcast data processing method and device, electronic equipment and storage medium
CN109618212B (en) Information display method, device, terminal and storage medium
CN113411680B (en) Multimedia resource playing method, device, terminal and storage medium
CN109922356B (en) Video recommendation method and device and computer-readable storage medium
CN111880888B (en) Preview cover generation method and device, electronic equipment and storage medium
CN113157172A (en) Barrage information display method, transmission method, device, terminal and storage medium
CN109618192B (en) Method, device, system and storage medium for playing video
CN111836069A (en) Virtual gift presenting method, device, terminal, server and storage medium
CN111083526B (en) Video transition method and device, computer equipment and storage medium
CN110209316B (en) Category label display method, device, terminal and storage medium
CN109547847B (en) Method and device for adding video information and computer readable storage medium
CN111437600A (en) Plot showing method, plot showing device, plot showing equipment and storage medium
CN113936699B (en) Audio processing method, device, equipment and storage medium
CN113469779A (en) Information display method and device
CN113190307A (en) Control adding method, device, equipment and storage medium
CN112069350A (en) Song recommendation method, device, equipment and computer storage medium
CN111796990A (en) Resource display method, device, terminal and storage medium
CN111327819A (en) Method, device, electronic equipment and medium for selecting image
CN112004134A (en) Multimedia data display method, device, equipment and storage medium
CN112230910A (en) Page generation method, device, equipment and storage medium of embedded program
CN115134316B (en) Topic display method, device, terminal and storage medium
CN113051485B (en) Group searching method, device, terminal and storage medium
CN114186083A (en) Information display method, device, terminal, server and storage medium
CN113485596A (en) Virtual model processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination