CN116634215A - Display method and device

Display method and device

Info

Publication number
CN116634215A
Authority
CN
China
Prior art keywords
video
keyword
input
text
playing
Prior art date
Legal status
Pending
Application number
CN202310562327.7A
Other languages
Chinese (zh)
Inventor
芮元乐
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202310562327.7A priority Critical patent/CN116634215A/en
Publication of CN116634215A publication Critical patent/CN116634215A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
        • G06 — COMPUTING; CALCULATING OR COUNTING
            • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 20/00 — Scenes; Scene-specific elements
                    • G06V 20/40 — Scenes; Scene-specific elements in video content
                        • G06V 20/41 — Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
                        • G06V 20/46 — Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
                            • G06V 20/47 — Detecting features for summarising video content
    • H — ELECTRICITY
        • H04 — ELECTRIC COMMUNICATION TECHNIQUE
            • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD]
                    • H04N 21/40 — Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
                        • H04N 21/43 — Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                            • H04N 21/431 — Generation of visual interfaces for content selection or interaction; Content or additional data rendering
                                • H04N 21/4312 — Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
                        • H04N 21/47 — End-user applications
                            • H04N 21/488 — Data services, e.g. news ticker
                                • H04N 21/4884 — Data services, e.g. news ticker for displaying subtitles
                    • H04N 21/80 — Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
                        • H04N 21/85 — Assembly of content; Generation of multimedia applications
                            • H04N 21/854 — Content authoring
                                • H04N 21/8549 — Creating video summaries, e.g. movie trailer

Abstract

The application discloses a display method and a display device, which belong to the technical field of communication. The method includes: acquiring at least one keyword corresponding to a first video; and, in the case that a video cover image of the first video is displayed, displaying text playing content in a preset manner in a display area corresponding to the video cover image; wherein the text playing content is generated based on the at least one keyword.

Description

Display method and device
Technical Field
The application belongs to the technical field of communication, and particularly relates to a display method and device.
Background
With the widespread use of multimedia technology, users can record daily life by shooting videos with the cameras of electronic devices, or browse videos published by others on social platforms. To make it easier for users to understand video content, a cover image is typically provided for each video.
In the related art, when a video is displayed to a user on an electronic device, the first video frame of the video or a video frame designated by the user is usually used as the cover image of the video. However, this display mode cannot intuitively present the main content of the video, so the video content information that the user can obtain from the cover image is limited, and the flexibility of video content display is therefore poor.
Disclosure of Invention
The embodiment of the application aims to provide a display method, which can enable a user to timely and effectively acquire main content of a video when watching a video cover image, thereby improving the efficiency of video content display.
In a first aspect, an embodiment of the present application provides a display method, including: acquiring at least one keyword corresponding to a first video; and, in the case that a video cover image of the first video is displayed, displaying text playing content in a preset manner in a display area corresponding to the video cover image; wherein the text playing content is generated based on the at least one keyword.
In a second aspect, an embodiment of the present application provides a display apparatus, including an acquisition module and a display module, wherein: the acquisition module is configured to acquire at least one keyword corresponding to a first video; the display module is configured to display text playing content in a preset manner in a display area corresponding to a video cover image of the first video in the case that the video cover image is displayed; wherein the text playing content is generated based on the at least one keyword.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, the display device acquires at least one keyword corresponding to the first video and, in the case that the video cover image of the first video is displayed, displays text playing content in a preset manner in a display area corresponding to the video cover image, wherein the text playing content is generated based on the at least one keyword. In this way, the display device displays at least one keyword corresponding to the video on the video cover image, so that the user can intuitively view the keywords of the video on the video cover image and quickly grasp the main content of the video according to the keywords, without having to play the video first. Therefore, the user can timely and effectively understand the main content of the video when viewing the video cover image, which improves the efficiency of video content display.
Drawings
FIG. 1 is a flow chart of a display method according to an embodiment of the present application;
FIG. 2 is a first schematic diagram of a video content display interface according to an embodiment of the present application;
FIG. 3 is a second schematic diagram of a video content display interface according to an embodiment of the present application;
FIG. 4 (a) is a third schematic diagram of a video content display interface according to an embodiment of the present application;
FIG. 4 (b) is a fourth schematic diagram of a video content display interface according to an embodiment of the present application;
FIG. 5 (a) is a fifth schematic diagram of a video content display interface according to an embodiment of the present application;
FIG. 5 (b) is a sixth schematic diagram of a video content display interface according to an embodiment of the present application;
FIG. 6 (a) is a seventh schematic diagram of a video content display interface according to an embodiment of the present application;
FIG. 6 (b) is an eighth schematic diagram of a video content display interface according to an embodiment of the present application;
FIG. 7 is a first schematic structural diagram of a display device according to an embodiment of the present application;
FIG. 8 is a second schematic structural diagram of a display device according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar objects and are not necessarily used to describe a particular sequence or chronological order. It should be understood that the terms so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. In addition, the objects distinguished by "first", "second", etc. are generally of one type, and the number of such objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The display method provided by the embodiment of the application can be applied, for example, to a scene of displaying a video shot by the camera of an electronic device.
In some embodiments of the present application, after a user shoots the scene in front of them with the camera of an electronic device and obtains a video of a sunrise at sea, the electronic device performs content recognition on the video and extracts content keywords such as "sea", "sunrise", "ship", and "autumn" from it. When the user subsequently views the shot video on the album interface, the electronic device sequentially plays content keywords such as "sea", "sunrise", and "autumn" in slideshow form on the cover image of the video in the album interface. In this way, even when only the cover image of the video is displayed and the video is not played, the user can quickly grasp the main content of the video from the content keywords superimposed on the cover image, realizing an efficient preview of the video.
In other embodiments of the present application, after a user shoots the scene in front of them with the camera of an electronic device and obtains a video of a sunrise at sea, the electronic device performs content recognition on the video, extracts content keywords such as "sea", "sunrise", and "autumn" from it, generates a title for the video based on these content keywords using a natural language processing technique, for example "Marine sunrise in autumn", and cyclically scrolls the generated title on the cover image of the video. In this way, even when only the cover image of the video is displayed and the video is not played, the user can quickly grasp the main content of the video from the video title superimposed on the cover image, realizing an efficient preview of the video.
The execution subject of the display method provided by the embodiment of the application may be an electronic device, or a functional module and/or functional entity in the electronic device that can implement the display method; the execution subject may be determined according to actual use requirements.
The display method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 is a flowchart of a display method according to an embodiment of the present application, as shown in fig. 1, the display method may include the following steps S201 and S202:
step S201: and acquiring at least one keyword corresponding to the first video.
Optionally, in the embodiment of the present application, the first video may be a video obtained by shooting with a camera of the electronic device, a video obtained by recording the screen display content of the electronic device, or a video published on a social platform, which is not limited in the embodiment of the present application.
Alternatively, in the embodiment of the present application, the keywords may include keywords in four dimensions of time, place, scene, and object.
It will be appreciated that the keywords described above are related to the video content of the first video, which may embody the video content of the first video.
Alternatively, the time may be a video capturing time or a video distribution time, the location may be a video capturing location or a video distribution location, and the scene may be a video capturing scene. By way of example, the above-mentioned scene may be a travel scene, an office learning scene, a home living scene, or the like.
Optionally, in the embodiment of the present application, the display device may use an image recognition technology to perform content recognition on the first video, and extract at least one keyword from the first video.
For example, the display device may identify image content in the video frames of the first video by using an image recognition technique, and determine keywords corresponding to the video frames based on the identified image content.
Illustratively, taking the first video as a landscape video, in the case where the sea, a sunset, and a ship are identified in the video pictures of the landscape video, the keywords of the landscape video are determined to include "sea", "sunset", and "ship". As another example, in the case where ×× Temple, a river, a lake, and a cemetery are included in the video pictures of the landscape video, the keywords of the landscape video are determined to include "×× Temple", "river", "lake", and "cemetery".
It should be noted that, the implementation process of extracting the keywords in the video by using the image recognition technology may refer to related technology, which is not described herein.
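As an illustrative sketch only (the patent leaves the concrete recognition model to the related technology), the keyword extraction step could be approximated as below; the frame-sampling interval and the `recognize_labels` helper are assumptions standing in for any off-the-shelf image recognition model.

```python
# Minimal sketch of keyword extraction from video frames (assumed helper
# `recognize_labels` stands in for any image classifier or object detector).
import cv2  # assuming frames are read with OpenCV

def recognize_labels(frame):
    # Placeholder: a real implementation would run an image recognition model
    # here and return labels such as "sea", "sunrise", "ship".
    return []

def extract_keywords(video_path, sample_every_n_frames=30):
    keywords = set()
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % sample_every_n_frames == 0:
            keywords.update(recognize_labels(frame))
        index += 1
    cap.release()
    return keywords
```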
Optionally, in the embodiment of the present application, the display device may also determine the keywords corresponding to the first video according to the recording time, place, and the like of the first video.
By way of example, taking the first video as a landscape video whose shooting time is September 1 and whose shooting place is ×× City, the display device extracts the keywords "sea", "sun", and "ship" from the landscape video by using the image recognition technique, determines the keyword "autumn" according to the shooting time of the landscape video, and determines the keyword "×× City" according to the shooting place of the landscape video. In this way, the display device can determine keywords in multiple dimensions by combining the shooting time and place of the video with the content of the video, so that richer and more comprehensive keywords are obtained, and the user can understand the approximate content of the video more accurately and comprehensively according to these keywords.
Alternatively, in the embodiment of the present application, when extracting a keyword from the first video, the display device may record the video position corresponding to the keyword. Alternatively, the video position may be a certain video frame or a timestamp of the video.
Taking the first video as video 1 as an example, assuming that the duration of video 1 is 10 s and the keywords "sea" and "sunrise" are extracted at the 5th second of video 1, the timestamps corresponding to the keywords "sea" and "sunrise" are recorded as the 5th second.
Taking the first video as video 2 as an example, assuming that video 2 includes 60 video frames and the keywords "sea" and "sunrise" are extracted from the 10th to the 30th video frames of video 2, the video frames corresponding to the keywords "sea" and "sunrise" are recorded as the 10th through 30th video frames of video 2.
It should be noted that the above video duration and number of video frames are only one possible example provided by the present application, and do not limit the scheme of the present application, and in practice, the duration and number of video frames in which keywords can be acquired are not limited.
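The association between a keyword and the video positions where it occurs can be kept in a simple mapping; the sketch below records frame ranges per keyword and is only one possible layout (timestamps in seconds would work equally well).

```python
from collections import defaultdict

# keyword -> list of (start_frame, end_frame) ranges; purely illustrative.
keyword_positions = defaultdict(list)

def record_occurrence(keyword, start_frame, end_frame):
    """Record that `keyword` was detected between two video frames."""
    keyword_positions[keyword].append((start_frame, end_frame))

# Example matching the video 2 description above: "sea" and "sunrise"
# detected in frames 10 through 30.
record_occurrence("sea", 10, 30)
record_occurrence("sunrise", 10, 30)
```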
Optionally, in the embodiment of the present application, the display device may also acquire description information input by the user on the video playing interface of the first video, and use the description information input by the user as the keyword of the first video.
For example, when browsing the first video, the user may input keywords describing the current video picture, and the display device uses the keywords input by the user as keywords of the first video.
Taking the first video as video 3 as an example, during the playing of video 3, the user long-presses the currently played video picture and inputs "sunrise at sea" in the pop-up input box; the display device takes the "sunrise at sea" input by the user as a keyword of video 3 and displays the keyword on the cover image when the cover image of video 3 is subsequently displayed.
Optionally, in the embodiment of the present application, in the case of obtaining at least one keyword, the display device combines the keywords to obtain a first text, and uses the first text as a video title of the first video.
For example, in combination with the above embodiment, assuming that the keywords of the landscape video include "sunrise", "autumn", and "×× City", the display device may generate a text sentence from the keywords by using a natural language processing technique, for example, "The sunrise seen in ×× City in autumn".
It should be noted that, the natural language processing technology is a technology capable of automatically generating sentences conforming to grammar rules according to given keywords, and the technology can be applied to the fields of machine translation, writing assistance and the like.
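The patent does not prescribe a particular sentence-generation model; the sketch below shows the simplest possible stand-in, a template that slots the time, place, and subject keywords into a fixed pattern. A production implementation would use a natural language generation model instead.

```python
def make_title(time_kw, place_kw, subject_kws):
    # Naive template-based stand-in for a natural-language-generation model.
    return f"The {' and '.join(subject_kws)} seen in {place_kw} in {time_kw}"

print(make_title("autumn", "XX City", ["sunrise at sea"]))
# -> "The sunrise at sea seen in XX City in autumn"
```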
Step S202: and displaying text playing content in a preset mode in a display area corresponding to the video cover image under the condition that the video cover image of the first video is displayed.
Wherein the text play content is generated based on the at least one keyword.
Optionally, in an embodiment of the present application, the text playing content may include the at least one keyword; or, the text playing content is text content generated based on the at least one keyword.
Alternatively, in the embodiment of the present application, the video cover image may be a preview image of a video, that is, an image of a video displayed in a page before being played.
It will be appreciated that the video cover images described above may be referred to as video covers.
Optionally, in the embodiment of the present application, the display area corresponding to the video cover image may be a display area where the video cover image is located; or the display area corresponding to the video cover image may be a display area around the display area where the video cover image is located.
Alternatively, in the embodiment of the present application, the display device may display the at least one keyword superimposed on the image of the video cover, or the display device may display the at least one keyword in a blank area around the video cover.
Optionally, in the embodiment of the present application, the display device may receive an input from a user for dragging a keyword out of a display area corresponding to a video cover image, and remove or delete the keyword. For example, when a user desires to delete a certain keyword, the user may press the keyword for a long time to drag out of the screen area where the video cover is located, so as to trigger the display device to delete the keyword.
Optionally, in an embodiment of the present application, displaying the text play content in the preset manner may include at least one of the following:
circularly scrolling and playing text playing contents;
alternatively, in the case where the text play content includes at least one keyword, the at least one keyword is sequentially played.
Alternatively, the above-mentioned cyclic scroll play may be cyclic scroll play in a preset direction. For example, the scrolling playback in the preset direction may be scrolling through the screen from right to left at a constant speed, or scrolling through the screen from left to right at a constant speed.
Alternatively, sequentially playing the at least one keyword may be displaying the at least one keyword one by one on the screen in a static manner. Illustratively, each keyword may be statically displayed at the top, bottom, or middle of the screen, aligned center, left, or right, which is not limited in the embodiment of the present application.
Here, the screen refers to the screen area or display area in which the display area corresponding to the video cover image is located.
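For illustration only, the sketch below schedules the two preset display manners described above: a slideshow that shows each keyword for a fixed dwell time, and cyclic right-to-left scrolling of the whole text. The timing values and the `show` rendering callback are assumptions, not part of the claimed method.

```python
import itertools
import time

def play_slideshow(keywords, show, dwell_seconds=1.0, cycles=2):
    """Sequentially show each keyword for `dwell_seconds`, looping `cycles` times."""
    for kw in itertools.islice(itertools.cycle(keywords), cycles * len(keywords)):
        show(kw)                 # `show` is whatever draws text over the cover image
        time.sleep(dwell_seconds)

def play_scroll(text, show, width=20, step_seconds=0.1, cycles=1):
    """Scroll `text` right-to-left through a window of `width` characters."""
    padded = " " * width + text + " " * width
    for _ in range(cycles):
        for i in range(len(padded) - width):
            show(padded[i:i + width])
            time.sleep(step_seconds)

# play_slideshow(["sea", "sunrise", "autumn"], print)
# play_scroll("The sunrise at sea seen in XX City in autumn", print)
```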
It should be noted that, after the display device obtains the at least one keyword, one possible implementation of the display method provided by the embodiment of the present application is to display the keywords in the display area corresponding to the video cover image; another possible implementation is to generate a first text, i.e., a video title, based on the at least one keyword and display the video title in the display area corresponding to the video cover image. The two possible implementations are described below by way of example.
A first possible implementation:
in an exemplary embodiment, in a case where at least one keyword corresponding to the first video is determined, the display device cyclically scrolls and plays the at least one keyword in a screen area where the video cover of the first video is located.
Fig. 2 is a schematic diagram of a video content display interface provided in an embodiment of the present application. As shown in fig. 2, taking the first video as a landscape video whose keywords include "sea", "sunrise", "×× City", and "autumn", when the user needs to view the landscape video and clicks the "Album" control to enter the album interface, the display device displays the video cover 21 of the landscape video in the album interface and scrolls "sea", "sunrise", "×× City", and "autumn" from right to left in a moving-animation manner in the screen area where the video cover 21 is located.
It should be noted that, in fig. 2, each keyword is shown in a rectangular frame in order to display the plurality of keywords separately; in practical applications, the keywords may be displayed without rectangular frames or in other forms.
Also illustratively, in the case where at least one keyword corresponding to the first video is determined, each of the at least one keyword is displayed in turn on the video cover, i.e., the keywords are displayed in a slide show form.
For example, in connection with fig. 2 above, after the user clicks the "Album" control to enter the album interface, the display device displays the video cover 21 of the landscape video in the album interface, statically displays "sea" for 1 s at the top-middle position of the screen area where the video cover 21 is located, then cancels the display of "sea", then statically displays "sunrise" for 1 s at the same position and cancels the display of "sunrise", and so on for "×× City" and "autumn"; after the last keyword, i.e., "autumn", the keywords continue to be displayed cyclically from the first keyword, i.e., "sea", in the above manner.
A second possible implementation:
in an exemplary case of determining at least one keyword corresponding to the first video, the display device combines the at least one keyword, generates a first text, takes the first text as a video title of the first video, and circularly scrolls and plays the video title in a screen area where a video cover of the first video is located.
Fig. 3 is a schematic diagram of a video content display interface according to an embodiment of the present application. As shown in fig. 3, taking the first video as a landscape video, the display device obtains the keywords "sea", "sunrise", "×× City", and "autumn" of the landscape video and combines them into "The sunrise at sea seen in ×× City in autumn". When the user needs to view the landscape video and clicks the "Album" control to enter the album interface, the display device displays the video cover 31 of the landscape video in the album interface and scrolls "The sunrise at sea seen in ×× City in autumn" from right to left in the screen area where the video cover 31 is located.
Further, the display device may cyclically scroll the text content in the screen area where the video cover 31 is located in a displacement-animation manner.
The displacement animation is an animation type, and refers to an animation obtained by moving an object from a start position to an end position.
It should be noted that, when the video cover image is displayed, it may be displayed together with a video picture, or it may be displayed without any video picture and show only the text playing content; fig. 2 and fig. 3 show the case where the video cover does not display a video picture.
Also exemplarily, in the case of determining at least one keyword corresponding to the first video, the display device combines the at least one keyword to generate a first text, takes the first text as the video title of the first video, and statically displays the video title in the screen area where the video cover of the first video is located.
According to the display method provided by the embodiment of the application, the at least one keyword, or the video title generated based on the at least one keyword, is cyclically scrolled or statically displayed on the video cover of the first video, so that the user can intuitively and clearly view the approximate content of the video on the video cover without playing the video, which improves the browsing efficiency of the video.
Alternatively, in the embodiment of the present application, the display device may display the keywords in a predetermined manner according to the frequency of occurrence of each keyword in the first video, so as to highlight some keywords that occur frequently, so as to facilitate visual viewing by the user.
It should be noted that the occurrence frequency of a keyword may be represented by the number of video frames related to the keyword in the video. For example, for a 100-frame video in which 60 video frames include the sea and 20 video frames include seagulls, the occurrence frequency of the sea is considered to be higher than that of the seagulls.
Illustratively, when keywords are displayed on the video cover, keywords that have a higher frequency of occurrence are left on the screen for a longer time, e.g., 3s, and keywords that have a lower frequency of occurrence are left on the screen for a shorter time, e.g., 1s.
Further exemplary, when keywords are displayed on the video cover, keywords having a higher frequency of occurrence are displayed in a larger font, and keywords having a lower frequency of occurrence are displayed in a smaller font.
Further, for example, when keywords are displayed on the video cover, keywords having a higher frequency of occurrence are displayed in a blinking state, and keywords having a lower frequency of occurrence are displayed in a non-blinking state.
Therefore, keywords with different occurrence frequencies are displayed in different display modes in a distinguishing mode, so that a user can intuitively know the main content of the video, and the previewing efficiency of the video is further improved.
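One way to realise the frequency-dependent presentation described above is to count, for each keyword, how many frames it appears in and map that count to a dwell time, font size, and blinking state. The thresholds and values below are arbitrary examples, not values taken from the patent.

```python
def display_style(frame_count, total_frames):
    """Map a keyword's occurrence frequency to illustrative display attributes."""
    ratio = frame_count / max(total_frames, 1)
    if ratio >= 0.5:   # frequent keyword: keep on screen longer, larger font, blinking
        return {"dwell_seconds": 3, "font_size": 28, "blink": True}
    return {"dwell_seconds": 1, "font_size": 18, "blink": False}

# 100-frame video: "sea" in 60 frames, "seagull" in 20 frames (example from above).
print(display_style(60, 100))   # emphasised keyword
print(display_style(20, 100))   # de-emphasised keyword
```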
According to the display method provided by the embodiment of the application, the display device acquires at least one keyword corresponding to the first video and, in the case that the video cover image of the first video is displayed, displays text playing content in a preset manner in a display area corresponding to the video cover image, wherein the text playing content includes the at least one keyword, or the text playing content is generated based on the at least one keyword. In this way, the display device displays at least one keyword corresponding to the video on the video cover image, so that the user can intuitively view the keywords of the video on the video cover image and quickly grasp the main content of the video according to the keywords, without having to play the video first. Therefore, the user can understand the main content of the video when viewing the video cover image, which improves the efficiency of video content display.
Optionally, in the embodiment of the present application, after step S202, the display method provided in the embodiment of the present application further includes the following step S203 and step S204:
step S203: a first input of a user to the text play content is received.
Step S204: in response to the first input, a first video is played.
Optionally, the first input is used to trigger playing the first video.
Alternatively, the first input may include any one of: the touch input, voice input, gesture input, or other feasible input such as key input of the user is not limited in this embodiment of the present application.
Further, the touch input may be: click input, slide input, press input, etc. by the user. Further, the clicking operation may be any number of clicking operations. The above-described sliding operation may be a sliding operation in any direction, for example, an upward sliding, a downward sliding, a leftward sliding, a rightward sliding, or the like, which is not limited in the embodiment of the present application.
For example, in combination with the above embodiment, taking the first video as video 4 and the text playing content including the three keywords "sea", "sunrise", and "autumn" as an example, in the case that the three keywords are cyclically scrolled on the video cover of video 4, the user clicks any one keyword, for example the keyword "sea", and the display device starts playing video 4.
As another example, in combination with the above embodiment, taking the first video as video 5 and the text playing content including "the autumn sunrise" as an example, in the case that the text content "the autumn sunrise" is cyclically scrolled on the video cover of video 5, after the user clicks the text content, the display device starts playing video 5.
Therefore, when a user needs to play a certain video, the playing of the video can be quickly triggered by the text playing content displayed on the video cover of the video, so that the flexibility of operation is improved.
Optionally, in an embodiment of the present application, the text playing content includes at least one keyword;
alternatively, the step S203 may include the following step S203a, and the step S204 may be replaced with the following step S204a in combination with the step S203a:
step S203a: a first input of a target keyword by a user is received.
Wherein the target keyword is one of the at least one keyword.
Step S204a: and responding to the first input, jumping to a video clip position corresponding to the target keyword in the first video, and playing the first video.
Optionally, the target keyword is any one or more of the at least one keyword.
Alternatively, the video position corresponding to the target keyword may be a timestamp or a video frame corresponding to the target keyword.
Optionally, in the case of receiving the first input of the target keyword from the user, the display device may jump to the video segment position to play the video content related to the target keyword according to the video segment position corresponding to the target keyword recorded when the target keyword is acquired.
Further, when the video clip position corresponding to the target keyword includes a plurality of video clip positions, the display device may sequentially play videos at the plurality of video positions according to the time sequence of the plurality of video positions.
For example, assuming that the video clip corresponding to the target keyword is the clip between the 10th video frame and the 15th video frame, the display device takes the 10th video frame of the video as the start playing position and plays the video clip between the 10th and the 15th video frame.
As another example, assuming that the video clips corresponding to the target keyword include the clip from the 5th to the 7th second and the clip from the 9th to the 12th second, the display device first takes the 5th second as the start playing position and plays the clip from the 5th to the 7th second, and then takes the 9th second as the start playing position and plays the clip from the 9th to the 12th second.
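A sketch of the segment playback just described: given the positions recorded for the clicked keyword, sort them by start time and play them back to back. The `seek` and `play_until` calls are hypothetical player methods; real player APIs will differ.

```python
def play_keyword_segments(player, segments):
    """Play every (start_s, end_s) segment of the clicked keyword in time order.

    `player` is assumed to expose seek(seconds) and play_until(seconds);
    these names are illustrative, not an actual player API.
    """
    for start_s, end_s in sorted(segments):
        player.seek(start_s)        # jump to the start of this segment
        player.play_until(end_s)    # play up to the end of this segment

# e.g. segments recorded for "sea": 5th-7th second and 9th-12th second
# play_keyword_segments(player, [(9, 12), (5, 7)])  # plays 5-7 s, then 9-12 s
```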
In conjunction with fig. 2 above, fig. 4 (a) is a schematic diagram of a video content display interface provided in an embodiment of the present application. As shown in fig. 4 (a), the display device displays the video cover 21 of the landscape video in the album interface and scrolls the keywords from right to left in the screen area where the video cover 21 is located. Assume that the video clip positions corresponding to the keyword "sea" include the 10th to 12th second and the 20th to 25th second of the video, that is, the pictures in those seconds of the video include the sea. After the user clicks the keyword "sea" displayed on the screen, the display device displays the video playing interface 41, jumps to the 10th second and plays the video content from the 10th to the 12th second, and then jumps to the 20th second and plays the video content from the 20th to the 25th second, as shown in fig. 4 (b).
It should be noted that, in fig. 4 (b), the time 00:40 on the video progress bar at the bottom of the screen represents that the total playing duration of the current video is 40s, and the time 00:10 represents that the current video is played to the 10 th second of the video.
It should be noted that, in fig. 4 (b), three curves are used to intuitively represent the sea in the picture.
According to the display method provided by the embodiment of the application, when the user needs to view the video clips corresponding to a specific keyword, an input on that keyword can trigger the display device to jump directly to and play the video clips that include the keyword, without requiring the user to manually locate the video position, which improves the convenience and flexibility of operation.
Optionally, in an embodiment of the present application, the text playing content includes at least one keyword.
Optionally, after the step S202, the display method provided in the embodiment of the present application further includes the following step S205 and step S206:
step S205: a second input by the user of at least two of the at least one keyword is received.
Step S206: and responding to the second input, and synthesizing at least two video clips corresponding to at least two keywords to obtain a second video.
The at least two video clips are video clips in the first video, and one keyword corresponds to at least one video clip.
Optionally, the second input is used for synthesizing at least two video clips corresponding to at least two keywords.
Optionally, the second input may include any of the following: the touch input, voice input, gesture input, or other feasible input such as key input of the user is not limited in this embodiment of the present application.
Further, the touch input may be: click input, slide input, press input, etc. by the user. Further, the clicking operation may be any number of clicking operations. The above-described sliding operation may be a sliding operation in any direction, for example, an upward sliding, a downward sliding, a leftward sliding, a rightward sliding, or the like, which is not limited in the embodiment of the present application.
Illustratively, taking the at least two keywords including a third keyword and a fourth keyword as an example, the second input may be an input by which the user drags the third keyword onto the fourth keyword; or an input by which the user drags the fourth keyword onto the third keyword; or an input by which the user clicks the third keyword and the fourth keyword simultaneously or successively.
Alternatively, the video segments corresponding to the at least two keywords may be determined based on the video frames associated therewith or based on the associated timestamps.
Alternatively, the display device may use a video synthesis technique to synthesize the at least two video clips to obtain the second video.
By way of example, in connection with the embodiment corresponding to fig. 2, taking the at least two keywords including "sea" and "sunrise" as an example, after the user drags the keyword "sea" on the screen onto the keyword "sunrise", the display device obtains the video frames that include the sea to form a video segment containing a sea picture, obtains the video frames that include a sunrise to form a video segment containing a sunrise picture, and then splices and synthesizes the two video segments to obtain a video that includes both a sea picture and a sunrise picture.
According to the display method provided by the embodiment of the application, the user can generate new video content by combining keywords; for example, a video segment including a sunrise picture and a video segment including a sea picture can be combined into a new video with richer content, without the need for a video editing tool, which greatly improves the convenience and flexibility of generating video content.
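Under the assumption that the recorded positions are timestamps in seconds, the sketch below concatenates the clips of two selected keywords using moviepy (1.x-style API); the library choice, file names, and clip ordering are illustrative, not part of the patent.

```python
# Illustrative only: joining the clips of two selected keywords with moviepy.
from moviepy.editor import VideoFileClip, concatenate_videoclips

def synthesize(video_path, segments_a, segments_b, out_path="second_video.mp4"):
    """Join the (start_s, end_s) segments of two keywords into one new video."""
    source = VideoFileClip(video_path)
    clips = [source.subclip(s, e) for s, e in segments_a + segments_b]
    concatenate_videoclips(clips).write_videofile(out_path)

# e.g. "sea" segments plus "sunrise" segments -> one combined second video
# synthesize("first_video.mp4", [(10, 12)], [(20, 25)])
```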
Optionally, in an embodiment of the present application, the text playing content is generated based on at least one keyword. Alternatively, the above step S204 may be replaced with the following step S204b:
step S204b: in response to the first input, the first video is played from a starting position of the first video.
Alternatively, the start position may be a start time position of the first video, which may also be referred to as a start point of the first video.
For example, when the play start time is 0s, the start time position may be 0s of the video. As another example, when the first video frame of the video is the 1 st frame, the start time position may be a corresponding time position when the 1 st frame of the video is played.
In conjunction with fig. 3 above, fig. 5 (a) is a schematic diagram of a video content display interface provided by an embodiment of the present application. As shown in fig. 5 (a), the display device displays the video cover 31 of the landscape video in the album interface and cyclically scrolls the text "The sunrise at sea seen in ×× City in autumn" from right to left in a displacement-animation manner in the screen area where the video cover 31 is located. When the user clicks this text, the display device displays a video playing interface and plays the landscape video from the starting position of the video in the video playing interface, where the first frame image of the video includes clouds, as shown in fig. 5 (b).
It should be noted that, in fig. 5 (b), the time 00:40 on the video progress bar at the bottom of the screen indicates that the total playing duration of the current video is 40s, and the time 00:00 indicates that the video is currently played to the 0 th second of the video, i.e. the video is played from the beginning.
Note that, two polygons in fig. 5 (b) represent clouds in the screen.
It can be understood that playing from the starting position of the first video may mean that the video is played from the beginning when it has not been played yet, or that, when the video is paused during playback and the video cover is displayed, clicking the video title triggers the video to be played again from the beginning.
According to the display method provided by the embodiment of the application, when the user has not started playing the video or has paused it midway, an input on the video title content played on the video cover triggers the display device to play the video from the beginning; the user neither needs to trigger playback through a play button nor needs to manually adjust the video progress, which improves the convenience and flexibility of operation.
Optionally, in the embodiment of the present application, before step S201, the display method provided in the embodiment of the present application further includes the following step A1:
step A1: and receiving a third input of a user to a video playing interface of the first video.
Wherein the third input is used for inputting keywords.
Optionally, the third input may include any one of the following: the touch input, voice input, gesture input, or other feasible input such as key input of the user is not limited in this embodiment of the present application.
Further, the touch input may be: click input, slide input, press input, etc. by the user. Further, the clicking operation may be any number of clicking operations. The above-described sliding operation may be a sliding operation in any direction, for example, an upward sliding, a downward sliding, a leftward sliding, a rightward sliding, or the like, which is not limited in the embodiment of the present application.
In combination with the step A1, the step S201 may include the following steps S201a and S201b:
step S201a: input information of the third input is acquired in response to the third input.
Wherein the input information includes at least one first keyword.
Step S201b: at least one first keyword and at least one second keyword extracted from the first video are used as the at least one keyword.
Optionally, the at least one second keyword may be extracted by performing content identification on a first video frame of the first video and based on the identified content, where the first video frame includes at least part of the video frames of the first video.
The first video frame may be all video frames of the first video, or the first video frame may be a video frame specified by a user.
For example, during the video playing process, the user may press the video interface for a long time, trigger the display device to identify an image in the video interface, and extract the second keyword from the image according to the identified image content.
It should be noted that, the related description of extracting the keywords from the video may be referred to above, and will not be repeated here.
For example, taking the display device as an electronic device, after the electronic device shoots and stores a video, it performs content analysis and recognition on the video and identifies keywords of the video such as time, place, scene, and person; alternatively, the display device may extract keywords from any video frame in the video playing interface according to the user's input on that video frame; alternatively, the display device may, according to the keywords the user inputs on any video frame in the video playing interface, use the user-input keywords as keywords corresponding to that video frame.
For example, as shown in fig. 6 (a), when the video cover image of a video is displayed in the album interface, the user may trigger the playing of the video and long-press the currently playing video picture 61, where the video picture 61 includes the sea and the sun. As shown in fig. 6 (b), when "marine sunrise" is input in the pop-up input box 62, the electronic device takes the "marine sunrise" input by the user, together with the keywords "ship", "seagull", and "tree" extracted from the video by the electronic device, as the keywords of the video. Subsequently, when the video cover of the video is displayed, both the keywords extracted by the electronic device and the keyword input by the user are displayed on the video cover. Therefore, the user can understand the content of the video more comprehensively through these keywords when later browsing the video cover.
It should be noted that, time 00:15 on the video progress bar at the bottom of the screen in fig. 6 (a) represents 15 th second of the video currently played.
It should be noted that, in order to intuitively represent the "marine sunrise" in the video picture, three curves in fig. 6 (a) and fig. 6 (b) indicate the sea and a circle indicates the sun.
It should be noted that, in the embodiment of the present application, the display device may first obtain the keywords input by the user and then extract keywords from the video, or first extract keywords from the video and then obtain the keywords input by the user; the embodiment of the present application does not limit the order of the two.
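A trivial sketch of this merging step, under the assumption that both sources simply contribute to one keyword list: the keywords typed by the user on the playing interface are combined with those extracted automatically, in either order.

```python
def merge_keywords(user_keywords, extracted_keywords):
    """Combine user-entered and automatically extracted keywords (order-insensitive)."""
    merged = list(dict.fromkeys(list(user_keywords) + list(extracted_keywords)))
    return merged  # duplicates removed, insertion order preserved

print(merge_keywords(["marine sunrise"], ["ship", "seagull", "tree"]))
# -> ['marine sunrise', 'ship', 'seagull', 'tree']
```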
The execution subject of the display method provided by the embodiment of the application may be a display device. In the embodiment of the present application, a display device executing the display method is taken as an example to describe the display device provided by the embodiment of the present application.
Fig. 7 is a schematic structural diagram of a display device according to an embodiment of the present application. As shown in fig. 7, the display device 700 may include an acquisition module 701 and a display module 702, wherein: the acquisition module 701 is configured to acquire at least one keyword corresponding to the first video; the display module 702 is configured to display text playing content in a preset manner in a display area corresponding to a video cover image of the first video in the case that the video cover image is displayed; wherein the text playing content is generated based on the at least one keyword.
Optionally, in the embodiment of the present application, displaying the text play content in the preset manner includes at least one of the following:
circularly scrolling and playing text playing contents;
alternatively, in the case where the text play content includes at least one keyword, the at least one keyword is sequentially played.
Optionally, in an embodiment of the present application, as shown in fig. 8, the apparatus further includes: a receiving module 703 and an executing module 704, wherein: a receiving module 703, configured to receive a first input of text play content from a user; the execution module 704 is configured to play the first video in response to the first input received by the receiving module 703.
Optionally, in the embodiment of the present application, the text playing content includes at least one keyword; the receiving module is specifically used for receiving first input of a target keyword from a user, wherein the target keyword is one of at least one keyword; the execution module is specifically configured to jump to a video clip position corresponding to the target keyword in the first video to play the first video in response to the first input received by the receiving module.
Optionally, in the embodiment of the present application, the text playing content includes at least one keyword; the receiving module is also used for receiving second input of at least two keywords in the at least one keyword by a user;
The device further comprises: a processing module; the processing module is used for responding to the second input received by the receiving module, synthesizing at least two video clips corresponding to at least two keywords, and obtaining a second video; the at least two video clips are video clips in the first video, and one keyword corresponds to at least one video clip.
Optionally, in an embodiment of the present application, the apparatus further includes: a receiving module and a determining module; the receiving module is used for receiving a third input of a video playing interface of the first video from a user, wherein the third input is used for inputting keywords;
the acquisition module is specifically configured to respond to the third input received by the receiving module, and acquire input information of the third input, where the input information includes at least one first keyword; and the determining module is used for taking the at least one first keyword acquired by the acquiring module and the at least one second keyword extracted from the first video as at least one keyword.
According to the display device provided by the embodiment of the application, the display device acquires at least one keyword corresponding to the first video and, in the case that the video cover image of the first video is displayed, displays text playing content in a preset manner in a display area corresponding to the video cover image, wherein the text playing content is generated based on the at least one keyword. In this way, the display device displays at least one keyword corresponding to the video on the video cover image, so that the user can intuitively view the keywords of the video on the video cover image and quickly grasp the main content of the video according to the keywords, without having to play the video first. Therefore, the user can understand the main content of the video when viewing the video cover image, which improves the efficiency of video content display.
The display device in the embodiment of the application may be an electronic device or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. By way of example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile Internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine, which is not specifically limited in the embodiment of the present application.
The display device in the embodiment of the application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The display device provided in the embodiment of the present application can implement each process implemented by the method embodiments in fig. 1 to fig. 6 (b), and in order to avoid repetition, a description is omitted here.
Optionally, as shown in fig. 9, the embodiment of the present application further provides an electronic device 900, which includes a processor 901 and a memory 902, where a program or an instruction that can be executed on the processor 901 is stored in the memory 902, and when the program or the instruction is executed by the processor 901, the steps of the embodiment of the display method are implemented, and the same technical effects can be achieved, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, and processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (e.g., a battery) for powering the various components, and the power source may be logically coupled to the processor 110 via a power management system so as to manage charging, discharging, power consumption, and other functions through the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently, which will not be described in detail herein.
The processor 110 is configured to obtain at least one keyword corresponding to the first video; a display unit 106, configured to display text playing content in a preset manner in a display area corresponding to the video cover image in a case where the display unit 106 displays the video cover image of the first video; wherein the text play content is generated based on at least one keyword.
Optionally, in the embodiment of the present application, displaying the text playing content in the preset manner includes at least one of the following (a non-limiting sketch of both manners is given after the list):
circularly scrolling and playing the text playing content;
or, in the case where the text playing content includes the at least one keyword, sequentially playing the at least one keyword.
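As a non-limiting sketch of the two display manners above, the following Kotlin code outlines (a) cyclically scrolling the text playing content and (b) sequentially playing the keywords. The render callback stands in for whatever UI call actually draws text over the cover image; it, the timing parameters, and the function names are assumptions made only for this sketch, and kotlinx.coroutines delay is used simply to pace the sketch.

```kotlin
import kotlinx.coroutines.delay

// (a) Cyclically scrolling playback: the full text playing content is shown
// as a marquee that wraps around, so the content is played in a loop.
suspend fun scrollCyclically(
    content: String,
    render: (String) -> Unit,
    stepMs: Long = 200L,
    steps: Int = 100 // bounded loop so the sketch terminates
) {
    if (content.isEmpty()) return
    var offset = 0
    repeat(steps) {
        // Rotate the string so the text appears to scroll and wrap around.
        render(content.substring(offset) + "   " + content.substring(0, offset))
        offset = (offset + 1) % content.length
        delay(stepMs)
    }
}

// (b) Sequential playback: when the text playing content consists of the
// keywords themselves, show them one after another.
suspend fun playKeywordsSequentially(
    keywords: List<String>,
    render: (String) -> Unit,
    dwellMs: Long = 1_500L
) {
    for (keyword in keywords) {
        render(keyword)
        delay(dwellMs)
    }
}
```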
Optionally, in an embodiment of the present application, the user input unit 107 is configured to receive a first input of a user to the text playing content; the processor 110 is configured to play the first video in response to the first input received by the user input unit 107.
Optionally, in the embodiment of the present application, the text playing content includes the at least one keyword; the user input unit 107 is specifically configured to receive a first input of a user to a target keyword, where the target keyword is one of the at least one keyword; the processor 110 is specifically configured to, in response to the first input received by the user input unit 107, jump to a video position corresponding to the target keyword in the first video and play the first video.
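A minimal sketch of this jump-to-keyword behaviour is given below, assuming a mapping from each keyword to the start position of its corresponding video clip. The Player interface and the playFromKeyword function are hypothetical stand-ins, not a real media-player API.

```kotlin
// Assumed minimal player interface; a real implementation would delegate to
// the platform's media player.
interface Player {
    fun seekTo(positionMs: Long)
    fun play()
}

// On a first input targeting one keyword, look up the position of the video
// clip that keyword corresponds to, seek there, and start playback.
fun playFromKeyword(
    player: Player,
    keywordPositions: Map<String, Long>, // keyword -> start of its clip, in milliseconds
    targetKeyword: String
) {
    val positionMs = keywordPositions[targetKeyword] ?: 0L // fall back to the beginning
    player.seekTo(positionMs)
    player.play()
}
```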
Optionally, in the embodiment of the present application, the text playing content includes the at least one keyword; the user input unit 107 is configured to receive a second input of a user to at least two keywords of the at least one keyword; the processor 110 is configured to, in response to the second input received by the user input unit 107, perform synthesis processing on at least two video clips corresponding to the at least two keywords to obtain a second video; the at least two video clips are video clips in the first video, and one keyword corresponds to at least one video clip.
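The following Kotlin sketch illustrates one possible form of the synthesis processing described above, under the assumption that each keyword is associated with the time ranges of its clips in the first video. The Clip type, synthesizeSecondVideo, and the concatenateClips callback are hypothetical; a real implementation would delegate the actual concatenation to an editing or transcoding API.

```kotlin
// A video clip inside the first video, identified by its start and end times.
data class Clip(val startMs: Long, val endMs: Long)

// Collect the clips associated with the selected keywords, keep them in the
// order they occur in the first video, and hand them to a concatenation
// routine that produces the second video.
fun synthesizeSecondVideo(
    clipsByKeyword: Map<String, List<Clip>>, // one keyword corresponds to at least one clip
    selectedKeywords: List<String>,          // the at least two keywords of the second input
    concatenateClips: (List<Clip>) -> String // assumed to return e.g. a URI of the second video
): String {
    require(selectedKeywords.size >= 2) { "the second input selects at least two keywords" }
    val clips = selectedKeywords
        .flatMap { clipsByKeyword[it].orEmpty() }
        .distinct()
        .sortedBy { it.startMs } // keep the clips in first-video order
    return concatenateClips(clips)
}
```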
Optionally, in the embodiment of the present application, the user input unit 107 is configured to receive a third input of a user to a video playing interface of the first video, where the third input is used to input a keyword; the processor 110 is specifically configured to obtain, in response to the third input received by the user input unit 107, input information of the third input, where the input information includes at least one first keyword; the processor 110 is further configured to take the obtained at least one first keyword and at least one second keyword extracted from the first video as the at least one keyword.
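A minimal sketch of combining the two keyword sources is shown below: the first keywords obtained from the third input are merged with the second keywords extracted from the first video. The mergeKeywords function and its deduplication behaviour are assumptions for illustration only.

```kotlin
// Merge the first keywords typed by the user on the video playing interface
// (the third input's input information) with the second keywords extracted
// from the first video, removing blanks and duplicates.
fun mergeKeywords(
    firstKeywords: List<String>,
    secondKeywords: List<String>
): List<String> =
    (firstKeywords + secondKeywords)
        .map { it.trim() }
        .filter { it.isNotEmpty() }
        .distinct()

// For example, mergeKeywords(listOf("beach"), listOf("travel", "beach"))
// returns ["beach", "travel"].
```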
According to the electronic device provided in the embodiment of the application, the electronic device obtains at least one keyword corresponding to the first video and, when the video cover image of the first video is displayed, displays text playing content in a preset manner in a display area corresponding to the video cover image, where the text playing content is generated based on the at least one keyword. In this way, the electronic device displays at least one keyword corresponding to the video on the video cover image, so that a user can intuitively view the keywords of the video on the cover image and quickly grasp the main content of the video from those keywords, without having to play the video first. Therefore, the user can learn the main content of the video while viewing the video cover image, and the efficiency of video content display is improved.
It should be appreciated that, in embodiments of the present application, the input unit 104 may include a graphics processing unit (Graphics Processing Unit, GPU) 1041 and a microphone 1042. The graphics processing unit 1041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 107 includes at least one of a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen and may include two parts: a touch detection device and a touch controller. The other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a first memory area storing programs or instructions and a second memory area storing data, where the first memory area may store an operating system, an application program or instruction required for at least one function (such as a sound playing function or an image playing function), and the like. Further, the memory 109 may include volatile memory or nonvolatile memory, or the memory 109 may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchlink DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 109 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 110 may include one or more processing units. Optionally, the processor 110 integrates an application processor and a modem processor, where the application processor primarily handles the operating system, the user interface, application programs, and the like, and the modem processor, such as a baseband processor, primarily handles wireless communication signals. It will be appreciated that the modem processor may alternatively not be integrated into the processor 110.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the above-described embodiment of the display method, and can achieve the same technical effects, so that repetition is avoided, and no further description is given here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the embodiment of the display method, and can achieve the same technical effects, so that repetition is avoided, and the description is omitted here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-on-chip, a chip system, a system-on-a-chip, or the like.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the embodiments of the display method described above, and achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware alone, although in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present application, or the part of it contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (e.g., ROM/RAM, a magnetic disk, or an optical disk), the product including instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. In light of the present application, those of ordinary skill in the art may devise many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (12)

1. A display method, the method comprising:
acquiring at least one keyword corresponding to a first video;
displaying text playing content in a preset mode in a display area corresponding to the video cover image under the condition of displaying the video cover image of the first video;
wherein the text play content is generated based on the at least one keyword.
2. The method of claim 1, wherein displaying the text play content in a preset manner comprises at least one of:
circularly scrolling and playing the text playing content;
or, in the case that the text play content includes the at least one keyword, sequentially playing the at least one keyword.
3. The method according to claim 1 or 2, wherein after the text play content is displayed in a preset manner in the display area corresponding to the video cover image, the method further comprises:
receiving a first input of a user to the text play content;
playing the first video in response to the first input.
4. The method of claim 3, wherein the text-playing content includes the at least one keyword;
the receiving a first input of a user to the text play content includes:
receiving a first input of a target keyword from a user, wherein the target keyword is one of the at least one keyword;
the playing the first video in response to the first input, comprising:
in response to the first input, jumping to a video clip position corresponding to the target keyword in the first video, and playing the first video.
5. The method of claim 1, wherein the text-playing content includes the at least one keyword;
after the text playing content is displayed in the display area corresponding to the video cover image in a preset manner, the method further comprises:
receiving a second input of a user to at least two keywords of the at least one keyword;
in response to the second input, synthesizing at least two video clips corresponding to the at least two keywords to obtain a second video;
the at least two video clips are video clips in the first video, and one keyword corresponds to at least one video clip.
6. The method of claim 1, wherein before the obtaining at least one keyword corresponding to the first video, the method further comprises:
receiving a third input of a user to a video playing interface of the first video, wherein the third input is used for inputting keywords;
the obtaining at least one keyword corresponding to the first video includes:
acquiring input information of the third input in response to the third input, wherein the input information comprises at least one first keyword;
and taking the at least one first keyword and at least one second keyword extracted from the first video as the at least one keyword.
7. A display device, the device comprising: the device comprises an acquisition module and a display module, wherein:
the acquisition module is used for acquiring at least one keyword corresponding to the first video;
the display module is used for displaying text playing content in a preset mode in a display area corresponding to the video cover image under the condition that the video cover image of the first video is displayed;
wherein the text play content is generated based on the at least one keyword.
8. The apparatus of claim 7, wherein displaying the text-playing content in a preset manner comprises at least one of:
circularly scrolling and playing the text playing content;
or, in the case that the text play content includes the at least one keyword, sequentially playing the at least one keyword.
9. The apparatus according to claim 7 or 8, characterized in that the apparatus further comprises: a receiving module and an executing module, wherein:
the receiving module is used for receiving a first input of the text playing content from a user;
the execution module is used for responding to the first input received by the receiving module and playing the first video.
10. The apparatus of claim 9, wherein the text-playing content comprises the at least one keyword;
the receiving module is specifically configured to receive a first input of a target keyword from a user, where the target keyword is one of the at least one keyword;
the execution module is specifically configured to jump to a video clip position corresponding to the target keyword in the first video to play the first video in response to the first input received by the receiving module.
11. The apparatus of claim 7, wherein the text-playing content comprises the at least one keyword;
The receiving module is further used for receiving second input of at least two keywords in the at least one keyword by a user;
the apparatus further comprises: a processing module;
the processing module is used for responding to the second input received by the receiving module, synthesizing at least two video clips corresponding to the at least two keywords, and obtaining a second video;
the at least two video clips are video clips in the first video, and one keyword corresponds to at least one video clip.
12. The apparatus of claim 7, wherein the apparatus further comprises: a receiving module and a determining module;
the receiving module is used for receiving a third input of a user to the video playing interface of the first video, wherein the third input is used for inputting keywords;
the acquisition module is specifically configured to respond to the third input received by the receiving module, and acquire input information of the third input, where the input information includes at least one first keyword;
the determining module is configured to take the at least one first keyword acquired by the acquiring module and at least one second keyword extracted from the first video as the at least one keyword.
CN202310562327.7A 2023-05-17 2023-05-17 Display method and device Pending CN116634215A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310562327.7A CN116634215A (en) 2023-05-17 2023-05-17 Display method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310562327.7A CN116634215A (en) 2023-05-17 2023-05-17 Display method and device

Publications (1)

Publication Number Publication Date
CN116634215A true CN116634215A (en) 2023-08-22

Family

ID=87635859

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310562327.7A Pending CN116634215A (en) 2023-05-17 2023-05-17 Display method and device

Country Status (1)

Country Link
CN (1) CN116634215A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination