CN110662104B - Video dragging bar generation method and device, electronic equipment and storage medium - Google Patents

Video dragging bar generation method and device, electronic equipment and storage medium

Info

Publication number
CN110662104B
CN110662104B (application CN201910942432.7A)
Authority
CN
China
Prior art keywords
image sequence
area
image
generating
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910942432.7A
Other languages
Chinese (zh)
Other versions
CN110662104A (en)
Inventor
谷保震
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kingsoft Internet Security Software Co Ltd
Original Assignee
Beijing Kingsoft Internet Security Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kingsoft Internet Security Software Co Ltd filed Critical Beijing Kingsoft Internet Security Software Co Ltd
Priority to CN201910942432.7A priority Critical patent/CN110662104B/en
Publication of CN110662104A publication Critical patent/CN110662104A/en
Application granted granted Critical
Publication of CN110662104B publication Critical patent/CN110662104B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316 Generation of visual interfaces for content selection or interaction involving specific graphical features, for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/47217 End-user interface for interacting with content, for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks

Abstract

The application discloses a method and a device for generating a video dragging bar. The method includes the following steps: acquiring a dragging bar generation request, where the generation request includes a first image sequence; generating a second image sequence from the first image sequence according to a preset rule, where one of the first image sequence and the second image sequence is a video stream and the other is the key image set corresponding to the video stream; displaying the video stream in a playing area of a preset canvas; and sequentially displaying the images in the key image set, in order, in a dragging bar area of the preset canvas. A way of displaying the video stream within the progress bar is thereby provided, which makes it convenient for a user to quickly select the video stream segment they wish to export.

Description

Video dragging bar generation method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for generating a video dragging bar.
Background
Video is widely used in daily production and life of users as an information recording medium, and along with the wide application of video, the demand for video-based editing is becoming more diversified. For example, a selection cut is made to certain segments in the video.
In the related art, selecting a clip from a video requires dragging the progress bar used for playback, viewing the video from the position indicated by the progress bar, and repeatedly adjusting the progress bar while checking the video at the corresponding position until the desired clip is found. Selecting a video clip in this way is inefficient.
Disclosure of Invention
The application provides a method and a device for generating a video dragging bar, which aim to solve the technical problem in the prior art that selection efficiency is low when video clips are selected from a video.
The embodiment of the application provides a method for generating a video dragging bar, which comprises the following steps: acquiring a dragging bar generation request, wherein the generation request comprises a first image sequence; generating a second image sequence according to the first image sequence according to a preset rule, wherein one of the first image sequence and the second image sequence is a video stream, and the other one is a key image set corresponding to the video stream; displaying the video stream in a playing area of a preset canvas; and sequentially displaying the images in the key image set in the dragging strip area of the preset canvas according to the sequence.
In addition, the video dragging bar generating method of the embodiment of the application further includes the following additional technical features:
in one possible implementation manner of the present application, the first image sequence is a video stream; generating a second image sequence according to the first image sequence according to a preset rule, wherein the generating of the second image sequence comprises: and extracting key images from the first image sequence according to a preset time interval to form the second image sequence.
In one possible implementation manner of the present application, the first image sequence is a key image set; before generating a second image sequence according to the first image sequence according to a preset rule, the method further includes: acquiring a drag bar duration configuration instruction, wherein the configuration instruction comprises a target duration corresponding to a drag bar; generating a second image sequence according to the first image sequence according to a preset rule, wherein the generating comprises: determining the display time length of each image according to the target time length and the number of images contained in the first image sequence; and sequentially and continuously displaying the images in the first image sequence according to the display duration of each image to generate the second image sequence.
In a possible implementation manner of the present application, sequentially displaying, in order, the images in the key image set among the first image sequence and the second image sequence in the dragging bar region of the preset canvas includes: during playback of the video stream, highlighting, in the dragging bar, the target image corresponding to the currently played position of the video stream, according to the correspondence between each image in the key image set and the video stream.
In a possible implementation manner of the present application, before generating the second image sequence according to the first image sequence according to the preset rule, the method further includes: if an editing instruction is obtained, sequentially displaying the first image sequence on a first layer of a playing area of a preset canvas, and generating an editing picture on a second layer of the playing area of the preset canvas according to the editing instruction, wherein the second layer is positioned on the upper layer of the first layer, and the second layer is a transparent layer; generating a second image sequence according to the first image sequence according to a preset rule, wherein the generating of the second image sequence comprises: and generating a second image sequence according to the picture displayed in the playing area according to a preset rule.
In a possible implementation manner of the present application, the playing area of the preset canvas includes a first area and a second area, the first area is a transparent area, the second area is a non-transparent area, and the display priority of the second area is higher than the display priority of the first area; before generating a second image sequence according to the first image sequence according to a preset rule, the method further includes: if an image cutting request is obtained, displaying the first image sequence in a first area of the preset canvas; when a cutting instruction is obtained, adjusting the distribution mode of the first area and the second area in the playing area according to the cutting area in the cutting instruction; generating a second image sequence according to the first image sequence according to a preset rule, wherein the generating of the second image sequence comprises: and generating a second image sequence according to the image sequence displayed in the target area according to a preset rule, wherein the target area is a partial area which is not covered by the second area in the first area.
Another embodiment of the present application provides a video dragging bar generating apparatus, including: a first obtaining module, configured to obtain a dragging bar generation request, where the generation request includes a first image sequence; a generating module, configured to generate a second image sequence from the first image sequence according to a preset rule, where one of the first image sequence and the second image sequence is a video stream, and the other is the key image set corresponding to the video stream; and a display module, configured to display the video stream in a playing area of a preset canvas, and further configured to sequentially display the images in the key image set, in order, in the dragging bar area of the preset canvas.
In addition, the video dragging bar generating device of the embodiment of the application further comprises the following additional technical features:
in one possible implementation manner of the present application, the first image sequence is a video stream; the generating module is specifically configured to: and extracting key images from the first image sequence according to a preset time interval to form the second image sequence.
In one possible implementation manner of the present application, the first image sequence is a key image set; the device further comprises: the second acquisition module is used for acquiring a drag bar duration configuration instruction, wherein the configuration instruction comprises a target duration corresponding to the drag bar; the generation module is specifically configured to: determining the display time length of each image according to the target time length and the number of images contained in the first image sequence; and sequentially and continuously displaying the images in the first image sequence according to the display duration of each image to generate the second image sequence.
In a possible implementation manner of the present application, the display module is further configured to: and in the video stream playing process, according to the corresponding relation between each image in the key image set and the video stream, highlighting and displaying the target image corresponding to the currently played video stream in the dragging bar.
In one possible implementation manner of the present application, the method further includes: the editing module is configured to, before generating a second image sequence according to the first image sequence according to a preset rule, if an editing instruction is obtained, sequentially display the first image sequence on a first layer of a playing area of a preset canvas, and generate an editing picture on a second layer of the playing area of the preset canvas according to the editing instruction, where the second layer is located on an upper layer of the first layer and is a transparent layer; the generation module is specifically configured to: and generating a second image sequence according to the picture displayed in the playing area according to a preset rule.
In a possible implementation manner of the present application, the playing area of the preset canvas includes a first area and a second area, the first area is a transparent area, the second area is a non-transparent area, and the display priority of the second area is higher than the display priority of the first area; the display module is further configured to, before generating a second image sequence according to the first image sequence according to the preset rule, if an image clipping request is obtained, display the first image sequence in a first area of the preset canvas, and when a clipping instruction is obtained, adjust a distribution manner of the first area and the second area in the playing area according to a clipping area in the clipping instruction; the generation module is specifically configured to: and generating a second image sequence according to the image sequence displayed in the target area according to a preset rule, wherein the target area is a partial area which is not covered by the second area in the first area.
Yet another embodiment of the present application provides an electronic device, which includes a memory and a processor, where the memory stores computer-readable instructions, and the instructions, when executed by the processor, cause the processor to execute the method for generating a video dragging bar according to the above-mentioned embodiment of the present application.
Yet another embodiment of the present application provides a non-transitory computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the video dragging bar generating method according to the above embodiment of the present application.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
the method comprises the steps of obtaining a dragging strip generation request, wherein the generation request comprises a first image sequence, generating a second image sequence according to the first image sequence according to a preset rule, wherein one of the first image sequence and the second image sequence is a video stream, the other one of the first image sequence and the second image sequence is a key image set corresponding to the video stream, displaying the video streams in the first image sequence and the second image sequence in a playing area of a preset canvas, and further sequentially displaying images in the key image set in the first image sequence and the second image sequence in the dragging strip area of the preset canvas according to the sequence. Therefore, a method for displaying the video stream in the progress bar is provided, which is convenient for a user to quickly select the video stream segment to be derived.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow diagram of a video drag bar generation method according to one embodiment of the present application;
FIG. 2 is a schematic diagram of a display interface of a preset canvas according to an embodiment of the present application;
FIG. 3 is a schematic view of a first region and a second region distribution according to one embodiment of the present application;
FIG. 4 is a schematic view of a first region and a second region distribution according to another embodiment of the present application;
FIG. 5 is a schematic diagram of a video drag bar generation scene according to one embodiment of the present application;
FIG. 6 is a schematic structural diagram of a video dragging bar generating device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a video dragging bar generating apparatus according to another embodiment of the present application;
fig. 8 is a schematic structural diagram of a video dragging bar generating apparatus according to another embodiment of the present application;
FIG. 9 is a schematic structural diagram of an electronic device according to one embodiment of the present application.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
A video drag bar generation method and apparatus according to an embodiment of the present application are described below with reference to the drawings.
The video dragging bar generation method in the embodiment of the application may be executed by a hardware device with an image processor, such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device; the wearable device may be a smart bracelet, a smart watch, or smart glasses.
Fig. 1 is a flowchart of a video drag bar generation method according to an embodiment of the present application, as shown in fig. 1, the method including:
Step 101, a dragging bar generation request is obtained, where the generation request includes a first image sequence.
The first image sequence may be composed of a plurality of images, or may be a video stream including a plurality of consecutive frames of images.
It should be noted that, in different application scenarios, the manner of acquiring the drag bar generation request is different, and the example is as follows:
the first example is:
in this example, the user issues the dragging bar generation request by voice.
Specifically, after the user selects the first image sequence, the user's voice data is collected by a sound pickup device, and when a keyword such as "drag bar generation" is recognized in the collected voice data, a dragging bar generation request for the selected first image sequence is obtained.
The second example is:
in this example, the user sends a drag bar generation request in the form of an action.
Specifically, after a first image sequence is selected by a user, gesture actions or facial expression actions of the user are collected through a camera or a touch screen, the collected actions are matched with preset actions, and if the matching is successful, a request of the user for generating a dragging bar of the first image sequence is obtained.
Step 102, according to a preset rule, generating a second image sequence according to the first image sequence, wherein one of the first image sequence and the second image sequence is a video stream, and the other one is a key image set corresponding to the video stream.
Specifically, a second image sequence is generated from the first image sequence according to a preset rule, where one of the first image sequence and the second image sequence is a video stream and the other is the key image set corresponding to that video stream. For example, when the first image sequence is a set of individual images, a video stream is generated from those images as the second image sequence; when the first image sequence is a video stream, the second image sequence is determined by identifying key image frames in the video stream.
As a possible implementation manner, the first image sequence is a video stream, and the key images are extracted from the first image sequence according to a preset time interval to form a second image sequence, where the preset time interval is greater than or equal to the playing time of each frame of image in the video stream, and when the preset time interval is equal to the playing time of each frame of image in the video stream, each frame of image in the video stream can be traversed as a key image to form the second image sequence.
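By way of illustration only, the following TypeScript sketch shows one possible browser-side realization of this implementation, sampling an HTML video element once per preset interval and capturing each sampled frame as a thumbnail. The function and parameter names (extractKeyImages, intervalSeconds, thumbWidth, thumbHeight) are assumptions made for the sketch and are not defined by this application; it also assumes the video's metadata (its duration) has already been loaded.

```typescript
// Sketch only: extract key images from a video stream at a preset time interval.
async function extractKeyImages(
  video: HTMLVideoElement,
  intervalSeconds: number,
  thumbWidth = 160,
  thumbHeight = 90,
): Promise<string[]> {
  const canvas = document.createElement("canvas");
  canvas.width = thumbWidth;
  canvas.height = thumbHeight;
  const ctx = canvas.getContext("2d")!;
  const keyImages: string[] = [];

  // Walk the stream from start to end, taking one key image per preset interval.
  for (let t = 0; t < video.duration; t += intervalSeconds) {
    if (video.currentTime !== t) {
      const seeked = new Promise<void>((resolve) =>
        video.addEventListener("seeked", () => resolve(), { once: true }),
      );
      video.currentTime = t;
      await seeked; // wait until the frame at time t has been decoded
    }
    ctx.drawImage(video, 0, 0, thumbWidth, thumbHeight);
    keyImages.push(canvas.toDataURL("image/jpeg")); // one thumbnail per key image
  }
  return keyImages;
}
```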
As another possible implementation manner, if the first image sequence is a key image set, a dragging bar duration configuration instruction is obtained, where the configuration instruction includes a target duration corresponding to the dragging bar; that is, the duration of the dragging bar is predefined. The display duration of each image is then determined from the target duration and the number of images contained in the first image sequence, for example by taking the quotient of the target duration and the number of images as the display duration, and the images in the first image sequence are then displayed sequentially and continuously according to the display duration of each image to generate the second image sequence.
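By way of illustration only, the sketch below shows one way this implementation could be realized in a browser: the per-image display duration is taken as the target duration divided by the number of images, each key image is held on a canvas for that duration, and the canvas is recorded into a video stream. All identifier names (imagesToVideoStream, targetSeconds, and so on) are assumptions, and canvas capture via MediaRecorder is only one possible recording mechanism, not one prescribed by this application.

```typescript
// Sketch only: turn a key image set into a video stream of a configured target duration.
async function imagesToVideoStream(
  images: HTMLImageElement[],
  targetSeconds: number, // target duration configured for the dragging bar
): Promise<Blob> {
  // Display duration of each image = target duration / number of images.
  const perImageMs = (targetSeconds / images.length) * 1000;

  const canvas = document.createElement("canvas");
  canvas.width = images[0].naturalWidth;
  canvas.height = images[0].naturalHeight;
  const ctx = canvas.getContext("2d")!;

  // Record the canvas while the key images are shown on it one after another.
  const recorder = new MediaRecorder(canvas.captureStream(30)); // 30 fps capture
  const chunks: Blob[] = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  const finished = new Promise<Blob>((resolve) => {
    recorder.onstop = () => resolve(new Blob(chunks, { type: "video/webm" }));
  });

  recorder.start();
  for (const img of images) {
    ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
    await new Promise((r) => setTimeout(r, perImageMs)); // hold for the display duration
  }
  recorder.stop();
  return finished;
}
```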
As another possible implementation manner, image content in the video stream is identified, and when the image content includes a subject image that the user wants to select, the corresponding image is taken as an image in the key image set.
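By way of illustration only, a minimal sketch of this content-based selection, assuming a hypothetical predicate detectsWantedSubject that stands in for whatever subject or object detector an implementer might choose; neither the predicate nor its trivial stub below is part of this application.

```typescript
// Hypothetical predicate standing in for an implementer-chosen subject detector;
// the stub body only keeps the sketch self-contained and compilable.
function detectsWantedSubject(frame: ImageData): boolean {
  // Placeholder: real logic would analyse the frame's pixel content.
  return frame.data.length > 0;
}

// Keep only the frames whose content contains the subject the user wants to
// select; those frames form the key image set.
function selectKeyImagesByContent(frames: ImageData[]): ImageData[] {
  return frames.filter(detectsWantedSubject);
}
```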
In the actual execution process, in order to meet personalized requirements of the user, the user's video editing instruction can also be responded to.
Specifically, if an editing instruction is obtained, the first image sequence is sequentially displayed on a first layer of the playing area of the preset canvas, and an editing picture is generated on a second layer of the playing area of the preset canvas according to the editing instruction, where the second layer is located above the first layer and is a transparent layer; the transparency of the second layer can be determined according to how much the user wants the picture in the first layer to be obscured. Thus, as shown in fig. 2, the preset canvas can display the video picture in the first layer and the editing picture in the second layer (in the figure, the editing picture is added text).
In this example, the editing instruction includes a target editing mode, which corresponds to the specific editing content applied to the video, such as text addition (text format and the like), animation addition, special-effect addition (particle effects such as fireworks), color change, filter addition, and the like. Different target editing modes may be combined; for example, combining the text addition and animation addition modes can achieve an effect of text continuously flipping left and right. An editing picture is then generated in the second layer of the preset canvas according to the target editing mode.
And further, according to a preset rule, generating a second image sequence according to the picture displayed in the playing area.
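By way of illustration only, the following sketch assumes the two layers are implemented as two stacked canvases: the lower (first) layer shows the current video frame, and the upper (second) layer stays transparent except for the edit picture (here, added text). The function and variable names, the alpha value, and the text styling are illustrative assumptions.

```typescript
// Sketch only: a playing area built from a video layer and a transparent edit layer.
function renderEditedFrame(
  video: HTMLVideoElement,
  firstLayer: HTMLCanvasElement,  // lower layer: the video picture
  secondLayer: HTMLCanvasElement, // upper, transparent layer: the edit picture
  caption: string,
) {
  const bottom = firstLayer.getContext("2d")!;
  bottom.drawImage(video, 0, 0, firstLayer.width, firstLayer.height);

  const top = secondLayer.getContext("2d")!;
  top.clearRect(0, 0, secondLayer.width, secondLayer.height); // keep the layer transparent
  top.globalAlpha = 0.9; // the layer's transparency decides how much it obscures the picture below
  top.font = "32px sans-serif";
  top.fillStyle = "white";
  top.fillText(caption, 20, secondLayer.height - 30);
}

// The second image sequence can then be generated from what the playing area
// shows, e.g. by compositing both layers onto one capture canvas per frame.
```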
In addition, considering that in some application scenarios, when the user edits the first video data, the user has a cropping requirement on the corresponding picture (for example, the user only wants to keep part of a person image in the picture), in order to meet this personalized requirement, in one embodiment of the invention the user's cropping instruction is responded to.
In this embodiment, the playing area of the preset canvas includes a first area and a second area, where the number of first areas and second areas can be set arbitrarily; the first area is a transparent area, the second area is a non-transparent area, and the display priority of the second area is higher than that of the first area. That is, once part of the picture in the canvas moves from the first area into the second area, the non-transparent picture of the second area is displayed instead of that part of the picture. If an image cropping request is obtained, the first image sequence is displayed in the first area of the preset canvas, and when a cropping instruction is obtained, the distribution of the first area and the second area in the playing area is adjusted according to the cropping area in the cropping instruction: the part of the first area outside the cropping area is covered by the second area. The second image sequence is then generated, according to the preset rule, from the image sequence displayed in the target area, where the target area is the part of the first area not covered by the second area; the picture in the second area is no longer displayed, and the cropping of the relevant picture is thereby achieved.
The cropping instruction can be triggered through a touch-screen action track or through a selection operation using a cropping-area selection tool. The cropping area may be any shape, such as a circle or a square, which is not enumerated here.
In addition, in different application scenarios, the way the distribution of the first area and the second area in the preset canvas is adjusted according to the cropping area in the cropping instruction differs. As one possible implementation, when the number and distribution of the second areas and the first area are as shown in fig. 3, a movement operation by the user on the picture in the first area can be received: the user can move the displayed picture left and right beneath the areas, and every part of the displayed picture that is blocked by a second area is equivalent to having been cropped.
As another possible implementation manner, when the number and distribution of the second area and the first area are as shown in fig. 4, the second area can be made to cover the picture of the corresponding first area through a drag operation on the second area, and every part of the displayed picture covered by the second area is equivalent to having been cropped.
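By way of illustration only, the sketch below models the first area, the second area, and the cropping area as rectangles and computes the target area, i.e. the part of the first area left uncovered by the second area, as the intersection of the first area with the cropping area. The Rect type and the function name are assumptions made for the sketch.

```typescript
interface Rect { x: number; y: number; width: number; height: number; }

// The second (non-transparent) area is laid out so that it covers everything in
// the first area outside the cropping area; the remaining target area is then
// simply the overlap of the first area with the cropping area.
function targetAreaAfterCrop(firstArea: Rect, cropArea: Rect): Rect {
  const x = Math.max(firstArea.x, cropArea.x);
  const y = Math.max(firstArea.y, cropArea.y);
  const right = Math.min(firstArea.x + firstArea.width, cropArea.x + cropArea.width);
  const bottom = Math.min(firstArea.y + firstArea.height, cropArea.y + cropArea.height);
  return { x, y, width: Math.max(0, right - x), height: Math.max(0, bottom - y) };
}

// The cropped second image sequence is then generated from what the target
// area shows, e.g. by copying only that rectangle of the playing area for
// every displayed frame.
```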
Step 103, displaying the video stream among the first image sequence and the second image sequence in a playing area of a preset canvas.
Step 104, sequentially displaying, in order, the images in the key image set among the first image sequence and the second image sequence in the dragging bar area of the preset canvas.
Specifically, whichever of the first image sequence and the second image sequence is the video stream is displayed in the playing area of the preset canvas, and the images of the corresponding key image set are displayed in order in the dragging bar area of the preset canvas. Thus, a user may view, in the progress bar, the particular set of images contained in the video.
In an embodiment of the present invention, during playback of the video stream (whichever of the first image sequence and the second image sequence it is), the target image corresponding to the currently played position of the video stream is highlighted in the dragging bar according to the correspondence between each image in the key image set and the video stream, as shown in fig. 5; for example, that image may be brightened or otherwise emphasized in the dragging bar.
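By way of illustration only, a minimal sketch of this highlighting behaviour, assuming the key images were extracted at a fixed preset interval so that the current thumbnail index can be derived from the playback time; the element list, the interval parameter, and the CSS class name are assumptions.

```typescript
// Sketch only: highlight the drag-bar thumbnail that matches the current playback position.
function highlightCurrentThumb(
  video: HTMLVideoElement,
  thumbEls: HTMLElement[],  // drag-bar thumbnails, in key-image order
  intervalSeconds: number,  // the preset interval used when extracting key images
) {
  video.addEventListener("timeupdate", () => {
    // Map the current playback time to the index of the corresponding key image.
    const index = Math.min(
      thumbEls.length - 1,
      Math.floor(video.currentTime / intervalSeconds),
    );
    // Emphasize only the thumbnail that matches the currently played position.
    thumbEls.forEach((el, i) => el.classList.toggle("is-current", i === index));
  });
}
```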
Therefore, the video dragging bar generation method according to the embodiment of the present application supports displaying both the video and its key image set, and the user can control the playing progress of the video through the image content in the dragging bar. For example, as shown in fig. 5, the user can accurately locate the desired starting playing position and ending playing position according to the specific images displayed in the dragging bar; the selection from the starting position to the ending position can be made through the width of the dragged range on the progress bar, and when playback reaches the ending point, the video is played again from the starting point.
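By way of illustration only, a minimal sketch of the replay behaviour just described, assuming the starting and ending playing positions selected on the dragging bar are already available as timestamps; the function and parameter names are assumptions.

```typescript
// Sketch only: loop playback between the start and end points selected on the dragging bar.
function loopSelectedSegment(
  video: HTMLVideoElement,
  startTime: number, // starting playing position picked on the dragging bar
  endTime: number,   // ending playing position picked on the dragging bar
) {
  video.currentTime = startTime;
  void video.play();
  video.addEventListener("timeupdate", () => {
    // When playback reaches the ending point, jump back to the starting point.
    if (video.currentTime >= endTime) {
      video.currentTime = startTime;
    }
  });
}
```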
To sum up, the video dragging bar generation method of the embodiment of the present application obtains a dragging bar generation request, where the generation request includes a first image sequence; generates a second image sequence from the first image sequence according to a preset rule, where one of the first image sequence and the second image sequence is a video stream and the other is the key image set corresponding to the video stream; displays the video stream in a playing area of a preset canvas; and then displays the images in the key image set, in order, in the dragging bar area of the preset canvas. A way of displaying the video stream within the progress bar is thereby provided, which makes it convenient for a user to quickly select the video stream segment to be exported.
In order to implement the above embodiment, the present application further provides a video dragging bar generating device.
Fig. 6 is a schematic structural diagram of a video dragging bar generating apparatus according to an embodiment of the present application, and as shown in fig. 6, the video dragging bar generating apparatus includes: a first obtaining module 100, a generating module 200, and a displaying module 300, wherein,
the first obtaining module 100 is configured to obtain a request for generating a dragging bar, where the request includes a first image sequence.
The generating module 200 is configured to generate a second image sequence according to a first image sequence according to a preset rule, where one of the first image sequence and the second image sequence is a video stream, and the other is a key image set corresponding to the video stream.
The display module 300 is configured to display the video stream in a play area of a preset canvas.
In this embodiment, the display module 300 is further configured to sequentially display the images in the key image set in the dragging bar area of the preset canvas according to the order.
In an embodiment of the application, the first image sequence is a video stream, and the generating module 200 is specifically configured to:
and extracting key images from the first image sequence according to a preset time interval to form a second image sequence.
In an embodiment of the present application, the first image sequence is a key image set. As shown in fig. 7, on the basis of the embodiment shown in fig. 6, the apparatus further includes: a second obtaining module 400, where the second obtaining module 400 is configured to obtain a dragging bar duration configuration instruction, and the configuration instruction includes a target duration corresponding to the dragging bar.
In this embodiment, the generating module 200 is specifically configured to:
determining the display time length of each image according to the target time length and the number of images contained in the first image sequence;
and sequentially and continuously displaying the images in the first image sequence according to the display time length of each image to generate a second image sequence.
In one embodiment of the present application, the display module 300 is further configured to:
and in the process of playing the video streams in the first image sequence and the second image sequence, according to the corresponding relation between each image in the key image set and the video stream, highlighting and displaying the target image corresponding to the currently played video stream in the dragging bar.
In one embodiment of the present application, as shown in fig. 8, on the basis of fig. 7, the apparatus further includes: the editing module 500 is configured to, before generating the second image sequence according to the first image sequence according to the preset rule, sequentially display the first image sequence on a first layer of a playing area of a preset canvas if an editing instruction is obtained, and generate an editing picture on a second layer of the playing area of the preset canvas according to the editing instruction, where the second layer is located on an upper layer of the first layer, and the second layer is a transparent layer.
In this embodiment, the generating module 200 is specifically configured to: and generating a second image sequence according to the picture displayed in the playing area according to a preset rule.
In an embodiment of the application, the playing area of the preset canvas includes a first area and a second area, the first area is a transparent area, the second area is a non-transparent area, and the display priority of the second area is higher than the display priority of the first area, the display module 300 is further configured to display the first image sequence in the first area of the preset canvas if the image clipping request is obtained before the second image sequence is generated according to the first image sequence according to the preset rule, and adjust a distribution manner of the first area and the second area in the playing area according to the clipping area in the clipping instruction when the clipping instruction is obtained.
In this embodiment, the generating module 200 is specifically configured to:
and generating a second image sequence according to the image sequence displayed in the target area according to a preset rule, wherein the target area is a partial area which is not covered by the second area in the first area.
It should be noted that the foregoing explanation on the embodiment of the method for generating a video dragging bar is also applicable to the device for generating a video dragging bar of this embodiment, and is not repeated here.
To sum up, the video dragging bar generation apparatus of the embodiment of the present application obtains a dragging bar generation request, where the generation request includes a first image sequence; generates a second image sequence from the first image sequence according to a preset rule, where one of the first image sequence and the second image sequence is a video stream and the other is the key image set corresponding to the video stream; displays the video stream in a playing area of a preset canvas; and then displays the images in the key image set, in order, in the dragging bar area of the preset canvas. A way of displaying the video stream within the progress bar is thereby provided, which makes it convenient for a user to quickly select the video stream segment to be exported.
In order to implement the foregoing embodiments, an electronic device is further provided in an embodiment of the present application, including a processor and a memory;
wherein, the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to implement the video dragging bar generation method described in the above embodiments.
FIG. 9 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present application. The electronic device 12 shown in fig. 9 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 9, the electronic device 12 is in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. These architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, to name a few.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile Memory, such as Random Access Memory (RAM) 30 and/or cache Memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 9, and commonly referred to as a "hard drive"). Although not shown in FIG. 9, a disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact disk Read Only Memory (CD-ROM), a Digital versatile disk Read Only Memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described herein.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public Network such as the Internet) via the Network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing, for example, implementing the methods mentioned in the foregoing embodiments, by executing programs stored in the system memory 28.
In order to implement the foregoing embodiments, the present application further proposes a non-transitory computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the video dragging bar generation method described in the foregoing embodiments.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are well known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are exemplary and should not be construed as limiting the present application and that changes, modifications, substitutions and alterations in the above embodiments may be made by those of ordinary skill in the art within the scope of the present application.

Claims (12)

1. A method for generating a video dragging bar, comprising:
acquiring a dragging bar generation request, wherein the generation request comprises a first image sequence;
generating a second image sequence according to the first image sequence according to a preset rule, wherein one of the first image sequence and the second image sequence is a video stream, and the other one is a key image set corresponding to the video stream;
displaying the video stream in a playing area of a preset canvas;
sequentially displaying the images in the key image set in a dragging strip area of the preset canvas according to the sequence;
the playing area of the preset canvas comprises a first area and a second area, the first area is a transparent area, the second area is a non-transparent area, and the display priority of the second area is higher than that of the first area;
before generating a second image sequence according to the first image sequence according to a preset rule, the method further includes:
if an image cutting request is obtained, displaying the first image sequence in a first area of the preset canvas;
when a cutting instruction is obtained, adjusting the distribution mode of the first area and the second area in the playing area according to the cutting area in the cutting instruction;
generating a second image sequence according to the first image sequence according to a preset rule, wherein the generating comprises:
and generating a second image sequence according to the image sequence displayed in the target area according to a preset rule, wherein the target area is a partial area which is not covered by the second area in the first area.
2. The method of claim 1, wherein the first sequence of images is a video stream;
generating a second image sequence according to the first image sequence according to a preset rule, wherein the generating of the second image sequence comprises:
and extracting key images from the first image sequence according to a preset time interval to form the second image sequence.
3. The method of claim 1, wherein the first image sequence is a key image set;
before generating a second image sequence according to the first image sequence according to a preset rule, the method further includes:
acquiring a drag bar duration configuration instruction, wherein the configuration instruction comprises a target duration corresponding to a drag bar;
generating a second image sequence according to the first image sequence according to a preset rule, wherein the generating of the second image sequence comprises:
determining the display time length of each image according to the target time length and the number of images contained in the first image sequence;
and sequentially and continuously displaying the images in the first image sequence according to the display duration of each image to generate the second image sequence.
4. The method of claim 1, wherein sequentially displaying the images in the key image set, in order, in the dragging bar region of the preset canvas comprises:
and in the video stream playing process, according to the corresponding relation between each image in the key image set and the video stream, highlighting and displaying a target image corresponding to the currently played video stream in the dragging bar.
5. The method according to any of claims 1-4, wherein before generating the second image sequence from the first image sequence according to the predetermined rule, further comprising:
if an editing instruction is obtained, sequentially displaying the first image sequence on a first layer of a playing area of a preset canvas, and generating an editing picture on a second layer of the playing area of the preset canvas according to the editing instruction, wherein the second layer is positioned on the upper layer of the first layer, and the second layer is a transparent layer;
generating a second image sequence according to the first image sequence according to a preset rule, wherein the generating of the second image sequence comprises:
and generating a second image sequence according to the picture displayed in the playing area according to a preset rule.
6. A video drag bar generation apparatus, comprising:
the system comprises a first acquisition module, a second acquisition module and a first display module, wherein the first acquisition module is used for acquiring a dragging bar generation request which comprises a first image sequence;
the generating module is used for generating a second image sequence according to the first image sequence according to a preset rule, wherein one of the first image sequence and the second image sequence is a video stream, and the other one is a key image set corresponding to the video stream;
the display module is used for displaying the video stream in a playing area of a preset canvas;
the display module is further configured to sequentially display the images in the key image set in the dragging bar area of the preset canvas according to a sequence;
the playing area of the preset canvas comprises a first area and a second area, the first area is a transparent area, the second area is a non-transparent area, and the display priority of the second area is higher than that of the first area;
the display module is further configured to display the first image sequence in a first area of the preset canvas if an image clipping request is obtained before a second image sequence is generated according to the first image sequence according to the preset rule, and adjust a distribution mode of the first area and the second area in the playing area according to a clipping area in the clipping instruction when the clipping instruction is obtained;
the generation module is specifically configured to:
and generating a second image sequence according to the image sequence displayed in the target area according to a preset rule, wherein the target area is a partial area which is not covered by the second area in the first area.
7. The apparatus of claim 6, wherein the first sequence of images is a video stream; the generation module is specifically configured to:
and extracting key images from the first image sequence according to a preset time interval to form the second image sequence.
8. The apparatus of claim 6, wherein the first image sequence is a key image set; the device further comprises:
the second acquisition module is used for acquiring a drag bar duration configuration instruction, wherein the configuration instruction comprises a target duration corresponding to the drag bar;
the generation module is specifically configured to:
determine a display duration for each image according to the target duration and the number of images contained in the first image sequence;
and sequentially and continuously display the images in the first image sequence according to the display duration of each image, so as to generate the second image sequence.
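The following OpenCV sketch illustrates the idea in claim 8: each key image is held for an equal share of the target duration and the resulting frames are written out as a video stream. The function name, output path, codec, and frame rate are assumptions made for the example.

```python
import cv2

def key_images_to_video(key_images, target_duration_s: float, out_path: str, fps: int = 30):
    """key_images: list of BGR numpy arrays (e.g. read with cv2). Each image is
    shown for target_duration_s / len(key_images) seconds in the output video."""
    if not key_images:
        raise ValueError("key image set is empty")
    display_s = target_duration_s / len(key_images)   # per-image display duration
    frames_per_image = max(1, round(display_s * fps))
    h, w = key_images[0].shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for img in key_images:
        resized = cv2.resize(img, (w, h))              # keep a uniform frame size
        for _ in range(frames_per_image):
            writer.write(resized)
    writer.release()

# key_images_to_video(key_images, target_duration_s=6.0, out_path="dragbar.mp4")
```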
9. The apparatus of claim 6, wherein the display module is further configured to:
during playback of the video stream, highlight in the dragging bar the target image corresponding to the currently played portion of the video stream, according to the correspondence between each image in the key image set and the video stream.
10. The apparatus of claim 6, further comprising:
an editing module, configured to, if an editing instruction is obtained before the second image sequence is generated from the first image sequence according to the preset rule, sequentially display the first image sequence on a first layer of a playing area of a preset canvas, and generate an editing picture on a second layer of the playing area of the preset canvas according to the editing instruction, wherein the second layer is located above the first layer and the second layer is a transparent layer;
the generation module is specifically configured to:
generate the second image sequence from the picture displayed in the playing area according to the preset rule.
11. An electronic device comprising a processor and a memory;
wherein the processor reads executable program code stored in the memory and runs a program corresponding to the executable program code, so as to implement the video dragging bar generation method according to any one of claims 1 to 5.
12. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the video dragging bar generation method according to any one of claims 1 to 5.
CN201910942432.7A 2019-09-30 2019-09-30 Video dragging bar generation method and device, electronic equipment and storage medium Active CN110662104B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910942432.7A CN110662104B (en) 2019-09-30 2019-09-30 Video dragging bar generation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910942432.7A CN110662104B (en) 2019-09-30 2019-09-30 Video dragging bar generation method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110662104A (en) 2020-01-07
CN110662104B (en) 2022-05-31

Family

ID=69040277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910942432.7A Active CN110662104B (en) 2019-09-30 2019-09-30 Video dragging bar generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110662104B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100018162A (en) * 2008-08-06 2010-02-17 주식회사 케이티테크 Method of playing video contents by using skip function and method of generating thumbnail image by using skip function
CN102638658A (en) * 2012-03-01 2012-08-15 盛乐信息技术(上海)有限公司 Method and system for editing audio-video
CN105072354A (en) * 2015-07-17 2015-11-18 Tcl集团股份有限公司 Method and system of synthesizing video stream by utilizing a plurality of photographs
CN105554579A (en) * 2015-11-05 2016-05-04 广州爱九游信息技术有限公司 Video frame selection auxiliary method and device and computing equipment capable of playing video
CN105933773A (en) * 2016-05-12 2016-09-07 青岛海信传媒网络技术有限公司 Video editing method and system
CN106851385A (en) * 2017-02-20 2017-06-13 北京金山安全软件有限公司 Video recording method and device and electronic equipment
CN108038185A (en) * 2017-12-08 2018-05-15 广州市百果园信息技术有限公司 Video dynamic edit methods, device and intelligent mobile terminal
CN108090102A (en) * 2016-11-21 2018-05-29 法乐第(北京)网络科技有限公司 A kind of video processing equipment, vehicle and method for processing video frequency
CN108833787A (en) * 2018-07-19 2018-11-16 百度在线网络技术(北京)有限公司 Method and apparatus for generating short-sighted frequency
CN108965599A (en) * 2018-07-23 2018-12-07 Oppo广东移动通信有限公司 Recall method for processing video frequency and Related product

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9619108B2 (en) * 2011-01-14 2017-04-11 Adobe Systems Incorporated Computer-implemented systems and methods providing user interface features for editing multi-layer images
EP2662859B1 (en) * 2012-05-07 2018-11-14 LG Electronics Inc. Mobile terminal for capturing an image in a video and controlling method thereof

Also Published As

Publication number Publication date
CN110662104A (en) 2020-01-07

Similar Documents

Publication Publication Date Title
CN110868631B (en) Video editing method, device, terminal and storage medium
CN113923301A (en) Apparatus, method and graphical user interface for capturing and recording media in multiple modes
CN110636365B (en) Video character adding method and device, electronic equipment and storage medium
EP3454196A1 (en) Method and apparatus for editing object
US20110170008A1 (en) Chroma-key image animation tool
US20130300750A1 (en) Method, apparatus and computer program product for generating animated images
CN112653920B (en) Video processing method, device, equipment and storage medium
CN110572717A (en) Video editing method and device
US11941728B2 (en) Previewing method and apparatus for effect application, and device, and storage medium
EP4273808A1 (en) Method and apparatus for publishing video, device, and medium
CN112801004A (en) Method, device and equipment for screening video clips and storage medium
CN108845741B (en) AR expression generation method, client, terminal and storage medium
CN113918522A (en) File generation method and device and electronic equipment
US7844901B1 (en) Circular timeline for video trimming
CN112887794B (en) Video editing method and device
CN111679772B (en) Screen recording method and system, multi-screen device and readable storage medium
CN110662104B (en) Video dragging bar generation method and device, electronic equipment and storage medium
CN110703973B (en) Image cropping method and device
US10817167B2 (en) Device, method and computer program product for creating viewable content on an interactive display using gesture inputs indicating desired effects
CN115460448A (en) Media resource editing method and device, electronic equipment and storage medium
CN113873319A (en) Video processing method and device, electronic equipment and storage medium
CN111757177B (en) Video clipping method and device
CN114374872A (en) Video generation method and device, electronic equipment and storage medium
US10637905B2 (en) Method for processing data and electronic apparatus
CN112445398A (en) Method, electronic device and computer readable medium for editing pictures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant