CN113099288A - Video production method and device - Google Patents

Video production method and device

Info

Publication number
CN113099288A
Authority
CN
China
Prior art keywords
image
range
video
determining
browser
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110350555.9A
Other languages
Chinese (zh)
Inventor
姜山
邵帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd filed Critical Shanghai Bilibili Technology Co Ltd
Priority to CN202110350555.9A priority Critical patent/CN113099288A/en
Publication of CN113099288A publication Critical patent/CN113099288A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47202End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a video production method and a video production device, wherein the video production method comprises the following steps: creating a corresponding image time axis for the image to be produced in a production interface provided by a browser according to the uploaded image to be produced and a preset display rule; under the condition that a movement operation for a range selection control on the image time axis is received, determining the selected initial image range according to the movement information of the range selection control; receiving a processing operation for the initial image range, and determining a target image range according to the processing operation; and synthesizing a video from the images included in the target image range. In this way, a video can be produced through the browser, so the threshold for producing a video is low, the time for producing a video is shortened, and production efficiency is improved.

Description

Video production method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video production method. The application also relates to a video production apparatus, a computing device, and a computer-readable storage medium.
Background
With the rapid development of computer technology and image processing technology, videos are increasingly popular. In the prior art, if a video needs to be produced, a dedicated application program (including some paid software) must be downloaded and installed, and its tutorial must be learned before the desired video can be produced. However, this video production method requires additionally downloading and installing the corresponding application program and learning and mastering its usage, and the production process of the whole video is cumbersome, so the threshold for producing a video is high, production takes a long time, efficiency is low, and the rapid production and delivery requirements of advertisers cannot be met.
Disclosure of Invention
In view of this, the present application provides a video production method. The application also relates to a video production device, a computing device and a computer readable storage medium, which are used for solving the problem of low video production efficiency in the prior art.
According to a first aspect of the embodiments of the present application, there is provided a video production method applied in a browser, including:
creating a corresponding image time axis for the image to be produced in a production interface provided by the browser according to the uploaded image to be produced and a preset display rule;
under the condition that the movement operation aiming at the range selection control on the image time axis is received, determining the selected initial image range according to the movement information of the range selection control;
receiving processing operation aiming at the initial image range, and determining a target image range according to the processing operation;
and synthesizing a video according to the images included in the target image range.
According to a second aspect of the embodiments of the present application, there is provided a video production apparatus, which is applied to a browser, and includes:
the creating module is configured to create a corresponding image time axis for the image to be produced in a production interface provided by the browser according to the uploaded image to be produced and a preset display rule;
a first determination module configured to determine a selected initial image range according to movement information of a range selection control on the image timeline in a case where a movement operation for the range selection control is received;
a second determination module configured to receive a processing operation for the initial image range and determine a target image range according to the processing operation;
a composition module configured to compose a video from images included in the target image range.
According to a third aspect of embodiments herein, there is provided a computing device comprising:
a memory and a processor;
the memory is to store computer-executable instructions, and the processor is to execute the computer-executable instructions to:
creating a corresponding image time axis for the image to be produced in a production interface provided by a browser according to the uploaded image to be produced and a preset display rule;
under the condition that the movement operation aiming at the range selection control on the image time axis is received, determining the selected initial image range according to the movement information of the range selection control;
receiving processing operation aiming at the initial image range, and determining a target image range according to the processing operation;
and synthesizing a video according to the images included in the target image range.
According to a fourth aspect of embodiments herein, there is provided a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the steps of any of the video production methods.
According to the video production method of the present application, a corresponding image time axis can be created for the image to be produced in a production interface provided by a browser according to the uploaded image to be produced and a preset display rule; under the condition that a movement operation for a range selection control on the image time axis is received, the selected initial image range is determined according to the movement information of the range selection control; a processing operation for the initial image range is received, and a target image range is determined according to the processing operation; and a video is synthesized from the images included in the target image range. In this case, an intelligent video production method based on a browser is provided: a video can be produced as long as a browser is installed on the computer, without downloading and installing a separate application program, so the threshold for producing a video is low, the time for producing a video is shortened, and production efficiency is improved. In addition, the required image range can be customized through the image time axis, and the corresponding video is then synthesized from the images in the customized image range, so that the video required by the user can be synthesized simply and efficiently, further improving video production efficiency.
Drawings
Fig. 1 is a flowchart of a video production method according to an embodiment of the present application;
FIG. 2 is a diagram of an image timeline provided in an embodiment of the present application;
FIG. 3 is a first schematic diagram of a production interface provided by a browser according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a cropping frame provided by an embodiment of the present application;
FIG. 5 is a second schematic diagram of a production interface provided by a browser according to an embodiment of the present application;
FIG. 6 is a third schematic diagram of a production interface provided by a browser according to an embodiment of the present application;
FIG. 7 is a fourth schematic diagram of a production interface provided by a browser according to an embodiment of the present application;
FIG. 8 is a process flow diagram of a video production method applied to GIF animation according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a video production apparatus according to an embodiment of the present application;
fig. 10 is a block diagram of a computing device according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the present application can be implemented in many other ways than those described herein, and those skilled in the art can make similar modifications without departing from the spirit of the present application; therefore, the present application is not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the present application. As used in one or more embodiments of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used in one or more embodiments of the present application to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first aspect may be termed a second aspect, and, similarly, a second aspect may be termed a first aspect, without departing from the scope of one or more embodiments of the present application. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
First, the noun terms to which one or more embodiments of the present application relate are explained.
GIF (Graphics Interchange Format): GIF is a public image file format standard, used to display indexed-color images in hypertext markup language pages, and is widely applied on the Internet and in other online service systems. GIF is divided into static GIF and animated GIF, with the extension .gif; it is a compressed bitmap format that supports transparent backgrounds, is suitable for various operating systems, and has a small file size, so many small animations on the network are in GIF format. In fact, an animated GIF stores multiple images in one image file and plays them in sequence to form an animation; the most common examples are animated emoticon images formed by playing frames one after another. GIF is therefore still an image file format.
Web technology: the method refers to a related technology realized based on a browser, and all realization operations are completed in the browser.
Canvas: the browser canvas, a technology used by the browser to render various images and pictures.
FPS: the definition in the field of images refers to the number of frames transmitted per second of a picture, and in colloquial, refers to the number of pictures in animation or video (e.g., a video is 30FPS or 24 FPS). The FPS measures the amount of information used to store and display the motion video. The greater the number of frames per second, the more fluid the displayed motion will be. Some computer video formats can only provide 15FPS per second. The movie is played at a rate of 24 pictures per second, i.e. 24 still pictures are projected continuously on the screen within one second. The unit of the animation playing speed is FPS, where F is english word Frame, P is Per, and S is Second. Expressed in chinese is how many frames per second, or frames per second, a movie is 24FPS, often referred to simply as 24 frames.
In the present application, a video production method is provided, and the present application relates to a video production apparatus, a computing device, and a computer-readable storage medium, which are described in detail in the following embodiments one by one.
Fig. 1 shows a flowchart of a video production method according to an embodiment of the present application, which is applied in a browser and specifically includes the following steps:
step 102: and according to the uploaded image to be made and a preset display rule, establishing a corresponding image time axis for the image to be made in a making interface provided by the browser.
In practical application, if a video needs to be produced, a corresponding application program must be additionally downloaded and installed, and the user has to learn and master its usage; the production process of the whole video is cumbersome, so the threshold for producing a video is high, production takes a long time, efficiency is low, and the rapid production and delivery requirements of advertisers cannot be met.
Therefore, the present application provides an intelligent video production method based on a browser, in which a corresponding image time axis can be created for the image to be produced in a production interface provided by the browser according to the uploaded image to be produced and a preset display rule; under the condition that a movement operation for a range selection control on the image time axis is received, the selected initial image range is determined according to the movement information of the range selection control; a processing operation for the initial image range is received, and a target image range is determined according to the processing operation; and a video is synthesized from the images included in the target image range. In this way, a video can be produced as long as a browser is installed on the computer, without downloading and installing a separate application program, so the threshold for producing a video is low, the time for producing a video is shortened, and production efficiency is improved.
Specifically, the images to be created refer to images that are subsequently used to synthesize a video, the number of the images to be created depends on the number of images uploaded by a user, and the sources of different images to be created may be different. The preset display rule is a sequence rule for displaying images to be produced from different sources, and for example, the preset display rule may be to display an image extracted from a video first, then display a static image, and finally display an image obtained from an existing GIF image (dynamic or static).
In addition, the production interface provided by the browser may refer to an interface provided in the browser for producing a video, that is, a video production interface, and the production interface provided by the browser may include various controls required for producing the video. The image time axis is a time axis for displaying images to be produced in a thumbnail form according to a sequence, and each unit time on the time axis corresponds to one image to be produced, for example, one image corresponds to every 200 milliseconds, one image corresponds to every 400 milliseconds, or one image corresponds to every 1 second.
In an optional implementation manner of this embodiment, according to the uploaded image to be produced and a preset display rule, a corresponding image timeline is created for the image to be produced in a production interface provided by the browser, and a specific implementation process may be as follows:
determining the display sequence of the images to be manufactured according to the preset display rule;
determining the total number of the images to be made, and determining the display size of the images to be made according to the total number;
generating a thumbnail corresponding to the image to be manufactured based on the display size;
and displaying the thumbnails corresponding to the images to be made in a time axis form according to the display sequence, and creating the image time axis.
It should be noted that after each uploaded image to be produced is acquired, a corresponding image timeline may be created based on the image to be produced, and thumbnails corresponding to the images to be produced are displayed on the image timeline. The method comprises the steps of determining the size of a thumbnail according to the total number of images to be produced, and creating and displaying the corresponding thumbnail according to the size so as to obtain the image time axis.
In the present application, when the image time axis is initially created, an appropriate zoom size can be intelligently selected according to the total number of images to be produced (namely, the total number of frames included in the image time axis), so as to provide a better user experience.
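Purely as an illustrative sketch (not part of the original disclosure), the following TypeScript shows how a thumbnail size might be chosen from the total number of frames and how the thumbnails might be laid out as a timeline strip; the size thresholds and DOM structure are assumptions.

```typescript
// Hypothetical sketch: choose a thumbnail size from the total frame count,
// then render one thumbnail per frame into a horizontal timeline strip.
function chooseThumbnailWidth(totalFrames: number): number {
  // Assumed thresholds: fewer frames -> larger thumbnails, many frames -> smaller ones.
  if (totalFrames <= 30) return 80;
  if (totalFrames <= 120) return 48;
  return 24;
}

function createImageTimeline(container: HTMLElement, frames: string[]): void {
  const width = chooseThumbnailWidth(frames.length);
  container.innerHTML = "";
  frames.forEach((src, index) => {
    const thumb = document.createElement("img");
    thumb.src = src;                           // local object URL or data URL of the frame
    thumb.width = width;
    thumb.dataset.frameIndex = String(index);  // one frame per unit of the timeline
    container.appendChild(thumb);
  });
}
```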
In an optional implementation manner of this embodiment, before creating the image timeline, a data source, that is, an image to be produced required by a composite video, needs to be acquired first, so that according to an uploaded image to be produced and a preset display rule, before creating a corresponding image timeline for the image to be produced in a production interface provided by a browser, the method further includes:
and acquiring the uploaded image to be made under the condition of receiving the data selection instruction.
Specifically, the data selection instruction refers to an instruction triggered by a user through an uploading control in a production interface provided by a browser, and the data selection instruction is used for acquiring an image to be produced uploaded by the user.
In an optional implementation manner of this embodiment, in the case that the data selection instruction is received, acquiring the uploaded image to be produced includes at least one of:
under the condition of receiving a first data selection instruction, acquiring a target video corresponding to the first data selection instruction, and extracting the image to be produced from the target video;
under the condition that a second data selection instruction is received, acquiring a target image corresponding to the second data selection instruction, and determining the target image as the image to be manufactured;
and under the condition of receiving a third data selection instruction, acquiring a GIF image corresponding to the third data selection instruction, and determining the GIF image as the image to be made.
Specifically, the first data selection instruction, the second data selection instruction, and the third data selection instruction may be instructions triggered by different uploading controls, where the first data selection instruction is used to upload a video, the second data selection instruction is used to upload a still picture, and the third data selection instruction is used to upload an existing GIF image (including a still GIF image and a GIF animation).
In addition, the first data selection instruction, the second data selection instruction, and the third data selection instruction may also be instructions triggered by an upload control, and when the upload control is triggered, a file type selected by a user is determined, if the file type is in a video format, at least one video frame is extracted from the video as the image to be produced, if the file type is in a picture format, the file is directly determined as the image to be produced, and if the file type is in a GIF format, each frame of image in the file is determined as the image to be produced.
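Purely as an illustration of the file-type dispatch described above, the following TypeScript sketch routes an uploaded file to the corresponding image source; the helper names extractFramesFromVideo and decodeGifFrames are hypothetical placeholders (the former corresponds to the frame-extraction process described later), not APIs from the original disclosure.

```typescript
// Assumed helpers implemented elsewhere (see the frame-extraction sketch below);
// both return image sources (object URLs or data URLs) for the extracted frames.
declare function extractFramesFromVideo(file: File): Promise<string[]>;
declare function decodeGifFrames(file: File): Promise<string[]>;

// Hypothetical sketch: route an uploaded file to the matching source of images to be produced.
async function handleUpload(file: File): Promise<string[]> {
  if (file.type.startsWith("video/")) {
    return extractFramesFromVideo(file);   // video: extract at least one video frame
  }
  if (file.type === "image/gif") {
    return decodeGifFrames(file);          // GIF: every frame of the animation is used
  }
  return [URL.createObjectURL(file)];      // static picture: use the file directly
}
```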
It should be noted that, after the user triggers the upload control and selects the corresponding file, the image in the selected file may be displayed in the image preview area, and then the upload control may be triggered again to select the file for upload, where the file types selected in the previous and subsequent two times may be the same or different, and when the confirmation upload control in the production interface provided by the browser is triggered, all the images selected by the user may be used as the images to be produced to create the corresponding image timeline.
Illustratively, a video uploading control, a static image uploading control and a GIF animation uploading control are arranged in a production interface provided by a browser, when a user clicks the video uploading control, the user can select a video file to upload, extract a corresponding video frame from the video file, and display the video frame in an image preview area; when a user clicks the static image uploading control, an image file can be selected for uploading, and the uploaded image is displayed in the image preview area; when the user clicks the GIF animation uploading control, the GIF file can be selected to be uploaded, and each frame of image in the uploaded GIF animation is displayed in the image preview area. And after the user clicks a confirmation uploading control in a production interface provided by the browser, generating a corresponding image time axis.
In the present application, a variety of data sources can be selected when producing a video, and a new video can be synthesized based on existing video resources, existing GIF animations, and static images. The user may select only one of a video, a GIF animation, or a static image as the data source, or may first select a video and then additionally select a GIF animation or static images to add to the time axis, so that images from different data sources can be selected at the same time as the images of the final synthesized video. This makes the selection of data sources more flexible and efficient when the user produces a video, and meets the demand for rich GIF material.
In practical application, if the file selected by the user to be uploaded is a video, at least one video frame needs to be extracted from the video uploaded by the user to serve as an image to be produced. In a specific implementation, the extraction of the image to be produced from the target video may be as follows:
determining corresponding extraction fineness according to the playing frame number per second of the target video;
determining the time length of each frame of video to be extracted according to the extraction fineness;
and extracting the image to be made from the target video according to the time length of each frame of video.
Specifically, the extraction fineness refers to the number of frames expected to be acquired per second, and the duration of each frame can be calculated from it, so that one video frame is extracted at every interval of that duration; that is, the duration of each frame of video indicates the time interval at which video frames are extracted. In practical application, the corresponding extraction fineness can be determined according to the number of playing frames per second of the target video.
In addition, for target videos with different durations, different extraction finenesses can be adopted for extraction, that is, for a target video with a longer video duration, in order to control the number of extracted video frames, the extraction finenesses can be made thicker, that is, the duration of each frame of video is longer (one frame of video frame is extracted at a longer interval), and for a target video with a shorter video duration, in order to ensure that enough video frames are extracted, the extraction finenesses can be made thinner, that is, the duration of each frame of video is shorter (one frame of video frame is extracted at a shorter interval).
For example, if the number of playing frames per second of the target video is 24, the corresponding extraction fineness may be the same as the number of playing frames per second of the target video, that is, extraction is performed at a rate of 24 frames per second, where the duration of each frame of video is 1000 ms / 24 frames ≈ 41.7 ms. Alternatively, extraction may be performed at a rate of 30 frames per second (FPS), 15 frames per second, or 10 frames per second, depending on the duration of the target video and the corresponding requirements.
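A rough sketch of the arithmetic described above, assuming a hypothetical policy in which longer videos are sampled more coarsely:

```typescript
// Hypothetical sketch: pick an extraction fineness (frames per second) from the
// video duration, then derive the time step between extracted frames.
function frameStepMs(videoDurationSec: number, videoFps: number): number {
  // Assumed policy: long videos are sampled more coarsely, short videos more finely.
  const targetFps =
    videoDurationSec > 60 ? 10 : videoDurationSec > 20 ? 15 : Math.min(videoFps, 24);
  return 1000 / targetFps; // e.g. 1000 ms / 24 frames ≈ 41.7 ms per extracted frame
}
```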
In a possible implementation manner, the to-be-made image is extracted from the target video according to the duration of each frame of video, and the specific implementation process may be as follows:
playing the target video in real time, and calculating the time point of the next video frame in the target video;
skipping the playing progress of the target video to the time point, pausing the target video, and acquiring a current video frame;
rendering the current video frame to a canvas of a browser, converting the video frame on the canvas into an image for storage to obtain the image to be manufactured, and returning to the operation step of calculating the time point of the next video frame in the target video until the target video is played.
It should be noted that, after a user uploads a target video in a production interface provided by a browser, the target video can be played in real time through a video tag in the browser, then in the playing process, a time point of a next frame in the video for extracting a picture is calculated according to the calculated duration of each frame of video, the video tag is quickly jumped to the time point for playing, the video is immediately paused, then the picture of the paused video is rendered on a canvas of the browser, then a frame image on the canvas is converted into image data to be stored in an internal memory, and the steps are repeated until all the desired video frames are extracted, so that an image to be produced uploaded by the user is obtained.
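A minimal sketch of the seek-pause-render loop described above, using a hidden video element and a browser canvas; the use of the 'seeked' event and data-URL storage are implementation assumptions rather than requirements of the original disclosure.

```typescript
// Hypothetical sketch: extract one frame every `stepMs` milliseconds from a video file
// by seeking the <video> element, drawing the paused picture to a canvas, and saving it.
async function extractFramesFromVideo(file: File, stepMs = 1000 / 24): Promise<string[]> {
  const video = document.createElement("video");
  video.src = URL.createObjectURL(file);
  video.muted = true;
  await new Promise<void>(r => (video.onloadedmetadata = () => r()));

  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d")!;

  const frames: string[] = [];
  for (let t = 0; t < video.duration * 1000; t += stepMs) {
    video.currentTime = t / 1000;                        // jump to the next time point
    await new Promise<void>(r => (video.onseeked = () => r()));
    ctx.drawImage(video, 0, 0);                          // render the paused frame to the canvas
    frames.push(canvas.toDataURL("image/png"));          // keep the frame as image data in memory
  }
  return frames;
}
```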
In an optional implementation manner of this embodiment, after the image time axis is created, it may be further zoomed. That is, after creating a corresponding image time axis for the image to be produced in the production interface provided by the browser according to the uploaded image to be produced and the preset display rule, the method further includes:
and under the condition that a zooming instruction aiming at the image time axis is received, zooming the thumbnail displayed on the image time axis according to a zooming parameter carried by the zooming instruction.
Specifically, the zoom instruction is an instruction triggered by a preset zoom operation and is used to zoom in or zoom out the thumbnails displayed on the image time axis; the preset zoom operation may be scrolling the mouse wheel, clicking a zoom control, and the like. The zoom parameter refers to the magnitude of zooming in or out, and when a zoom instruction is triggered by the preset zoom operation, it can carry the corresponding zoom parameter. For example, the user places the mouse over the image time axis and scrolls the wheel up by three notches; at this time a zoom instruction for the image time axis is received, and the zoom parameter carried by the zoom instruction is a 30% zoom-in. Alternatively, a zoom-in control and a zoom-out control are arranged below the generated image time axis; each click of the zoom-in control zooms in by 10%, and each click of the zoom-out control zooms out by 10%.
For example, fig. 2 is a schematic diagram of an image timeline provided in an embodiment of the present application, and as shown in fig. 2, the image timeline is displayed with different scales (i.e., a pre-zoom image timeline and a post-zoom image timeline).
The image time axis in the application also provides the function of zooming in and zooming out, so that a user can clearly view the thumbnail corresponding to the image when the number of frames is too large or too small.
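As one possible (assumed) realization of the zoom instruction, a wheel event over the timeline could be mapped to a zoom parameter as in the following sketch; the 10%-per-notch value follows the example above, while the element id is hypothetical.

```typescript
// Hypothetical sketch: map a wheel scroll over the timeline to a zoom instruction
// carrying a scale parameter (10% per notch is an assumed value).
const timelineEl = document.getElementById("timeline")!; // assumed timeline container
let currentThumbWidth = 48;                               // current thumbnail width in px

timelineEl.addEventListener("wheel", (e: WheelEvent) => {
  e.preventDefault();
  const notches = Math.round(-e.deltaY / 100);            // wheel up -> positive notches
  const zoomParameter = 1 + notches * 0.1;                // e.g. 3 notches up -> 30% zoom in
  currentThumbWidth = Math.max(8, currentThumbWidth * zoomParameter);
  document.querySelectorAll<HTMLImageElement>("#timeline img")
    .forEach(img => (img.width = Math.round(currentThumbWidth)));
});
```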
In an optional implementation manner of this embodiment, the currently selected image may be rendered to a canvas area in the production interface provided by the browser for display and operation. That is, after creating a corresponding image time axis for the image to be produced in the production interface provided by the browser according to the uploaded image to be produced and the preset display rule, the method further includes:
determining a selected target image on the image time axis, wherein each unit time on the image time axis corresponds to one image;
and rendering the target image to a canvas area in a production interface provided by the browser according to the size of the target image.
It should be noted that the currently selected target image may be rendered to a canvas area in a production interface provided by the browser, so as to facilitate a user to clearly preview the image of each frame, and facilitate a subsequent user to perform operation editing on the image of each frame.
In an optional implementation manner of this embodiment, determining the selected target image on the image time axis includes:
determining an image to be produced selected by a selection operation as the target image when the selection operation for the image to be produced on the image time axis is received;
and under the condition that the selection operation of the image to be produced on the image time axis is not received, determining the first frame image on the image time axis as the target image.
It should be noted that, if the user selects a certain image on the image time axis, the image selected by the user may be used as the target image, and the target image is subsequently rendered to the canvas area in the production interface provided by the browser, that is, the image selected by the user is displayed in the canvas area for the user to preview and operate on. If the user does not select an image on the image time axis, for example when the image time axis has just been generated and initialized and the user has not yet selected any image, the first frame image on the image time axis can be determined as the target image, that is, the first frame image on the image time axis is displayed in the canvas area in the production interface provided by the browser, so that the user can preview and operate on it.
In an optional implementation manner of this embodiment, the rendering of the target image to a canvas area in a production interface provided by the browser according to the size of the target image may be implemented as follows:
determining the width and height of the target image;
determining a width and a height of the canvas area;
scaling the target image to the canvas area.
It should be noted that, in the present application, after the corresponding image time axis is created from the acquired images to be produced, the current frame can be rendered to the canvas area on the page in real time, and the canvas tool in the browser can automatically scale the image to the horizontal or vertical edge of the canvas according to the image's original aspect ratio, so that the user can edit each frame of image globally.
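A minimal sketch of the aspect-ratio-preserving scaling described above, assuming a standard 2D canvas context:

```typescript
// Hypothetical sketch: scale the target image to fit the canvas area while keeping
// its original aspect ratio, so it touches the horizontal or vertical edge.
function drawFitted(ctx: CanvasRenderingContext2D, img: HTMLImageElement): void {
  const { width: cw, height: ch } = ctx.canvas;
  const scale = Math.min(cw / img.naturalWidth, ch / img.naturalHeight);
  const w = img.naturalWidth * scale;
  const h = img.naturalHeight * scale;
  // Center the scaled image inside the canvas area.
  ctx.clearRect(0, 0, cw, ch);
  ctx.drawImage(img, (cw - w) / 2, (ch - h) / 2, w, h);
}
```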
In an optional implementation manner of this embodiment, after creating, according to the uploaded image to be produced and the preset display rule, a corresponding image timeline for the image to be produced in a production interface provided by the browser, the method further includes:
and in the case of receiving a deletion operation, deleting the image indicated by the deletion operation in the image time axis.
It should be noted that after the corresponding image time axis is created, a deletion control is arranged above each image on the image time axis, and when it is detected that the deletion control on a certain image is triggered, the image is deleted. In this way, when consecutive images are too similar to each other, the user can move the mouse over the unnecessary frame and click the deletion control to delete the useless frame.
For example, fig. 3 is a first schematic diagram of a production interface provided by a browser according to an embodiment of the present application. As shown in fig. 3, an "X" mark is displayed on each image thumbnail on the image time axis; the "X" mark is a deletion control, and the user can delete the corresponding frame by clicking it.
In an optional implementation manner of this embodiment, after generating the image timeline, a user may select a certain image in the image timeline to perform editing processing, that is, after creating a corresponding image timeline for the image to be made in a making interface provided by the browser according to the uploaded image to be made and a preset display rule, the method further includes:
and receiving an operation instruction of a first image in the image to be made, and generating a corresponding composite image.
Specifically, the first image is a selected image to be produced that is to be edited, the composite image is an image generated by editing the first image, and the operation instruction is an instruction for performing an editing operation on the image to be produced (i.e., the first image) rendered to the canvas area in the production interface provided by the browser; the operation instruction may be a cropping operation, an operation of adding a special effect, an operation of adding text, and the like.
In an optional implementation manner of this embodiment, the operation instruction may be a cropping instruction, that is, the operation instruction may crop the image to be produced (that is, the first image) rendered in the canvas area to obtain the corresponding composite image. If the image to be produced in the canvas area is to be cropped, a cropping frame needs to be displayed in the canvas area. That is, the operation instruction for the first image in the images to be produced is received, and the corresponding composite image is generated; a specific implementation process may be as follows:
displaying a cropping frame in the canvas area, the cropping frame being located within the image area of the target image, the cropping frame having an area no greater than the target image;
and receiving control operation aiming at the cropping frame, and generating a synthetic image corresponding to the target image according to the control operation.
It should be noted that, the user can select the image size and content of each frame of image of the generated video by using the dragging, zooming and reducing functions of the cropping frame, and the browser can intelligently limit the cropping area to be only within the range of the original target image according to the size of the target image and the dragging position of the user, so as to avoid that the generated video has blank areas due to the misoperation of the user.
For example, fig. 4 is a schematic diagram of a cropping frame according to an embodiment of the present application. As shown in fig. 4, the cropping frame is located within the actual area of the image and can only be displayed inside the image, so that the cropping frame cannot be dragged to an area outside the image.
In practical applications, in one possible implementation, displaying the cropping box in the canvas area includes: determining the target image as a bottom image of the canvas area; adding a cropping frame on the bottom layer image, wherein the cropping frame and the target image are positioned on different layers; and adding a masking layer between the bottom layer image and the cropping frame, wherein the masking layer is used for distinguishing selected areas and unselected areas of the cropping frame.
It should be noted that, the currently selected image (i.e. the target image) may be rendered on a canvas area (canvas) in real time as a base map, a cropping frame may be added on the base map, and rendered on the canvas area as different layers, the cropping frame may be dragged in size and position in proportion, a semi-transparent masking layer may be further disposed between the base map and the cropping frame to distinguish the selected area and the unselected area of the cropping frame, and the unselected area may be set as a semi-transparent dark color.
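As an assumed, single-canvas approximation of the layered rendering described above (the original describes separate layers), the following sketch darkens the unselected area with a semi-transparent mask and redraws the crop region at full brightness:

```typescript
// Hypothetical sketch: draw the target image as the bottom layer, cover it with a
// semi-transparent dark mask, and redraw the crop region on top so it appears "selected".
interface CropRect { left: number; top: number; width: number; height: number; } // relative to the drawn image

function renderCropLayers(ctx: CanvasRenderingContext2D, img: HTMLImageElement, crop: CropRect): void {
  ctx.drawImage(img, 0, 0);                      // bottom layer: the target image
  ctx.fillStyle = "rgba(0, 0, 0, 0.5)";          // mask layer: unselected area darkened
  ctx.fillRect(0, 0, img.naturalWidth, img.naturalHeight);
  ctx.drawImage(img,                             // crop layer: selected area at full brightness
    crop.left, crop.top, crop.width, crop.height,
    crop.left, crop.top, crop.width, crop.height);
}
```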
In one possible implementation manner, receiving a control operation for the cropping frame, and generating a synthetic image corresponding to the target image according to the control operation includes:
determining the position of the cropping frame according to the control operation for the cropping frame;
determining a positional parameter of the cropping frame relative to the underlying image;
acquiring image data in the cropping frame in the target image according to the position parameter;
and determining the acquired image data as the composite image, displaying the composite image in the cropping frame, and displaying the composite image in a cropping preview area in a production interface provided by the browser.
Specifically, the position parameters required for copying the cropping frame may include: left (pixel distance to the left edge of the target image), top (pixel distance to the top edge of the target image), width (cropping width), and height (cropping height). In practical application, left is equal to the distance between the cropping frame and the left side of the canvas area minus the distance between the bottom-layer image and the left side of the canvas area; top is equal to the distance between the cropping frame and the top of the canvas area minus the distance between the bottom-layer image and the top of the canvas area; width is equal to the scaled width of the cropping frame; and height is equal to the scaled height of the cropping frame. The image data on the bottom-layer image corresponding to the cropping frame area can be determined through these parameters, so that the image data in the cropping frame is copied, displayed on the cropping frame, and displayed in the cropping preview area in the production interface provided by the browser.
It should be noted that the position of the cropping frame can be obtained from the real-time dragging of the mouse, and the position of the cropping frame in the canvas area relative to the bottom-layer image is calculated; the image data of the target image at the corresponding position on the bottom-layer image is copied out and pasted into the cropping frame area, so that a brightness contrast between the selected area and the unselected area is achieved. At the same time, the image data copied out by the cropping frame can be separately placed in the cropping preview area in the production interface provided by the browser, for the user to preview whether the finally cropped image meets the requirement.
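A minimal sketch of the position-parameter calculation and the copy to the cropping preview area described above; the DOMRect inputs and the scale handling are assumptions:

```typescript
// Hypothetical sketch: compute the crop position relative to the bottom-layer image
// (left/top/width/height as described above) and copy that region to the crop preview.
function cropToPreview(
  img: HTMLImageElement,
  cropBox: DOMRect,      // crop frame position inside the canvas area (CSS pixels)
  imgBox: DOMRect,       // drawn image position inside the canvas area (CSS pixels)
  preview: HTMLCanvasElement
): void {
  const scale = img.naturalWidth / imgBox.width;        // canvas-to-image pixel ratio
  const left = (cropBox.left - imgBox.left) * scale;    // pixel distance to the image's left edge
  const top = (cropBox.top - imgBox.top) * scale;       // pixel distance to the image's top edge
  const width = cropBox.width * scale;                  // cropping width
  const height = cropBox.height * scale;                // cropping height
  preview.width = width;
  preview.height = height;
  preview.getContext("2d")!.drawImage(img, left, top, width, height, 0, 0, width, height);
}
```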
In an optional implementation manner of this embodiment, each operation on the first image in the image to be produced may also be recorded, so as to facilitate subsequent further processing, that is, after generating a corresponding composite image according to the operation instruction on the first image in the image to be produced, the method further includes:
under the condition that an operation instruction for a first image in the images to be manufactured is received, storing image data operated by the operation instruction;
determining the operation type of the operation instruction;
adding the operation instruction and the corresponding operation type into an operation list;
and setting an instruction index corresponding to the newly added operation instruction according to the instruction index of the operation instruction included in the operation list.
In an optional implementation manner of this embodiment, after receiving an operation instruction for a first image in the image to be produced and generating a corresponding composite image, the method further includes:
under the condition that an undo instruction is received, undoing the operation corresponding to the current operation instruction, and restoring the composite image displayed in the canvas area in the production interface provided by the browser to the state before the operation.
In an optional embodiment, undoing the operation corresponding to the current operation instruction and restoring the composite image displayed in the canvas area in the production interface provided by the browser to the state before the operation includes:
determining the operation type of the current operation instruction under the condition that the undo instruction is received;
determining a target operation instruction of the same operation type in the operation list;
acquiring image data corresponding to the target operation instruction, and updating an instruction index of the current operation instruction to be the instruction index of the target operation instruction;
and restoring the canvas area in the production interface provided by the browser into the image corresponding to the image data.
It should be noted that, in order to prevent misoperation by the user, each operation instruction on the image to be produced may be recorded, so as to implement a function of quickly undoing the previous operation and a function of quickly redoing an undone operation. That is, each time the user drags or zooms the cropping frame, the corresponding operation instruction is recorded, together with the image data after the operation, for subsequent image restoration. Specifically, operations that can be undone/redone include changes to the size and position of the cropping frame, deletion of a target image, selection of an image range, and the like. The user can click the undo control on the production interface provided by the browser to perform the undo/redo operation, or can perform the undo operation more conveniently through a keyboard shortcut (which may be consistent with the undo/redo shortcuts of other conventional application programs, reducing the cost for the user of remembering the shortcut).
In practical application, all effective operation instructions can be recorded in an operation list, and each operation instruction has a corresponding operation type, so that when an operation is undone, the previous operation of the same type (for example, an operation changing the size and position of the cropping frame) can be found. An instruction index points to the current operation instruction, and the instruction index can point to any operation instruction in the operation list.
For example, there are currently 3 operation instructions [a1, b1, a2], where a1 and a2 are operation instructions of the same type, and b1 is an operation instruction of another type. When the user performs a new operation, assuming the new operation instruction is b2, the latest data related to operation b2 is added to the end of the operation list, i.e., [a1, b1, a2, b2]; at this time, the instruction index of b2 is 3 (the index starts at 0, so the fourth instruction has index 3). When the user clicks undo, the instruction index of the previous operation instruction of the same type (i.e., the instruction index 1 corresponding to b1) is found, the image data corresponding to that instruction index is acquired and restored, and the instruction index of the current operation instruction is updated to 1, thereby completing one undo operation.
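A minimal sketch of the operation list and type-aware undo described above; the Operation shape and snapshot-based restore are assumptions:

```typescript
// Hypothetical sketch of the operation list and type-aware undo described above.
interface Operation { type: string; imageData: string; }  // imageData: frame snapshot after the operation

const operations: Operation[] = [];
let currentIndex = -1;                                     // instruction index of the current operation

function recordOperation(op: Operation): void {
  operations.push(op);                                     // append the newest operation to the end
  currentIndex = operations.length - 1;
}

function undo(): Operation | undefined {
  const type = operations[currentIndex]?.type;
  if (type === undefined) return undefined;
  // Walk backwards to the previous operation of the same type and restore its image data.
  for (let i = currentIndex - 1; i >= 0; i--) {
    if (operations[i].type === type) {
      currentIndex = i;                                    // update the instruction index
      return operations[i];                                // caller re-renders this snapshot to the canvas
    }
  }
  return undefined;
}
```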
In addition, the method and the device provide the capability of performing frame deletion on the unnecessary target images, after the corresponding images are deleted, the remaining images can be updated to new indexes at the same time, and the rendered images can be synchronously updated in the canvas area.
In an optional implementation manner of this embodiment, after rendering the target image to a canvas area in a production interface provided by the browser according to the size of the target image, the method further includes:
and when a switching operation is received, rendering an image indicated by the switching operation to a canvas area in a production interface provided by the browser.
It should be noted that the image rendered in the current canvas area may be switched through a switching control provided in the production interface provided by the browser, where the switching control may be a previous frame/next frame control, and the switched image is re-rendered to the canvas area in the production interface provided by the browser for the user to browse.
In addition, an image time axis corresponding to the image to be produced is created, after the current frame is rendered to a canvas area on a page, a user can perform editing operation such as cutting operation on the image to be produced displayed in the canvas area, and in order to facilitate the user to quickly preview the effect of a subsequently generated video, a preview function can be provided in the application.
Step 104: and under the condition that the movement operation aiming at the range selection control on the image time axis is received, determining the selected initial image range according to the movement information of the range selection control.
Specifically, on the basis of creating a corresponding image time axis for the image to be produced in a production interface provided by the browser according to the uploaded image to be produced and a preset display rule, further, under the condition that a moving operation for a range selection control on the image time axis is received, a selected initial image range is determined according to moving information of the range selection control. The range selection control refers to a control for selecting an image range in the image timeline, and the control can be one or two selection controls.
In an optional implementation manner of this embodiment, the range selection control includes a first selection control and a second selection control; according to the movement information of the range selection control, determining the selected initial image range, wherein the specific implementation process can be as follows:
determining a first stop position of the first selection control and a second stop position of the second selection control;
determining an image to be produced corresponding to the first stop position as a start image, and determining an image to be produced corresponding to the second stop position as an end image;
determining a range between the start image and the end image as the selected initial image range.
It should be noted that after the images to be produced uploaded by the user are acquired, each image to be produced is converted into a local picture link and displayed on the image time axis in the form of a picture thumbnail, and each block (i.e., each unit time) on the image time axis corresponds to one frame of image. Two selection controls can be arranged on the image time axis, and the user can select the required initial image range by dragging the two selection controls; one selection control indicates the start position of the selected initial image range, and the other selection control indicates the end position of the selected initial image range.
Along the above example, as shown in fig. 3, two range selection controls are further arranged on the image time axis, and an image between the two range selection controls is an initial image range selected by a user.
In one possible implementation, determining the first stop position of the first selection control includes:
determining a first distance of the first selection control relative to an interface boundary of a production interface provided by the browser;
determining a scroll distance of the image timeline;
determining a second distance of the first region boundary of the image time axis relative to an interface boundary of a production interface provided by the browser;
determining a movement distance of the first selection control relative to a starting position of the image timeline according to the first distance, the scroll distance, and the second distance;
determining the movement distance as a first stop position of the first selection control.
Specifically, the interface boundary of the production interface provided by the browser may be a left side boundary with reference to a horizontal direction, and accordingly, the first region boundary of the image time axis is a left end of the image time axis, the scroll distance of the image time axis is a distance of an image that is not displayed before the currently displayed image along the scroll direction of the image time axis, that is, the scroll distance of the image time axis refers to a distance of forward scrolling of the image. It should be noted that, the above-mentioned calculation method is a calculation method of scrolling to the left with the left side of the image time axis as a starting point, and of course, in practical applications, the image time axis may also scroll to the right with the right side as a starting point, and in this case, the interface boundary of the creation interface provided by the browser may be a right side boundary with the horizontal direction as a reference, and accordingly, the first region boundary of the image time axis is the right end of the image time axis.
In practical application, after the user drags the first selection control, the browser can calculate in real time the position of the first selection control relative to the production interface provided by the browser, the position of the first selection control relative to the image time axis, and the length of the time axis, so as to calculate the movement distance of the first selection control relative to the starting position of the image time axis (namely, the distance of the first selection control relative to the leftmost side of the scrolling area of the time axis) and determine the first stop position of the first selection control. The specific calculation formula is: the movement distance d of the first selection control relative to the starting position of the image time axis = the first distance d3 of the first selection control from the left side of the production interface provided by the browser + the scroll distance d2 of the image time axis - the second distance d1 from the left end of the image time axis region to the left side of the production interface provided by the browser, that is, d = d3 + d2 - d1.
For example, fig. 5 is a second schematic diagram of a production interface provided by a browser according to an embodiment of the present application. As shown in fig. 5, d1 refers to the second distance from the left end of the image time axis region to the left side of the production interface provided by the browser, d2 refers to the scroll distance of the image time axis, and d3 refers to the first distance from the first selection control to the left side of the production interface provided by the browser. As shown in fig. 5, if the length of the image time axis is 10 s, the production interface provided by the browser in the figure only displays 0-800 ms; the user can display images that are not currently displayed by scrolling the image time axis, and after the image time axis is scrolled, the time at the leftmost end of the image time axis displayed in the production interface provided by the browser no longer corresponds to the start time of 0 s.
It should be noted that a specific implementation process for determining the second stop position of the second selection control is the same as that for determining the first stop position of the first selection control, and is not described herein again. By the method, the moving distance of the two selection controls relative to the initial position of the image time axis can be calculated, and the calculated moving distance can be stored so as to facilitate the direct reference of the next dragging.
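A minimal sketch of the distance calculation d = d3 + d2 - d1 described above, assuming the timeline is a horizontally scrollable element:

```typescript
// Hypothetical sketch: compute how far a selection control has moved from the start
// of the image time axis, i.e. d = d3 + d2 - d1 as described above.
function selectionOffset(
  control: HTMLElement,     // the range selection control
  timeline: HTMLElement     // the scrollable image time axis region
): number {
  const d3 = control.getBoundingClientRect().left;    // distance of the control from the interface's left side
  const d2 = timeline.scrollLeft;                      // scroll distance of the image time axis
  const d1 = timeline.getBoundingClientRect().left;   // distance of the timeline region from the interface's left side
  return d3 + d2 - d1;                                 // movement distance relative to the timeline start
}
```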
In one possible implementation, determining an image to be produced corresponding to the first stop position as a start image includes:
mapping the first stop position to the image time axis, and determining a corresponding start frame index;
determining a corresponding image to be produced according to the start frame index;
and rendering the image to be produced to a canvas area in the production interface provided by the browser.
It should be noted that the first stop position may be mapped to a corresponding start frame index on the image timeline, and the image to be produced corresponding to the start frame index is then rendered to the canvas area in the production interface provided by the browser. Similarly, an end frame index corresponding to the second stop position may be determined, so that the image to be produced corresponding to the end frame index can be rendered to the canvas area. In a specific implementation, each image displayed in the image timeline occupies a fixed width of the timeline; on this basis, the first stop position can be divided by the width occupied by each image and the result rounded to obtain the corresponding start frame index.
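As a rough illustration of this mapping, the sketch below (TypeScript, with assumed names such as frameWidth and an ImageBitmap array holding the decoded frames) divides the stop position by the per-frame width, rounds it, clamps it to a valid index, and renders the corresponding frame to the canvas.

```typescript
// Sketch: map a stop position (pixels from the start of the timeline) to a frame index,
// assuming every thumbnail occupies the same width on the image timeline.
function positionToFrameIndex(stopPosition: number, frameWidth: number, frameCount: number): number {
  const index = Math.round(stopPosition / frameWidth);
  return Math.min(Math.max(index, 0), frameCount - 1); // clamp to a valid index
}

// Sketch: render the frame for a given index into the canvas area.
function renderFrame(canvas: HTMLCanvasElement, frames: ImageBitmap[], index: number): void {
  const ctx = canvas.getContext('2d');
  if (!ctx) return;
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.drawImage(frames[index], 0, 0, canvas.width, canvas.height);
}
```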
In addition, the determined start frame index and end frame index of the image range can be synchronously updated to the canvas area, and the canvas area can render the images of the selected region of the image timeline through the new indexes.
In an optional implementation manner of this embodiment, the initial image range selected is determined according to the movement information of the range selection control, and a specific implementation process may also be as follows:
under the condition that the selection operation aiming at the target image to be manufactured is received, the range selection control is moved to the position corresponding to the target image to be manufactured;
under the condition of receiving a range selection operation, selecting an image in a preset range after the target image to be manufactured is selected by taking the target image to be manufactured as a starting image;
and determining the selected image as the selected initial image range.
It should be noted that the preset range is a preset image selection span, which may be, for example, 5 seconds, 10 seconds, or 15 seconds. The range selection control may also comprise only one selection control: the user clicks a target image to be produced on the image timeline to move the range selection control to the position corresponding to that target image, and then clicks the capture control so that, starting from the selected target image, the images within the subsequent preset range are automatically selected as the user's initial image range. If fewer images than the preset range remain after the selected target image, the selection stops at the last image.
That is to say, when the image timeline is in the unselected state, the user can click any image to quickly jump to that frame position; the range selection control is moved to that position synchronously, and the canvas area also updates in real time to show the frame image the user clicked. After the user clicks the capture control below the image timeline, the clicked target image is used as the starting position and the images within the following preset range (for example, a 5-second span) are selected by default, as sketched below. In addition, after the images within the preset range have been automatically selected from the target image, the user can fine-tune the selected image range by dragging the range selection control.
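A minimal sketch of this single-control selection is given below; the frame rate (fps) used to convert the preset span into a frame count is an assumption, since the description only states the span in seconds.

```typescript
// Sketch: clicking frame `targetIndex` and then the capture control selects a preset
// span of frames after it, clamped to the last frame when fewer frames remain.
function selectPresetRange(
  targetIndex: number,
  frameCount: number,
  presetSeconds = 5, // assumed default preset range of 5 seconds
  fps = 10,          // assumed frame rate of the timeline
): { startIndex: number; endIndex: number } {
  const span = Math.round(presetSeconds * fps);
  const endIndex = Math.min(targetIndex + span, frameCount - 1);
  return { startIndex: targetIndex, endIndex };
}
```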
For example, fig. 6 is a third schematic diagram of the production interface provided by the browser according to an embodiment of the present application. As shown in fig. 6, when the image timeline is in the unselected state, the user can click a target image on the image timeline (that is, click an image to jump to that frame), the canvas synchronously renders the frame corresponding to the target image, and the progress bar moves to the corresponding position.
In an optional implementation manner of this embodiment, after determining the selected initial image range according to the movement information of the range selection control, the method further includes:
identifying the selected image range.
It should be noted that in the present application the image range selected by the user may be visually identified, so that the user can clearly see which images are selected. In a specific implementation, the selected image range may be highlighted, or the transparency of the images in the selected range may be changed to distinguish them from unselected images, as sketched below. Of course, other identification manners may be used in practical applications, and the present application does not limit this.
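One way this identification might look in a browser implementation is sketched below; the thumbnail elements and the "selected" CSS class are assumed names, and dimming via opacity is only one of the identification manners mentioned above.

```typescript
// Sketch: identify the selected range by dimming unselected thumbnails on the timeline.
function highlightRange(thumbnails: HTMLElement[], startIndex: number, endIndex: number): void {
  thumbnails.forEach((el, i) => {
    const selected = i >= startIndex && i <= endIndex;
    el.style.opacity = selected ? '1' : '0.4'; // change transparency of unselected images
    el.classList.toggle('selected', selected); // or drive a highlight style via CSS
  });
}
```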
Step 106: and receiving processing operation aiming at the initial image range, and determining a target image range according to the processing operation.
Specifically, on the basis of determining the selected initial image range according to the movement information of the range selection control when a moving operation for the range selection control on the image timeline is received, a processing operation for the initial image range is then received and a target image range is determined according to the processing operation. The processing operation for the initial image range is an operation performed on all the images included in the selected initial image range as a whole; it may be an operation that updates the selected image range, such as dragging the whole selection frame, or an operation that deletes all the images included in the initial image range. The target image range is the image range finally selected by the user, and the required video can be synthesized based on the images within it.
In an optional implementation manner of this embodiment, a processing operation for the initial image range is received, and a target image range is determined according to the processing operation, and a specific implementation process may be as follows:
receiving a moving operation of a selection frame corresponding to the initial image range;
determining an update start image and an update end image according to the position of the moved selection frame;
and determining the target image range according to the update starting image and the update ending image.
It should be noted that, after the user selects the initial image range, a selection frame enclosing the images in the initial image range is formed. The user may drag the selection frame (that is, the selected image range) and, by adjusting its position, adjust the positions of the selected start image and end image, thereby updating the selected image range and determining the target image range required for the final composite video, as sketched below.
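A minimal sketch of this whole-frame drag is shown below; shifting both ends by the same number of frames while clamping to the timeline bounds is an assumed but natural way to keep the range length fixed.

```typescript
// Sketch: dragging the whole selection frame shifts the start and end images together.
function moveSelection(
  range: { startIndex: number; endIndex: number },
  deltaFrames: number, // signed number of frames the selection frame was dragged
  frameCount: number,
): { startIndex: number; endIndex: number } {
  const length = range.endIndex - range.startIndex;
  const maxStart = frameCount - 1 - length;
  const startIndex = Math.min(Math.max(range.startIndex + deltaFrames, 0), maxStart);
  return { startIndex, endIndex: startIndex + length }; // updated start and end images
}
```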
In an optional implementation manner of this embodiment, a processing operation for the initial image range is received, and a target image range is determined according to the processing operation, where a specific implementation process may also be as follows:
receiving a deletion operation for an image included in the initial image range;
deleting images included in the initial image range in the image timeline;
and determining the residual images in the image time axis as the target image range.
It should be noted that, after the user selects the initial image range, a selection frame enclosing the images in the initial image range is formed; the user may directly delete all the images included in the selection frame (that is, the selected image range), and the remaining images are determined as the target image range required for the final composite video. Therefore, when the image frames to be deleted are consecutive and numerous, selecting an image range and deleting it in one operation improves image processing efficiency and hence video synthesis efficiency, as sketched below.
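A short sketch of the batch deletion, under the assumption that the frames are held in an ordered array, could look like this; the remaining array is then treated as the target image range.

```typescript
// Sketch: delete every frame inside the selected range in one operation.
function deleteRange<T>(frames: T[], startIndex: number, endIndex: number): T[] {
  // Keep only the frames outside the selected range.
  return frames.filter((_, i) => i < startIndex || i > endIndex);
}
```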
In an optional implementation manner of this embodiment, after determining the target image range according to the processing operation, the method further includes:
acquiring preview parameters under the condition of receiving a preview instruction;
and previewing the video corresponding to the image included in the target image range on a production interface provided by the browser according to the preview parameter.
Specifically, the preview parameter may refer to the speed at which the images are played.
It should be noted that, after the corresponding image timeline is created for the images to be produced in the production interface provided by the browser, an operation may be performed on a first image among the images to be produced on the timeline to obtain a corresponding composite image. Therefore, when the video corresponding to the images included in a target image range needs to be previewed after the target image range has been determined, the target image range may contain both processed composite images and unprocessed second images; in that case, what is actually previewed on the production interface provided by the browser is the video corresponding to the composite images and the second images in the image range.
The second image is an image in the image range other than the first image; that is, the image range may include a first image on which an editing operation has been performed and a second image on which no editing operation has been performed. When previewing the images in the image range, the composite image resulting from the editing operation is displayed for the first image, while the second image (i.e., the original image) is displayed directly because no editing operation was performed on it.
It should be noted that the user may directly click the play button in the production interface provided by the browser to preview the video effect corresponding to all the images in the selected target image range, so as to determine whether the final video generation effect meets expectations. When the browser plays the images in the selected target image range, the playback speed may be determined according to a delay parameter (i.e., the preview parameter) set by the user. That is, when the image timeline is in the selected state, the preview playback is limited to the target image range selected by the user and does not preview other, unselected images.
Illustratively, the image timeline contains image 1, image 2, image 3, image 4, and image 5. Assuming that the target image range selected by the user is image 2 to image 4 and a cropping operation has been performed on image 2 and image 3, then image 2 and image 3 are first images and image 4 is a second image; when a preview instruction is received, the composite images obtained by cropping image 2 and image 3, together with image 4, are displayed on the production interface provided by the browser for the user to preview.
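A minimal sketch of this choice, assuming the editing results are kept in a map from frame index to composite image, is:

```typescript
// Sketch: during preview, show the composite (edited) image for a first image if one
// exists, otherwise show the original (second) image unchanged.
function frameForPreview(
  index: number,
  originals: ImageBitmap[],
  edits: Map<number, ImageBitmap>, // assumed store of composite images keyed by frame index
): ImageBitmap {
  return edits.get(index) ?? originals[index];
}
```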
In a possible implementation manner, according to the preview parameter, previewing a video corresponding to an image included in the range of the target image on a production interface provided by the browser, includes:
determining the playing delay time length according to the preview parameter;
determining a frame index of a starting image and a frame index of an ending image in the target image range, and determining the frame index of the starting image as a preview frame index;
rendering the image corresponding to the preview frame index in a preview window in a production interface provided by the browser;
and after the playing delay time, enabling the preview frame index to increase by 1, and returning to execute the operation step of rendering the image corresponding to the preview frame index in a preview window in a production interface provided by the browser until the preview frame index is equal to the frame index corresponding to the ending image.
It should be noted that, to allow the user to preview the video effect quickly, the present application also provides a real-time animation preview function. The browser records the preview frame index of the image currently rendered in the canvas area (i.e., the position of the current image). When the user clicks play, the browser determines the playback rate and, from it, the corresponding play delay duration (i.e., how many frames per second are previewed). After the current image has been rendered and the play delay duration has elapsed, the preview index is incremented by 1 and the next image is rendered, and so on until all the images to be previewed have been rendered.
For example, when the preview parameter is 10 FPS, the play delay duration is 100 ms: after the current image has been rendered and a 100 ms delay has elapsed, the preview frame index is incremented by 1 and the next image is rendered, and so on until all the required images have been rendered, as sketched below.
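The preview loop described above could be sketched as follows in TypeScript; deriving the delay as 1000 / fps and driving the loop with setTimeout are assumptions about one possible browser-side realization.

```typescript
// Sketch: render the frame at the current preview index, wait for the play delay
// duration, then advance, until the end frame of the selected range is reached.
async function previewRange(
  frames: ImageBitmap[],
  startIndex: number,
  endIndex: number,
  fps: number, // preview parameter, e.g. 10 FPS -> 100 ms delay
  canvas: HTMLCanvasElement,
): Promise<void> {
  const ctx = canvas.getContext('2d');
  if (!ctx) return;
  const delayMs = 1000 / fps;
  for (let previewIndex = startIndex; previewIndex <= endIndex; previewIndex++) {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.drawImage(frames[previewIndex], 0, 0, canvas.width, canvas.height);
    await new Promise((resolve) => setTimeout(resolve, delayMs)); // play delay duration
  }
}
```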
In an optional implementation manner of this embodiment, after previewing a video corresponding to an image included in the target image range on a production interface provided by the browser according to the preview parameter, the method further includes:
determining a stop position of a drag operation for a display progress bar in case of receiving the drag operation;
determining an image corresponding to the stop position;
and previewing the image on a production interface provided by the browser.
It should be noted that, in the present application, the image corresponding to a certain time point can be jumped to quickly by clicking or dragging the progress bar. If the image at that time point is a first image on which an editing operation has been performed, the composite image resulting from the editing operation is displayed; if it is a second image on which no editing operation has been performed, the second image is displayed directly. In a specific implementation, clicking a frame on the timeline quickly positions to that frame, and the picture corresponding to the frame is synchronously rendered on the canvas through the index obtained from the click, as sketched below.
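A minimal sketch of mapping a click or drag position on the progress bar to a frame index is given below; the bar width and click coordinate are assumed to be available from the pointer event.

```typescript
// Sketch: map a click/drag position on the progress bar to the frame index to render.
function seekToFrame(clickX: number, barWidth: number, frameCount: number): number {
  const ratio = Math.min(Math.max(clickX / barWidth, 0), 1);
  return Math.min(Math.floor(ratio * frameCount), frameCount - 1);
}
```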
For example, fig. 7 is a fourth schematic diagram of the production interface provided by the browser according to an embodiment of the present application. As shown in fig. 7, the production interface provided by the browser includes a video/GIF/picture selection area through which the user can upload the required images to be produced; an undo/redo area through which the user can undo the current operation; a canvas area for rendering the image selected by the user; a progress bar for animation playback, which can be dragged to quickly position to a certain frame; an image timeline that displays all the images to be produced uploaded by the user and includes a range selection control, which the user can drag to customize the image range required for the composite video; and a delay setting area that controls the preview parameter.
Step 108: and synthesizing a video according to the images included in the target image range.
Specifically, on the basis of receiving a processing operation for the initial image range and determining the target image range according to the processing operation, a video is then synthesized according to the images included in the target image range. The images included in the target image range may include composite images obtained by performing operation processing on first images and/or second images that have not been subjected to operation processing, the second images being the images to be produced other than the first images.
In practical application, the user can define the image range to be previewed, preview the video effect generated by the images (composite images and/or second images) in that range, and then, according to the preview effect, go back and adjust the selected image range, the preview parameter, and the like, or edit an image again or withdraw the corresponding operation instruction, until the preview effect meets the user's expectation. The user can then click the generation control to generate and export the corresponding video based on the images in the currently selected target image range, and the video can subsequently be delivered as an advertisement through the corresponding platform.
For example, the user may select the 1st to 10th images and preview the video effect corresponding to the images in that range, then select the 5th to 12th images and preview the corresponding effect, further select the 3rd to 8th images and preview the corresponding effect, compare the several animation effects, and finally generate and export the final video based on the 5th to 12th images.
According to the video production method, a corresponding image timeline can be created for the images to be produced in the production interface provided by the browser according to the uploaded images to be produced and a preset display rule; when a moving operation for the range selection control on the image timeline is received, the selected initial image range is determined according to the movement information of the range selection control; a processing operation for the initial image range is received and a target image range is determined according to the processing operation; and a video is synthesized according to the images included in the target image range. In this way, an intelligent browser-based video production method is provided: a video can be produced simply by having a browser installed on a computer, without downloading and installing a separate application program, so the threshold for producing a video is low, the time required is reduced, and production efficiency is improved. In addition, the required image range can be customized through the image timeline and the corresponding video synthesized from the images in that customized range, so the video the user needs can be composed simply and efficiently, further improving video production efficiency. Moreover, the method provides rich functions such as intelligently cropping images, deleting useless images, withdrawing the current operation instruction, and previewing the animation effect in real time; it is simple to operate and greatly improves the efficiency of producing videos.
The application of the video production method provided by the present application to GIF animation is further described below with reference to fig. 8. Fig. 8 shows a processing flowchart of a video production method applied to GIF animation according to an embodiment of the present application, which specifically includes the following steps:
step 802: a user enters a GIF animation interface, selects a video/existing GIF animation through an uploading control on the GIF animation interface, and extracts an image of the video or the existing GIF animation; and/or selecting a plurality of still images, each still image being treated as a frame image.
Step 804: rendering all the acquired images to an image time axis; rendering the currently selected image in the canvas area, and performing undo/redo operation on the currently rendered image in the canvas area; in the canvas area, adjusting the size of the GIF through a clipping box; performing operations of selecting an image range, deleting frames and the like on an image time axis; and previewing the animation effect in real time.
Step 806: and determining whether the animation effect is in accordance with the expectation, if so, executing the following step 808, otherwise, returning to the step 804, or emptying the currently uploaded image and returning to the step 802 again.
Step 808: and generating a corresponding GIF animation, storing and putting.
The application provides an intelligent browser-based video production method. A GIF animation can be produced simply by having a browser installed on a computer, without downloading and installing a separate application program, so the threshold for producing a GIF animation is low, the time required is reduced, and production efficiency is improved. In addition, the required image range can be customized through the image timeline and the corresponding GIF animation synthesized from the images in that customized range, so the GIF animation the user needs can be composed simply and efficiently, improving the efficiency of producing GIF animations. Moreover, rich functions such as intelligently cropping images, deleting useless images, withdrawing the current operation instruction, and previewing the animation effect in real time are provided; the operation is simple, and the efficiency of producing GIF animations is greatly improved.
Corresponding to the above method embodiment, the present application further provides an embodiment of a video production apparatus, and fig. 9 shows a schematic structural diagram of a video production apparatus provided in an embodiment of the present application. As shown in fig. 9, the apparatus includes:
a creating module 902, configured to create, according to the uploaded image to be created and a preset display rule, a corresponding image timeline for the image to be created in a creation interface provided by the browser;
a first determining module 904 configured to determine, in a case where a moving operation for a range selection control on the image timeline is received, a selected initial image range according to movement information of the range selection control;
a second determining module 906 configured to receive a processing operation for the initial image range and determine a target image range according to the processing operation;
a compositing module 908 configured to composite a video from images included within the target image range.
Optionally, the apparatus further comprises an upload module configured to:
and acquiring the uploaded image to be made under the condition of receiving the data selection instruction.
Optionally, the upload module is further configured to at least one of:
under the condition of receiving a first data selection instruction, acquiring a target video corresponding to the first data selection instruction, and extracting the image to be produced from the target video;
under the condition that a second data selection instruction is received, acquiring a target image corresponding to the second data selection instruction, and determining the target image as the image to be manufactured;
and under the condition of receiving a third data selection instruction, acquiring a GIF image corresponding to the third data selection instruction, and determining the GIF image as the image to be made.
Optionally, the creating module 902 is further configured to:
determining the display sequence of the images to be manufactured according to the preset display rule;
determining the total number of the images to be made, and determining the display size of the images to be made according to the total number;
generating a thumbnail corresponding to the image to be manufactured based on the display size;
and displaying the thumbnails corresponding to the images to be made in a time axis form according to the display sequence, and creating the image time axis.
Optionally, the apparatus further comprises a scaling module configured to:
and under the condition that a zooming instruction aiming at the image time axis is received, zooming the thumbnail displayed on the image time axis according to a zooming parameter carried by the zooming instruction.
Optionally, the range selection control comprises a first selection control and a second selection control; the first determination module 904 is further configured to:
determining a first stop position of the first selection control and a second stop position of the second selection control;
determining an image to be produced corresponding to the first stop position as a start image, and determining an image to be produced corresponding to the second stop position as an end image;
determining a range between the start image and the end image as the selected initial image range.
Optionally, the first determining module 904 is further configured to:
under the condition that the selection operation aiming at the target image to be manufactured is received, the range selection control is moved to the position corresponding to the target image to be manufactured;
under the condition of receiving a range selection operation, selecting an image in a preset range after the target image to be manufactured is selected by taking the target image to be manufactured as a starting image;
and determining the selected image as the selected initial image range.
Optionally, the second determining module 906 is further configured to:
receiving a moving operation of a selection frame corresponding to the initial image range;
determining an update start image and an update end image according to the position of the moved selection frame;
and determining the target image range according to the update starting image and the update ending image.
Optionally, the second determining module 906 is further configured to:
receiving a deletion operation for an image included in the initial image range;
deleting images included in the initial image range in the image timeline;
and determining the residual images in the image time axis as the target image range.
Optionally, the apparatus further comprises a preview module configured to:
acquiring preview parameters under the condition of receiving a preview instruction;
and previewing the video corresponding to the image included in the target image range on a production interface provided by the browser according to the preview parameter.
The application provides an intelligent browser-based video production apparatus. A video can be produced simply by having a browser installed on a computer, without downloading and installing a separate application program, so the threshold for producing a video is low, the time required is reduced, and production efficiency is improved. In addition, the required image range can be customized through the image timeline and the corresponding video synthesized from the images in that customized range, so the video the user needs can be composed simply and efficiently, further improving video production efficiency. Moreover, rich functions such as intelligently cropping images, deleting useless images, withdrawing the current operation instruction, and previewing the animation effect in real time are provided; the operation is simple, and the efficiency of producing videos is greatly improved.
The above is a schematic scheme of a video production apparatus of the present embodiment. It should be noted that the technical solution of the video creation apparatus and the technical solution of the video creation method belong to the same concept, and details that are not described in detail in the technical solution of the video creation apparatus can be referred to the description of the technical solution of the video creation method.
Fig. 10 shows a block diagram of a computing device 1000 according to an embodiment of the present application. The components of the computing device 1000 include, but are not limited to, memory 1010 and a processor 1020. The processor 1020 is coupled to the memory 1010 via a bus 1030 and the database 1050 is used to store data.
Computing device 1000 also includes access device 1040, which enables computing device 1000 to communicate via one or more networks 1060. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet. Access device 1040 may include one or more of any type of network interface, wired or wireless, e.g., a Network Interface Card (NIC), such as an IEEE 802.11 Wireless Local Area Network (WLAN) wireless interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present application, the above-described components of computing device 1000 and other components not shown in FIG. 10 may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 10 is for purposes of example only and is not limiting as to the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 1000 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), a mobile phone (e.g., smartphone), a wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 1000 may also be a mobile or stationary server.
Wherein, the processor 1020 is configured to execute the following computer-executable instructions:
creating a corresponding image time axis for the image to be manufactured in a manufacturing interface provided by the browser according to the uploaded image to be manufactured and a preset display rule;
under the condition that the movement operation aiming at the range selection control on the image time axis is received, determining the selected initial image range according to the movement information of the range selection control;
receiving processing operation aiming at the initial image range, and determining a target image range according to the processing operation;
and synthesizing a video according to the images included in the target image range.
The above is an illustrative scheme of a computing device of the present embodiment. It should be noted that the technical solution of the computing device and the technical solution of the video production method belong to the same concept, and details that are not described in detail in the technical solution of the computing device can be referred to the description of the technical solution of the video production method.
An embodiment of the present application further provides a computer-readable storage medium, which stores computer-executable instructions, which are executed by a processor, for implementing the operation steps of the video production method.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as the technical solution of the video production method, and details that are not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the video production method.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals, in accordance with legislation and patent practice.
It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in the explanation of the application. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical applications, to thereby enable others skilled in the art to best understand and utilize the application. The application is limited only by the claims and their full scope and equivalents.

Claims (13)

1. A video production method is applied to a browser and comprises the following steps:
creating a corresponding image time axis for the image to be manufactured in a manufacturing interface provided by the browser according to the uploaded image to be manufactured and a preset display rule;
under the condition that the movement operation aiming at the range selection control on the image time axis is received, determining the selected initial image range according to the movement information of the range selection control;
receiving processing operation aiming at the initial image range, and determining a target image range according to the processing operation;
and synthesizing a video according to the images included in the target image range.
2. The video production method according to claim 1, wherein before creating a corresponding image timeline for the image to be produced in a production interface provided by the browser according to the uploaded image to be produced and a preset display rule, the method further comprises:
and acquiring the uploaded image to be made under the condition of receiving the data selection instruction.
3. The video production method according to claim 2, wherein, in a case where the data selection instruction is received, acquiring the uploaded image to be produced includes at least one of:
under the condition of receiving a first data selection instruction, acquiring a target video corresponding to the first data selection instruction, and extracting the image to be produced from the target video;
under the condition that a second data selection instruction is received, acquiring a target image corresponding to the second data selection instruction, and determining the target image as the image to be manufactured;
and under the condition of receiving a third data selection instruction, acquiring a GIF image corresponding to the third data selection instruction, and determining the GIF image as the image to be made.
4. The video production method according to any one of claims 1 to 3, wherein creating a corresponding image timeline for the image to be produced in a production interface provided by the browser according to the uploaded image to be produced and a preset display rule includes:
determining the display sequence of the images to be manufactured according to the preset display rule;
determining the total number of the images to be made, and determining the display size of the images to be made according to the total number;
generating a thumbnail corresponding to the image to be manufactured based on the display size;
and displaying the thumbnails corresponding to the images to be made in a time axis form according to the display sequence, and creating the image time axis.
5. The video production method according to any one of claims 1 to 3, wherein, after creating a corresponding image timeline for the image to be produced in a production interface provided by the browser according to the uploaded image to be produced and a preset display rule, the method further comprises:
and under the condition that a zooming instruction aiming at the image time axis is received, zooming the thumbnail displayed on the image time axis according to a zooming parameter carried by the zooming instruction.
6. A method of producing a video according to any of claims 1 to 3 wherein the range selection control comprises a first selection control and a second selection control; determining the selected initial image range according to the movement information of the range selection control, wherein the method comprises the following steps:
determining a first stop position of the first selection control and a second stop position of the second selection control;
determining an image to be produced corresponding to the first stop position as a start image, and determining an image to be produced corresponding to the second stop position as an end image;
determining a range between the start image and the end image as the selected initial image range.
7. A method of producing a video according to any one of claims 1 to 3, wherein determining the selected initial image range based on the movement information of the range selection control comprises:
under the condition that the selection operation aiming at the target image to be manufactured is received, the range selection control is moved to the position corresponding to the target image to be manufactured;
under the condition of receiving a range selection operation, selecting an image in a preset range after the target image to be manufactured is selected by taking the target image to be manufactured as a starting image;
and determining the selected image as the selected initial image range.
8. A method of video production according to any of claims 1 to 3 wherein receiving a processing operation for the initial image range and determining a target image range in dependence on the processing operation comprises:
receiving a moving operation of a selection frame corresponding to the initial image range;
determining an update start image and an update end image according to the position of the moved selection frame;
and determining the target image range according to the update starting image and the update ending image.
9. A method of video production according to any of claims 1 to 3 wherein receiving a processing operation for the initial image range and determining a target image range in dependence on the processing operation comprises:
receiving a deletion operation for an image included in the initial image range;
deleting images included in the initial image range in the image timeline;
and determining the residual images in the image time axis as the target image range.
10. A method of video production according to any of claims 1 to 3, wherein, after determining the target image range according to the processing operation, further comprising:
acquiring preview parameters under the condition of receiving a preview instruction;
and previewing the video corresponding to the image included in the target image range on a production interface provided by the browser according to the preview parameter.
11. A video production apparatus, applied to a browser, comprising:
the creating module is configured to create a corresponding image time axis for the image to be made in a making interface provided by the browser according to the uploaded image to be made and a preset display rule;
a first determination module configured to determine a selected initial image range according to movement information of a range selection control on the image timeline in a case where a movement operation for the range selection control is received;
a second determination module configured to receive a processing operation for the initial image range and determine a target image range according to the processing operation;
a composition module configured to compose a video from images included in the target image range.
12. A computing device, comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions, and the processor is configured to execute the computer-executable instructions to implement the method of:
according to the uploaded image to be made and a preset display rule, establishing a corresponding image time axis for the image to be made in a making interface provided by a browser;
under the condition that the movement operation aiming at the range selection control on the image time axis is received, determining the selected initial image range according to the movement information of the range selection control;
receiving processing operation aiming at the initial image range, and determining a target image range according to the processing operation;
and synthesizing a video according to the images included in the target image range.
13. A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, perform the steps of the video production method of any one of claims 1 to 10.
CN202110350555.9A 2021-03-31 2021-03-31 Video production method and device Pending CN113099288A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110350555.9A CN113099288A (en) 2021-03-31 2021-03-31 Video production method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110350555.9A CN113099288A (en) 2021-03-31 2021-03-31 Video production method and device

Publications (1)

Publication Number Publication Date
CN113099288A true CN113099288A (en) 2021-07-09

Family

ID=76672214

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110350555.9A Pending CN113099288A (en) 2021-03-31 2021-03-31 Video production method and device

Country Status (1)

Country Link
CN (1) CN113099288A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070234214A1 (en) * 2006-03-17 2007-10-04 One True Media, Inc. Web based video editing
CN102638658A (en) * 2012-03-01 2012-08-15 盛乐信息技术(上海)有限公司 Method and system for editing audio-video
CN108965397A (en) * 2018-06-22 2018-12-07 中央电视台 Cloud video editing method and device, editing equipment and storage medium
CN110868631A (en) * 2018-08-28 2020-03-06 腾讯科技(深圳)有限公司 Video editing method, device, terminal and storage medium
CN110401878A (en) * 2019-07-08 2019-11-01 天脉聚源(杭州)传媒科技有限公司 A kind of video clipping method, system and storage medium
CN111163323A (en) * 2019-09-30 2020-05-15 广州市伟为科技有限公司 Online video creation system and method
CN111163358A (en) * 2020-01-07 2020-05-15 广州虎牙科技有限公司 GIF image generation method, device, server and storage medium
CN111612873A (en) * 2020-05-29 2020-09-01 维沃移动通信有限公司 GIF picture generation method and device and electronic equipment
CN112540713A (en) * 2020-11-13 2021-03-23 广州市百果园网络科技有限公司 Video preview progress bar scaling method, system, device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
此账号已注销 (account deactivated): "How to delete video segments in Jianying (剪映怎么删除视频片段)", HTTPS://JINGYAN.BAIDU.COM/ARTICLE/86F4A73E2723EE77D752696F.HTML *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113676751A (en) * 2021-08-19 2021-11-19 上海哔哩哔哩科技有限公司 Video thumbnail processing method and device
CN113676751B (en) * 2021-08-19 2024-03-01 上海哔哩哔哩科技有限公司 Video thumbnail processing method and device
CN114915850A (en) * 2022-04-22 2022-08-16 网易(杭州)网络有限公司 Video playing control method and device, electronic equipment and storage medium
CN114915850B (en) * 2022-04-22 2023-09-12 网易(杭州)网络有限公司 Video playing control method and device, electronic equipment and storage medium
CN115086763A (en) * 2022-06-27 2022-09-20 平安银行股份有限公司 Video playing method, device, system and medium based on canvas

Similar Documents

Publication Publication Date Title
CN113099287A (en) Video production method and device
US11082377B2 (en) Scripted digital media message generation
US12094047B2 (en) Animated emoticon generation method, computer-readable storage medium, and computer device
CN111935504B (en) Video production method, device, equipment and storage medium
CN113099288A (en) Video production method and device
CN104540028B (en) A kind of video beautification interactive experience system based on mobile platform
US10728197B2 (en) Unscripted digital media message generation
KR20230042523A (en) Multimedia data processing method, generation method and related device
CN108924622B (en) Video processing method and device, storage medium and electronic device
CN111935505B (en) Video cover generation method, device, equipment and storage medium
CN112291627A (en) Video editing method and device, mobile terminal and storage medium
CN112801004B (en) Video clip screening method, device, equipment and storage medium
CN110647624A (en) Automatic generation of an animation preview that presents document differences in enterprise messaging
US20200236297A1 (en) Systems and methods for providing personalized videos
CN112887794B (en) Video editing method and device
CN110413185A (en) For specifying link destination and for the interface device and recording medium of viewer
CN106528695A (en) Method for showing video thumbnail through mouse dragging
CN113705156A (en) Character processing method and device
CN113852757B (en) Video processing method, device, equipment and storage medium
CN114693827A (en) Expression generation method and device, computer equipment and storage medium
CN114466222A (en) Video synthesis method and device, electronic equipment and storage medium
CN115988259A (en) Video processing method, device, terminal, medium and program product
CN106021322A (en) Multifunctional image input method
CN113873319A (en) Video processing method and device, electronic equipment and storage medium
CN114025103A (en) Video production method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination