CN109151520B - Method, device, electronic equipment and medium for generating video - Google Patents

Method, device, electronic equipment and medium for generating video

Info

Publication number
CN109151520B
CN109151520B (application number CN201811126108.XA)
Authority
CN
China
Prior art keywords
data
video
generating
processed
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811126108.XA
Other languages
Chinese (zh)
Other versions
CN109151520A (en)
Inventor
周凯 (Zhou Kai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dami Future Technology Co ltd
Original Assignee
Beijing Dami Future Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dami Future Technology Co ltd filed Critical Beijing Dami Future Technology Co ltd
Priority to CN201811126108.XA priority Critical patent/CN109151520B/en
Publication of CN109151520A publication Critical patent/CN109151520A/en
Application granted granted Critical
Publication of CN109151520B publication Critical patent/CN109151520B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/835Generation of protective data, e.g. certificates
    • H04N21/8355Generation of protective data, e.g. certificates involving usage data, e.g. number of copies or viewings allowed
    • H04N21/83555Generation of protective data, e.g. certificates involving usage data, e.g. number of copies or viewings allowed using a structured language for describing usage rules of the content, e.g. REL
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • H04N21/25875Management of end-user data involving end-user authentication
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Databases & Information Systems (AREA)
  • Computer Graphics (AREA)
  • Information Transfer Between Computers (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application discloses a method, an apparatus, an electronic device, and a medium for generating a video, where the method is applied to a headless browser. In the method, generated data are received in the headless browser and the display data in them are extracted; a to-be-processed video is generated from the visual data and audio data in the display data, and a display video is generated based on the to-be-processed video and a preset video frame. With this technical scheme, the display video can be generated in the headless browser from the generated data uploaded by the user terminal, overcoming the drawback that a user who uploads data during a lesson must first encode the video data and then upload the encoded video, consuming the user's computing resources and network bandwidth.

Description

Method, device, electronic equipment and medium for generating video
Technical Field
The present application relates to image processing technologies, and in particular, to a method, an apparatus, an electronic device, and a medium for generating a video.
Background
With the development of society, more and more people choose to keep learning in order to broaden their knowledge. Traditional face-to-face teaching, however, requires both students and teachers to spend considerable time and energy commuting. With the development of communication technology, online teaching has therefore been widely accepted.
Generally, online teaching is an interactive, Internet-based remote training classroom. It uses audio/video transmission and data-collaboration network technologies to simulate a real classroom environment and provide students with an effective training environment over the network. Online teaching not only avoids the extra time and energy consumed by offline teaching, but also allows the recorded lesson to be replayed at any time after class to consolidate what was learned.
In the usual way of recording a teaching video, the teacher's or a student's terminal records the lesson synchronously while it is in progress, encodes the recording, and uploads it to a server. However, this approach adds extra computing and bandwidth load to the user's computer, which often causes network congestion and video interruptions during the lesson. How to generate a teaching video while preserving the fluency of online teaching has therefore become a problem to be solved by those skilled in the art.
Disclosure of Invention
One technical problem to be solved by the embodiments of the present application is how to generate a teaching video while ensuring the fluency of online teaching.
According to an aspect of the embodiments of the present application, there is provided a method for generating a video, where the method is applied to a headless browser and includes:
receiving the generated data;
extracting display data from the generated data, and generating a to-be-processed video according to the display data, wherein the display data includes visual data and audio data, and the visual data includes HTML data, picture data, Cascading Style Sheets (CSS) data, and JavaScript data;
and generating a display video based on the video to be processed and a preset video frame.
Optionally, in another embodiment based on the foregoing method of the present application, after the receiving the generated data, the method further includes:
extracting the visual data in the generated data, wherein the visual data is displayable visual data;
based on the visualization data, image data is generated.
Optionally, in another embodiment based on the foregoing method of the present application, after the receiving the generated data, the method further includes:
extracting all image data in the generated data;
sequencing the image data in sequence according to a preset rule;
acquiring images with preset frame numbers in the image data every preset period;
and generating an image video based on the acquired image.
Optionally, in another embodiment based on the above method of the present application, after the generating an image video based on the acquired image, the method includes:
extracting audio data in the generated data;
and synthesizing the audio data into the image video to generate the video to be processed.
Optionally, in another embodiment based on the foregoing method of the present application, the sequentially sorting the image data according to a predetermined rule includes:
sequencing the image data in sequence according to the source of the image data;
and/or,
and sequencing the image data in sequence according to the time sequence.
Optionally, in another embodiment based on the foregoing method of the present application, the generating a display video based on the to-be-processed video and a preset video frame includes:
selecting a corresponding video frame based on the video to be processed;
compressing the video to be processed;
and synthesizing the compressed video to be processed into the video frame to generate a display video.
Optionally, in another embodiment based on the foregoing method of the present application, before the receiving the generated data, the method further includes:
receiving a generation request sent by a target user terminal, wherein the generation request includes authentication information of the target user terminal and is used to request generation of the display video;
authenticating the target user terminal according to the authentication information;
and receiving the generated data sent by the target user terminal after the authentication of the target user terminal passes.
Optionally, in another embodiment based on the foregoing method of the present application, after the receiving the generated data, the method further includes:
extracting document data in the generated data;
and synthesizing the document data into the image video to generate the video to be processed.
According to another aspect of the embodiments of the present application, there is provided an apparatus for generating a video, the apparatus being applied to a Headless browser, the apparatus including:
a receiving module for receiving the generated data;
the first generation module is used for extracting display data in the generated data and generating a video to be processed according to the display data, wherein the display data comprises visual data and audio data;
and the second generation module is used for generating a display video based on the video to be processed and a preset video frame.
According to another aspect of the embodiments of the present application, there is provided an electronic device including:
a memory for storing executable instructions; and
a processor in communication with the memory for executing the executable instructions to perform the operations of any of the methods for generating video described above.
According to a further aspect of the embodiments of the present application, there is provided a computer-readable storage medium for storing computer-readable instructions, which when executed, perform the operations of any one of the above methods for generating a video.
In the present application, generated data are received in the headless browser and the display data in them are extracted; a to-be-processed video is generated from the visual data and audio data in the display data, and a display video is generated based on the to-be-processed video and a preset video frame. With this technical scheme, the display video can be generated in the headless browser from the generated data uploaded by the user terminal, overcoming the drawback that a user who uploads data during a lesson must first encode the video data and then upload the encoded video, consuming the user's computing resources and network bandwidth.
The technical solution of the present application is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description, serve to explain the principles of the application.
The present application may be more clearly understood from the following detailed description with reference to the accompanying drawings, in which:
fig. 1 is a flowchart of an embodiment of a method for generating video according to the present application.
Fig. 2 is a flowchart of another embodiment of the method for generating video according to the present application.
Fig. 3 is a schematic structural diagram of an embodiment of generating a video according to the present application.
Fig. 4 is a schematic structural diagram of an apparatus for generating video according to the present application.
Fig. 5 is a schematic structural diagram of an electronic device for generating a video according to the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
It should be noted that all the directional indications (such as up, down, left, right, front, and rear … …) in the embodiment of the present application are only used to explain the relative position relationship between the components, the movement situation, and the like in a specific posture (as shown in the drawing), and if the specific posture is changed, the directional indication is changed accordingly.
In addition, descriptions in this application such as "first" and "second" are for descriptive purposes only and are not to be construed as indicating or implying relative importance, or as implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two or three, unless specifically limited otherwise.
In this application, unless expressly stated or limited otherwise, terms such as "connected" and "secured" are to be construed broadly: for example, "secured" may be a fixed connection, a removable connection, or an integral connection; it may be a mechanical or electrical connection; and it may be a direct connection, an indirect connection through an intervening medium, or a connection internal to two elements. The specific meanings of the above terms in the present application can be understood by those of ordinary skill in the art as the context requires.
In addition, technical solutions between the various embodiments of the present application may be combined with each other, but it must be based on the realization of the technical solutions by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination of technical solutions should be considered to be absent and not within the protection scope of the present application.
A method for performing video generation according to an exemplary embodiment of the present application is described below with reference to fig. 1 to 3. It should be noted that the following application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present application, and the embodiments of the present application are not limited in this respect. Rather, embodiments of the present application may be applied to any scenario where applicable.
Fig. 1 schematically shows a flow chart of a method for generating a video according to an embodiment of the present application. As shown in fig. 1, includes:
a method for generating a video, which is applied to a Headless browser Headless browser, comprises the following steps:
s101, receiving the generated data.
First, a headless browser is a browser without a graphical user interface (GUI). Automated tests preset by a developer can be run through a headless browser.
The browser engine of the headless browser is not specifically limited here; for example, it may be Gecko (the engine of the Firefox browser), Trident (the engine of the Internet Explorer browser), or Blink (the engine of the Google Chrome browser). In a preferred embodiment, the headless browser engine used with the method for generating video of the present application may be WebKit (the engine of Apple's Safari browser).
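As an illustration only (not part of the patent text), the following sketch shows how a server-side headless browser instance might be launched. The binary name is an assumed placeholder; `--headless`, `--disable-gpu`, and `--dump-dom` are real Chromium command-line switches.

```python
# Sketch: building a command line that renders a page without a GUI.
def build_headless_command(url, binary="chromium-browser"):
    """Build the argument list for a headless page-rendering run."""
    return [
        binary,
        "--headless",     # run with no graphical user interface
        "--disable-gpu",  # commonly paired with headless mode
        "--dump-dom",     # write the rendered DOM to stdout
        url,
    ]

cmd = build_headless_command("https://example.com/lesson")
```

The resulting list could be handed to a process launcher on the server; no user-side encoding is involved, which is the point of moving generation into the headless browser.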
Optionally, the generated data in the present application may include various kinds of data: for example, image data, audio data, and so on. Besides the display data, the generated data may also carry document data, a user-terminal identifier, upload time, and other data. Variations in the specific data carried in the generated data do not affect the protection scope of the present application.
In one embodiment of the present application, the received generated data may be sent by a server or by a target user terminal. The target user terminal can be the terminal of any party in the online lesson: for example, the terminal of the teacher giving the lecture over the Internet, or the terminal of any one of the students attending it. Further, the target user terminal may also be a terminal that receives and aggregates the generated data sent by a plurality of other user terminals.
S102, extracting display data from the generated data, and generating a to-be-processed video according to the display data, wherein the display data includes visual data and audio data, and the visual data includes HyperText Markup Language (HTML) data, picture data, Cascading Style Sheets (CSS) data, and JavaScript data.
Optionally, in the present application, after the generated data are received, the display data in them are extracted, and a to-be-processed video, from which the display video will be generated, is produced based on the extracted display data.
It should be noted that the display data in the present application include visual data and audio data, and the visual data may be displayable or non-displayable. Further, the visual data of the present application may include, but are not limited to, any one or more of the following:
HTML data, picture data, Cascading Style Sheets (CSS) data, and JavaScript data.
In addition, the audio data of the present application are the background audio of the lesson, such as background music, the lecturer's voice, and voice prompts.
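Purely as an illustration of the extraction in S102, the sketch below models each item of generated data as a dict with a "type" field; this representation is a hypothetical assumption, since the patent does not fix a data format. The visual types follow the list above (HTML, picture, CSS, JavaScript).

```python
# Sketch: splitting received generated data into visual and audio
# display data. The dict layout is illustrative, not from the patent.
VISUAL_TYPES = {"html", "picture", "css", "javascript"}

def extract_display_data(generated_data):
    """Split generated data into visual data and audio data."""
    visual = [d for d in generated_data if d["type"] in VISUAL_TYPES]
    audio = [d for d in generated_data if d["type"] == "audio"]
    return visual, audio

items = [
    {"type": "html", "payload": "<p>Lesson 1</p>"},
    {"type": "css", "payload": "p { color: red }"},
    {"type": "audio", "payload": "lecture-track"},
    {"type": "document", "payload": "notes.pdf"},  # carried, but not display data
]
visual, audio = extract_display_data(items)
```

Items such as document data pass through untouched here; they are handled separately later (see the document-synthesis step).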
And S103, generating a display video according to the video to be processed and a preset video frame.
Further optionally, the present application generates a presentation video that can be presented to the user for viewing according to the to-be-processed video generated in S102 and the preset video frame.
It should be noted that there may be a plurality of preset video frames in the present application, and a corresponding video frame is selected according to the to-be-processed video that has been generated.
In the present application, generated data are received in the headless browser and the display data in them are extracted; a to-be-processed video is generated from the visual data and audio data in the display data, and a display video is generated based on the to-be-processed video and a preset video frame. With this technical scheme, the display video can be generated in the headless browser from the generated data uploaded by the user terminal, overcoming the drawback that a user who uploads data during a lesson must first encode the video data and then upload the encoded video, consuming the user's computing resources and network bandwidth.
Optionally, in an embodiment of the present application, the manner of generating the display video based on the to-be-processed video and the preset video frame may include the following manner:
and selecting a corresponding video frame based on the video to be processed.
In the present application, the video frame can be selected according to differences among the videos to be processed. In a preferred embodiment, it is selected according to the specific content of the video to be processed.
For example, when the video to be processed is a language-class teaching video, video frames with different country/region themes can be selected according to the language taught. When the language of the teaching video is English, a video frame themed on the United States, the United Kingdom, New York, or London may be selected; when the language is Japanese, a video frame themed on Japan or Tokyo may be selected.
As another example, when the video to be processed is a science-education teaching video, video frames of different subject types can be selected according to the subject taught. If the teaching video concerns geography education, a geography-themed video frame can be selected; if it concerns political education, a politics-themed video frame can be selected.
In another preferred embodiment of the present application, the video frame may be selected according to the teaching audience of the video to be processed. When the audience is an adult, the video frame may be selected according to the adult's age, sex, nationality, and other factors; likewise, when the audience is a child, it may be selected according to the child's age, sex, nationality, and other factors.
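One possible realization of this selection rule is sketched below. The theme names and the mapping tables are illustrative assumptions; the patent only requires that a frame be selected according to the content and audience of the to-be-processed video.

```python
# Sketch: choosing a preset video frame (border/theme) from properties
# of the video to be processed. All names here are hypothetical.
LANGUAGE_THEMES = {"english": "us-uk-landmarks", "japanese": "japan-landmarks"}
SUBJECT_THEMES = {"geography": "geography", "politics": "politics"}

def select_video_frame(language=None, subject=None, audience="adult"):
    """Pick a frame theme by language first, then subject, else a default."""
    if language in LANGUAGE_THEMES:
        theme = LANGUAGE_THEMES[language]
    elif subject in SUBJECT_THEMES:
        theme = SUBJECT_THEMES[subject]
    else:
        theme = "default"
    return f"{theme}-{audience}"
```

A lesson in English for children would thus resolve to a child-oriented variant of the English-speaking-countries theme.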
And compressing the video to be processed.
Optionally, in the present application, after selecting a corresponding video frame for the to-be-processed video, the to-be-processed video is compressed.
And synthesizing the compressed video to be processed into a video frame to generate a display video.
Further optionally, after the video to be processed has been compressed, it is synthesized into the corresponding video frame to generate the final display video. Each member of the teaching audience can then review the display video by downloading it or playing it online.
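The compress-and-composite step could be realized, for example, with ffmpeg; this is one possible tool choice, not prescribed by the patent, and the file names below are placeholders. The `overlay` filter and `-crf` option are real ffmpeg features.

```python
# Sketch: compressing the to-be-processed video and compositing it onto
# the selected frame image. Input 0 is the video, input 1 the frame;
# "[1:v][0:v]overlay" draws the video over the frame background.
def build_composite_command(video_in, frame_image, video_out, crf=28):
    return [
        "ffmpeg",
        "-i", video_in,     # the to-be-processed video
        "-i", frame_image,  # the preset video frame (border image)
        "-filter_complex", "[1:v][0:v]overlay",
        "-crf", str(crf),   # constant-rate-factor compression level
        video_out,
    ]

cmd = build_composite_command("lesson.mp4", "frame.png", "display.mp4")
```

Running such a command on the server keeps the encoding cost away from the user's terminal, in line with the stated aim of the method.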
Further, the present application also includes a specific embodiment following S101 (receiving the generated data). In particular, the present application also includes a method of generating a video, as shown in fig. 2:
s201, receiving the generated data.
S202, visual data in the generated data are extracted, the visual data are displayable visual data, and image data are generated based on the visual data.
Optionally, after the generated data are received, all the displayable visual data in them may be extracted and converted into image data.
And S203, sequencing the image data according to a preset rule.
Optionally, in the present application, the image data may be ordered according to a predetermined rule in any one or more of the following ways, including but not limited to:
the first mode is as follows:
the image data is sequentially ordered according to the source of the image data.
Optionally, in the present application, the image data may be ordered according to their source. For example, image data sent from the teacher-side terminal of the current lesson may be marked as high importance, and image data sent from a student-side terminal as low importance; the image data are then ordered by the importance of their source.
The second mode is as follows:
and sequencing the image data in sequence according to the time sequence.
Optionally, in the present application, the image data may also be sequentially sorted according to the time sequence of the generation of the image data.
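The two ordering rules can also be combined, source first and time second. The sketch below is an illustration only; the dict layout and the teacher/student priority marks are assumptions standing in for the high/low importance marks described above.

```python
# Sketch of step S203: order images by source importance (teacher
# before student), then by generation time within each source.
SOURCE_PRIORITY = {"teacher": 0, "student": 1}  # lower sorts first

def order_images(images):
    return sorted(
        images,
        key=lambda img: (SOURCE_PRIORITY.get(img["source"], 2), img["timestamp"]),
    )

images = [
    {"source": "student", "timestamp": 1.0},
    {"source": "teacher", "timestamp": 2.0},
    {"source": "teacher", "timestamp": 1.0},
]
ordered = order_images(images)
```

Sorting on a composite key like this applies both predetermined rules in a single pass.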
And S204, acquiring images with preset frame numbers in the image data every preset period.
Further optionally, the present application may acquire a preset number of frames of images from all image data in a time period of every predetermined period.
It should be noted that the predetermined period is not specifically limited in the present application. That is, the predetermined period may be every second, and the predetermined period may also be every millisecond. The specific variation of the predetermined period does not affect the protection scope of the present application.
It should also be noted that the preset frame number is not specifically limited in this application. That is, the preset frame number may be 30 frames, and the preset frame number may also be 50 frames. The specific change of the preset frame number does not affect the protection scope of the present application.
In a preferred embodiment of the present application, 30 frames of images in the image data may be acquired every 1 ms.
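The periodic acquisition of S204 might look like the sketch below. The period and frame count are left as parameters, mirroring the statement above that neither is specifically limited; the integer timestamps are illustrative.

```python
# Sketch of step S204: take a preset number of frames from the ordered
# image sequence in every predetermined period.
def sample_frames(images, period, frames_per_period, total_time):
    """images: (timestamp, image) pairs sorted by timestamp."""
    sampled = []
    t = 0
    while t < total_time:
        window = [img for ts, img in images if t <= ts < t + period]
        sampled.extend(window[:frames_per_period])  # keep first N of window
        t += period
    return sampled

images = [(i, f"img{i}") for i in range(10)]
frames = sample_frames(images, period=5, frames_per_period=3, total_time=10)
```

With these illustrative values, three images are retained from each of the two five-unit periods, and the retained images feed the image-video generation of S205.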
And S205, generating an image video based on the acquired image.
Optionally, after the acquisition of all the images is completed, a corresponding image video is generated according to all the acquired images. The manner of generating the video from all the images is the same as that in the prior art, and is not described herein again.
S206, extracting the audio data in the generated data.
Further optionally, after the image video is generated, all audio data in the generated data may be extracted.
And S207, synthesizing the audio data into the image video to generate a video to be processed.
Optionally, after the audio data is extracted, the audio data may be synthesized into the generated image video, so as to generate a video to be processed with both audio and video. Similarly, the manner of synthesizing the audio data into the image video is the same as that of the prior art, and is not described herein again.
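As with the compositing step, the audio-synthesis of S207 could be delegated to ffmpeg; this is a tool-choice assumption, and the file names are placeholders. `-c:v copy` and `-shortest` are real ffmpeg options.

```python
# Sketch of step S207: mux the extracted audio into the image video to
# obtain the to-be-processed video, without re-encoding the video stream.
def build_mux_command(image_video, audio_file, output):
    return [
        "ffmpeg",
        "-i", image_video,  # video generated from the ordered images
        "-i", audio_file,   # audio extracted from the generated data
        "-c:v", "copy",     # copy the video stream as-is
        "-shortest",        # stop at the end of the shorter input
        output,
    ]

mux_cmd = build_mux_command("images.mp4", "lesson.aac", "to_be_processed.mp4")
```

Copying the video stream keeps this step cheap, since the image video was already encoded in S205.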
Further optionally, the document data in the generated data may be extracted after the image video is generated.
And synthesizing the document data into the image video to generate a video to be processed.
Similarly, after the audio data is synthesized into the image video, the document data in the generated data can be extracted. And synthesizing the document data into the generated image video, thereby generating a video to be processed including the document data. Similarly, the manner of synthesizing the document data into the image video is the same as that of the prior art, and will not be described herein again.
And S208, generating a display video based on the video to be processed and a preset video frame.
Further optionally, in yet another embodiment of the present application, before S101 (receiving the generation data), a method for generating a video is further included, as shown in fig. 3,
s301, receiving a generation request sent by a target user terminal, wherein the generation request comprises authentication information of the target user terminal and is used for generating a display video.
Optionally, the generation request sent by the target user terminal is used to request that, after the generated data are received, a display video be generated from them. Further, to protect the server from attacks by malicious users and from use by unauthorized users once it receives the generated data, the present application also requires identity authentication of the target user terminal: the terminal is verified according to the authentication information it sends.
S302, the target user terminal is authenticated according to the authentication information.
S303, after the authentication of the target user terminal is passed, the generated data is received.
Further, after the authentication information sent by the target user terminal is received, it may be checked against verification information obtained in advance. It should be noted that the authentication information is not specifically limited in the present application; it may be a pre-generated key or the device number of the terminal device. Changes in the specific content of the authentication information do not affect the protection scope of the present application.
Further, taking the authentication information to be the device number of the target terminal as an example: after the device number sent by the terminal is received, a pre-stored authorized-device list is traversed. When the device number is not in the list, the identity authentication of the target terminal is judged to have failed, and a notification of the failure is sent to the target terminal. When the device number is in the list, the authentication of the target user terminal is judged to have passed, and the target user terminal is determined to be authorized to transfer the generated data.
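A minimal sketch of this device-number check follows; the list contents are placeholders, and in practice the authorized-device list would come from server-side storage.

```python
# Sketch of steps S301–S303: authenticate the target user terminal by
# looking up its device number in a pre-stored authorized-device list.
AUTHORIZED_DEVICES = {"DEV-001", "DEV-002"}  # placeholder entries

def authenticate(device_number):
    """Return True if the terminal may upload generated data."""
    if device_number in AUTHORIZED_DEVICES:
        return True   # authentication passed; accept generated data
    return False      # caller should send an authentication-failure notice
```

A set lookup replaces the list traversal described above with the same effect; either realization satisfies the step.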
S304, extracting the display data from the generated data, and generating the to-be-processed video according to the display data.
S305, generating a display video based on the video to be processed and a preset video frame.
In another embodiment of the present application, as shown in fig. 4, the present application further provides an apparatus for generating a video, which includes a receiving module 401, a first generating module 402, and a second generating module 403, wherein:
a receiving module 401, configured to receive generated data;
a first generating module 402, configured to extract display data in the generated data, and generate a to-be-processed video according to the display data, where the display data includes visual data and audio data;
a second generating module 403, configured to generate a display video based on the to-be-processed video and a preset video frame.
In this application, generated data is received in a headless browser and the display data in it is extracted; a to-be-processed video is generated from the visual data and audio data in the display data, and the display video is generated based on the to-be-processed video and a preset video frame. By applying this technical scheme, the display video can be generated in the headless browser from the generated data uploaded by the user terminal, avoiding the drawback that a user who uploads data information during teaching must first encode the video data and then upload it, which consumes computing resources and network bandwidth.
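The extraction step summarized above can be sketched as follows. The dictionary layout (the "display", "visual", and "audio" keys) is an assumed wire format chosen for illustration; the patent does not fix a concrete data format.

```python
def extract_display_data(generated_data: dict) -> tuple:
    """Split the display data of an upload into visual and audio parts.

    The key names below are illustrative assumptions, not defined by the
    patent text.
    """
    display = generated_data.get("display", {})
    visual = display.get("visual", [])  # e.g. HTML, image, CSS, JavaScript data
    audio = display.get("audio", [])    # e.g. recorded speech tracks
    return visual, audio
```

The visual part then feeds the image-video generation described below, while the audio part is synthesized into that video to produce the to-be-processed video.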
In another embodiment of the present application, the apparatus further comprises an extraction module 404, a sorting module 405, an acquisition module 406, and a third generation module 407, wherein:
the extracting module 404 is configured to extract visual data in the generated data, where the visual data is displayable visual data.
A sorting module 405, configured to sequentially sort the image data according to a predetermined rule.
And the acquisition module 406 is configured to acquire images with preset frame numbers in the image data every predetermined period.
A third generating module 407, configured to generate an image video based on the acquired image.
Wherein the sorting module is further configured to:
sequence the image data according to the source of the image data;
and/or
sequence the image data according to time order.
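The ordering rules above (by source and/or by time) and the periodic sampling performed by the acquisition module can be sketched together as follows. The `Image` record and all parameter values are illustrative assumptions; the patent does not prescribe a data structure.

```python
from dataclasses import dataclass


@dataclass
class Image:
    source: str       # which data channel or page region produced the image
    timestamp: float  # capture time in seconds


def order_images(images):
    # Sort by source first, then chronologically within each source,
    # mirroring the "by source and/or by time" rules above.
    return sorted(images, key=lambda im: (im.source, im.timestamp))


def sample_frames(images, period, frames_per_period):
    # Take a preset number of frames from each predetermined period.
    buckets = {}
    for im in order_images(images):
        buckets.setdefault(int(im.timestamp // period), []).append(im)
    picked = []
    for key in sorted(buckets):
        picked.extend(buckets[key][:frames_per_period])
    return picked
```

The sampled frames are then encoded in order to form the image video.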
In another embodiment of the present application, the extracting module 404 is further configured to extract audio data from the generated data.
The third generating module 407 is further configured to synthesize the audio data into the image video, and generate the to-be-processed video.
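One concrete way to realize this synthesis step is an ffmpeg invocation; the sketch below only builds the command line (it does not execute it), and the file names are placeholders. The patent itself does not name a tool, so this is an assumption for illustration.

```python
def mux_audio_command(image_video: str, audio: str, output: str) -> list:
    # Build (but do not execute) an ffmpeg command that keeps the image-video
    # stream unchanged and adds the extracted audio track; "-shortest" trims
    # the result to the shorter of the two inputs.
    return [
        "ffmpeg", "-y",
        "-i", image_video,   # video generated from the sorted images
        "-i", audio,         # audio data extracted from the generated data
        "-c:v", "copy",      # do not re-encode the video stream
        "-c:a", "aac",       # encode the audio track as AAC
        "-shortest",
        output,              # the to-be-processed video
    ]
```

The command could be run with `subprocess.run(mux_audio_command(...), check=True)` on a host where ffmpeg is installed.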
In another embodiment of the present application, the second generating module 403 further includes:
the second generating module 403 is further configured to select a corresponding video frame based on the to-be-processed video.
The second generating module 403 is further configured to perform compression processing on the video to be processed;
the second generating module 403 is further configured to synthesize the compressed video to be processed into the video frame, so as to generate a display video.
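The two operations of the second generating module (compress, then composite into the preset video frame) can likewise be sketched as a pair of ffmpeg command lines. The codec settings, overlay offsets, and intermediate file name are illustrative assumptions; the patent specifies neither a codec nor a compression ratio.

```python
def build_display_video_commands(pending: str, frame_image: str, output: str):
    """Return two ffmpeg command lines: compress, then composite into the frame."""
    compressed = "compressed.mp4"  # intermediate file, name chosen here
    compress = [
        "ffmpeg", "-y", "-i", pending,
        "-vf", "scale=1280:-2",   # downscale to reduce size
        "-crf", "28",             # higher CRF = stronger compression
        compressed,
    ]
    composite = [
        "ffmpeg", "-y",
        "-i", frame_image,        # the preset video frame (border/template image)
        "-i", compressed,
        "-filter_complex", "[0:v][1:v]overlay=60:60",  # place video inside frame
        output,                   # the final display video
    ]
    return compress, composite
```

Running the first command before the second reproduces the order stated above: compression of the to-be-processed video, then synthesis into the video frame to generate the display video.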
In another embodiment of the present application, the method further comprises: the first receiving module 408, the authentication module 409 and the second receiving module 410 comprise:
a first receiving module 408, configured to receive a generation request sent by the target user terminal, where the generation request includes authentication information of the target user terminal, and the generation request is used to generate the display video.
And the authentication module 409 is configured to authenticate the target user terminal according to the authentication information.
A second receiving module 410, configured to receive the generated data after the authentication of the target user terminal is passed.
In another embodiment of the present application, the extracting module 404 and the third generating module 407 are further configured as follows:
the extracting module 404 is further configured to extract document data in the generated data.
The third generating module 407 is further configured to synthesize the document data into the image video, and generate the video to be processed.
Having described the method and apparatus for generating a video according to the exemplary embodiments of the present application, an electronic device for implementing the steps described in the above method embodiments will next be described with reference to fig. 5. The computer system/server 50 shown in fig. 5 is only an example and should not impose any limitation on the functionality or scope of use of the embodiments of the present application.
As shown in fig. 5, computer system/server 50 is in the form of a general purpose computing device. The components of computer system/server 50 may include, but are not limited to: one or more processors or processing units 501, a system memory 502, and a bus 503 that couples the various system components (including the system memory 502 and the processing unit 501).
Computer system/server 50 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server 50 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 502 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 5021 and/or cache memory 5022. The computer system/server 50 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, a storage system 5023 may be provided for reading from and writing to a non-removable, non-volatile magnetic medium (not shown in FIG. 5, commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may also be provided. In these cases, each drive may be connected to the bus 503 by one or more data media interfaces. The system memory 502 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the application.
A program/utility 5025 having a set (at least one) of program modules 5024 may be stored in, for example, system memory 502, and such program modules 5024 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment. The program modules 5024 generally perform the functions and/or methodologies of the embodiments described herein.
The computer system/server 50 may also communicate with one or more external devices 504 (e.g., a keyboard, pointing device, or display). Such communication may occur through input/output (I/O) interfaces 505. The computer system/server 50 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via a network adapter 505. As shown in FIG. 5, the network adapter 505 communicates with other modules of the computer system/server 50, such as the processing unit 501, via the bus 503. It should be appreciated that, although not shown in FIG. 5, other hardware and/or software modules may be used in conjunction with the computer system/server 50.
The processing unit 501 executes various functional applications and data processing, for example, instructions for implementing the steps in the above-described method embodiments, by executing a computer program stored in the system memory 502; in particular, the processing unit 501 may execute a computer program stored in the system memory 502, and when the computer program is executed, the following instructions are executed:
receiving the generated data;
extracting display data in the generated data, and generating a video to be processed according to the display data, wherein the display data comprises visual data and audio data;
and generating a display video based on the video to be processed and a preset video frame.
Of course, the electronic device may execute other instructions, such as those described in the method embodiments above; these are not repeated here.
In this application, the transfer server receives the to-be-stored file carrying the first identifier sent by the target terminal, processes the to-be-stored file in a preset manner to generate the to-be-uploaded file, and then transmits the generated video to the corresponding cloud server according to the first identifier. By applying this technical scheme, uploaded files from multiple user terminals can be received and uploaded to their respective cloud servers, avoiding the time otherwise consumed when a user unfamiliar with the usage rules of a cloud server uploads data information.
Embodiments of the present application also provide a computer-readable storage medium for storing computer-readable instructions, which, when executed, perform the operations of the method for generating a video shown in fig. 1 to 3. The description of that method is not repeated here.
The above are only some optional embodiments of the present application and do not limit its scope; all modifications made by equivalent structural changes based on the content of the specification and drawings, or applied directly or indirectly in other related technical fields, fall within the scope of the present application.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The methods and apparatus of the present application may be implemented in a number of ways. For example, the methods and apparatus of the present application may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present application are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present application may also be embodied as a program recorded in a recording medium, the program including machine-readable instructions for implementing a method according to the present application. Thus, the present application also covers a recording medium storing a program for executing the method according to the present application.
The description of the present application has been presented for purposes of illustration and description, and is not intended to be exhaustive or to limit the application to the form disclosed. Many modifications and variations will be apparent to practitioners skilled in this art. The embodiments were chosen and described in order to best explain the principles of the application and its practical application, and to enable others of ordinary skill in the art to understand the application in its various embodiments, with various modifications suited to the particular use contemplated.

Claims (9)

1. A method for generating a video, wherein the method is applied to a headless browser (Headless Browser) and comprises the following steps:
receiving the generated data;
extracting display data from the generated data, and generating a to-be-processed video according to the display data, wherein the display data comprises visual data and audio data, and the visual data comprises at least one of the following: hypertext markup language (HTML) data, image data, cascading style sheet (CSS) data, and JavaScript data;
generating a display video based on the video to be processed and a preset video frame;
wherein, the generating of the display video based on the video to be processed and the preset video frame comprises:
selecting a corresponding video frame based on the video to be processed;
compressing the video to be processed;
synthesizing the compressed video to be processed into the video frame to generate a display video;
wherein, before the receiving the generated data, further comprising:
receiving a generation request sent by a target user terminal, wherein the generation request comprises authentication information of the target user terminal and is used for generating the display video;
authenticating the target user terminal according to the authentication information;
and receiving the generated data after the authentication of the target user terminal is passed.
2. The method of claim 1, after said receiving generation data, further comprising:
extracting the visual data from the generated data, wherein the visual data is displayable visual data;
generating image data based on the visual data.
3. The method of claim 2, further comprising, after the generating the image data based on the visual data:
sequencing the image data in sequence according to a preset rule;
acquiring images with preset frame numbers in the image data every preset period;
and generating an image video based on the acquired image.
4. The method of claim 3, further comprising, after said generating an image video based on said captured image:
extracting audio data in the generated data;
and synthesizing the audio data into the image video to generate the video to be processed.
5. The method of claim 3, wherein said sequentially ordering said image data according to a predetermined rule comprises:
sequencing the image data in sequence according to the source of the image data;
and/or
and sequencing the image data in sequence according to the time sequence.
6. The method of any of claims 3-5, further comprising, after the receiving the generated data:
extracting document data in the generated data;
and synthesizing the document data into the image video to generate the video to be processed.
7. An apparatus for generating a video, wherein the apparatus is applied to a headless browser (Headless Browser) and comprises:
a receiving module for receiving the generated data;
the first generation module is used for extracting display data in the generated data and generating a video to be processed according to the display data, wherein the display data comprises visual data and audio data;
the second generation module is used for generating a display video based on the video to be processed and a preset video frame;
wherein the second generating module comprises:
the selection unit is used for selecting a corresponding video frame based on the video to be processed;
the compression unit is used for compressing the video to be processed;
the generating unit is used for synthesizing the compressed video to be processed into the video frame to generate a display video;
wherein the apparatus further comprises:
the first processing module is used for receiving a generation request sent by a target user terminal, wherein the generation request comprises authentication information of the target user terminal, and the generation request is used for generating the display video;
the authentication module is used for authenticating the target user terminal according to the authentication information;
and the second processing module is used for receiving the generated data after the authentication of the target user terminal is passed.
8. An electronic device, comprising:
a memory for storing executable instructions; and the number of the first and second groups,
a processor in communication with the memory to execute the executable instructions to implement the method of generating video of any of claims 1-6.
9. A computer-readable storage medium storing computer-readable instructions that, when executed, implement the method of generating video of any of claims 1-6.
CN201811126108.XA 2018-09-26 2018-09-26 Method, device, electronic equipment and medium for generating video Active CN109151520B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811126108.XA CN109151520B (en) 2018-09-26 2018-09-26 Method, device, electronic equipment and medium for generating video


Publications (2)

Publication Number Publication Date
CN109151520A CN109151520A (en) 2019-01-04
CN109151520B true CN109151520B (en) 2021-09-07

Family

ID=64812785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811126108.XA Active CN109151520B (en) 2018-09-26 2018-09-26 Method, device, electronic equipment and medium for generating video

Country Status (1)

Country Link
CN (1) CN109151520B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109640023B (en) * 2019-01-31 2021-06-18 北京字节跳动网络技术有限公司 Video recording method, device, server and storage medium
CN113573102A (en) * 2021-08-18 2021-10-29 北京中网易企秀科技有限公司 Video generation method and device
CN113938619A (en) * 2021-10-28 2022-01-14 稿定(厦门)科技有限公司 Video synthesis method, system and storage device based on browser

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1921612A (en) * 2005-08-26 2007-02-28 萧学文 Method and system for automatic video production
CN102801942A (en) * 2012-07-23 2012-11-28 北京小米科技有限责任公司 Method and device for recording video and generating GIF (Graphic Interchange Format) dynamic graph
CN105791950A (en) * 2014-12-24 2016-07-20 珠海金山办公软件有限公司 Power Point video recording method and device
CN106331749A (en) * 2016-08-31 2017-01-11 北京云图微动科技有限公司 Video request method and system
CN107786582A (en) * 2016-08-24 2018-03-09 腾讯科技(深圳)有限公司 A kind of online teaching methods, apparatus and system
CN108495174A (en) * 2018-04-09 2018-09-04 深圳格莱珉文化传播有限公司 A kind of H5 pages effect generates the method and system of video file




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant