CN112188232A - Video generation method, video display method and device - Google Patents

Video generation method, video display method and device

Info

Publication number: CN112188232A
Application number: CN202011054276.XA
Authority: CN (China)
Prior art keywords: video, information, explanation, subject, problem solving
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventor: 王紫静
Current assignee: Beijing Youzhuju Network Technology Co Ltd
Original assignee: Beijing Youzhuju Network Technology Co Ltd
Application filed by Beijing Youzhuju Network Technology Co Ltd
Priority claimed to application CN202011054276.XA
Publication of CN112188232A

Classifications

    • H04N21/2335 Processing of audio elementary streams involving reformatting operations of audio signals, e.g. by converting from one coding standard to another
    • G06F16/73 Information retrieval of video data; Querying
    • G06F16/74 Information retrieval of video data; Browsing; Visualisation therefor
    • G06N3/045 Neural networks; Combinations of networks
    • G09B5/02 Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
    • G09B5/065 Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • G10L15/26 Speech to text systems
    • H04N21/23418 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • H04N21/2343 Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/2355 Processing of additional data involving reformatting operations of additional data, e.g. HTML pages
    • H04N21/2393 Interfacing the upstream path of the transmission network involving handling client requests
    • H04N21/4312 Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4355 Processing of additional data involving reformatting operations of additional data, e.g. HTML pages on a television screen
    • H04N21/437 Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
    • H04N21/4394 Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • H04N21/4398 Processing of audio elementary streams involving reformatting operations of audio signals
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4882 Data services for displaying messages, e.g. warnings, reminders

Abstract

The present disclosure provides a video generation method that includes: acquiring question information of a question to be solved; generating, based on the question information and a pre-trained problem-solving model, at least one problem-solving step for the question to be solved and parameter analysis information corresponding to each problem-solving step; and generating an explanation video corresponding to the question to be solved based on the parameter analysis information corresponding to each problem-solving step.

Description

Video generation method, video display method and device
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a video generation method, a video display method, and corresponding apparatus.
Background
In recent years, student education has received growing attention from many quarters. However, for reasons such as labor cost, students usually do not have a dedicated tutor on hand to explain problems whenever they practice, so they often need to rely on an application to work through exercises they cannot solve on their own.
In the related art, when an exercise is solved, the question information of the exercise is generally extracted first, the extracted question information is then matched against the question information stored in a database, and the text answer corresponding to the database entry that successfully matches the extracted question information is returned as the answer to the exercise.
Disclosure of Invention
Embodiments of the present disclosure provide at least a video generation method, a video display method, and corresponding apparatus.
In a first aspect, an embodiment of the present disclosure provides a video generation method, including:
acquiring question information of a question to be solved;
generating, based on the question information of the question to be solved and a pre-trained problem-solving model, at least one problem-solving step for the question to be solved and parameter analysis information corresponding to each problem-solving step; and
generating an explanation video corresponding to the question to be solved based on the parameter analysis information corresponding to each problem-solving step.
In a possible implementation, generating the explanation video corresponding to the question to be solved based on the parameter analysis information corresponding to each problem-solving step includes:
determining an animation display template matching the question to be solved;
adding the parameter analysis information to the animation display template according to the solving order of the problem-solving steps, and determining audio information corresponding to the parameter analysis information; and
fusing the audio information with the animation display template to which the parameter analysis information has been added, to obtain the explanation video corresponding to the question to be solved.
In a possible implementation, determining the animation display template matching the question to be solved includes:
determining display attribute information of each display module in the animation display template based on the at least one problem-solving step of the question to be solved, where the display attribute information includes a display area and/or a display quantity.
In a possible implementation, fusing the audio information with the animation display template to which the parameter analysis information has been added, to obtain the explanation video corresponding to the question to be solved, includes:
determining, based on the playing duration of each segment of audio information, the display duration of the display module corresponding to that segment in the animation display template; and
fusing the audio information with the animation display template based on the display duration of each display module, to obtain the explanation video corresponding to the question to be solved.
In a possible implementation, generating the explanation video corresponding to the question to be solved based on the parameter analysis information corresponding to each problem-solving step includes:
determining at least one interactive sub-video based on the parameter analysis information corresponding to each problem-solving step, the solving order of the problem-solving steps, and a preset interactive animation display template; and
generating each explanation sub-video corresponding to the question to be solved based on the parameter analysis information corresponding to each problem-solving step, the solving order of the problem-solving steps, and a preset explanation template, where the interactive sub-videos and the explanation sub-videos together form the explanation video.
In a possible implementation, the parameter analysis information includes two or more of the following:
a calculation formula, unit information, the calculation meaning of the calculation formula, the meaning of each calculation parameter in the calculation formula, and the calculation type corresponding to the calculation formula.
In a possible implementation, the method further includes:
generating analysis content corresponding to the question to be solved based on the parameter analysis information corresponding to each problem-solving step and a preset parsing template, where the parsing template contains formalized language matched with the logical order of the different problem-solving steps and with their parameter analysis information.
In a second aspect, an embodiment of the present disclosure further provides a video display method, including:
initiating an explanation request in response to a preset trigger operation on a question to be solved, where the explanation request carries the question information of the question to be solved;
receiving an explanation video corresponding to the question to be solved; and
displaying the explanation video on the user side, where the explanation video includes at least one problem-solving step of the question to be solved and parameter analysis information corresponding to each problem-solving step.
In a possible implementation, the explanation video includes interactive sub-videos and explanation sub-videos, and the method further includes:
displaying, in response to a trigger operation on any interactive sub-video in the explanation video, the explanation sub-video or interactive sub-video corresponding to the trigger operation.
In a third aspect, an embodiment of the present disclosure further provides a video generation apparatus, including:
an acquisition module, configured to acquire question information of a question to be solved;
a first generation module, configured to generate, based on the question information of the question to be solved and a pre-trained problem-solving model, at least one problem-solving step for the question to be solved and parameter analysis information corresponding to each problem-solving step; and
a second generation module, configured to generate an explanation video corresponding to the question to be solved based on the parameter analysis information corresponding to each problem-solving step.
In a possible implementation, when generating the explanation video corresponding to the question to be solved based on the parameter analysis information corresponding to each problem-solving step, the second generation module is configured to:
determine an animation display template matching the question to be solved;
add the parameter analysis information to the animation display template according to the solving order of the problem-solving steps, and determine audio information corresponding to the parameter analysis information; and
fuse the audio information with the animation display template to which the parameter analysis information has been added, to obtain the explanation video corresponding to the question to be solved.
In a possible implementation, when determining the animation display template matching the question to be solved, the second generation module is configured to:
determine display attribute information of each display module in the animation display template based on the at least one problem-solving step of the question to be solved, where the display attribute information includes a display area and/or a display quantity.
In a possible implementation, when fusing the audio information with the animation display template to which the parameter analysis information has been added, to obtain the explanation video corresponding to the question to be solved, the second generation module is configured to:
determine, based on the playing duration of each segment of audio information, the display duration of the display module corresponding to that segment in the animation display template; and
fuse the audio information with the animation display template based on the display duration of each display module, to obtain the explanation video corresponding to the question to be solved.
In a possible implementation, when generating the explanation video corresponding to the question to be solved based on the parameter analysis information corresponding to each problem-solving step, the second generation module is configured to:
determine at least one interactive sub-video based on the parameter analysis information corresponding to each problem-solving step, the solving order of the problem-solving steps, and a preset interactive animation display template; and
generate each explanation sub-video corresponding to the question to be solved based on the parameter analysis information corresponding to each problem-solving step, the solving order of the problem-solving steps, and a preset explanation template, where the interactive sub-videos and the explanation sub-videos together form the explanation video.
In a possible implementation, the parameter analysis information includes two or more of the following:
a calculation formula, unit information, the calculation meaning of the calculation formula, the meaning of each calculation parameter in the calculation formula, and the calculation type corresponding to the calculation formula.
In a possible implementation, the second generation module is further configured to:
generate analysis content corresponding to the question to be solved based on the parameter analysis information corresponding to each problem-solving step and a preset parsing template, where the parsing template contains formalized language matched with the logical order of the different problem-solving steps and with their parameter analysis information.
In a fourth aspect, an embodiment of the present disclosure further provides a video display apparatus, including:
a response module, configured to initiate an explanation request in response to a preset trigger operation on a question to be solved, where the explanation request carries the question information of the question to be solved;
a receiving module, configured to receive an explanation video corresponding to the question to be solved; and
a display module, configured to display the explanation video on the user side, where the explanation video includes at least one problem-solving step of the question to be solved and parameter analysis information corresponding to each problem-solving step.
In a possible implementation, the explanation video includes interactive sub-videos and explanation sub-videos, and the display module is further configured to:
display, in response to a trigger operation on any interactive sub-video in the explanation video, the explanation sub-video or interactive sub-video corresponding to the trigger operation.
In a fifth aspect, an embodiment of the present disclosure further provides a computer device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the first aspect or any possible implementation thereof, or of the second aspect or any possible implementation thereof.
In a sixth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the first aspect or any possible implementation thereof, or of the second aspect or any possible implementation thereof.
With the video generation and video display methods and apparatus provided by the embodiments of the present disclosure, at least one problem-solving step of the question to be solved and the parameter analysis information corresponding to each problem-solving step can first be determined using a pre-trained problem-solving model, and an explanation video corresponding to the question to be solved can then be generated automatically from that parameter analysis information.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required by the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It should be understood that the following drawings show only certain embodiments of the disclosure and are therefore not to be regarded as limiting its scope; those of ordinary skill in the art may derive other related drawings from them without inventive effort.
Fig. 1 shows a flow chart of a video generation method provided by an embodiment of the present disclosure;
fig. 2 shows a flowchart of an explanation video generation method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an animation display template provided by an embodiment of the disclosure;
FIG. 4 is a schematic diagram illustrating a presentation page of an interactive sub-video provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an illustrative video display provided by an embodiment of the disclosure;
fig. 6 is a schematic flow chart illustrating a video presentation method provided by an embodiment of the present disclosure;
fig. 7 shows an architecture diagram of a video generation apparatus provided by an embodiment of the present disclosure;
fig. 8 is a schematic diagram illustrating an architecture of a video display apparatus provided in an embodiment of the present disclosure;
fig. 9 shows a schematic structural diagram of a computer device 900 provided by an embodiment of the present disclosure;
fig. 10 shows a schematic structural diagram of a computer device 1000 provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the accompanying drawings; obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of configurations. Therefore, the following detailed description of the embodiments is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments of the disclosure. All other embodiments obtained by those skilled in the art on the basis of the embodiments of the present disclosure without creative effort fall within the protection scope of the disclosure.
In the related art, when an exercise is solved, the question information of the exercise is generally extracted first, the extracted question information is then matched against the question information stored in a database, and the text answer corresponding to the database entry that successfully matches the extracted question information is returned as the answer to the exercise.
Based on this research, the present disclosure provides video generation and video display methods and apparatus that can determine at least one problem-solving step of a question to be solved, together with the parameter analysis information corresponding to each problem-solving step, and then automatically generate an explanation video for the question from that parameter analysis information.
The drawbacks described above were identified by the inventor through practice and careful study; accordingly, both the discovery of these problems and the solutions the present disclosure proposes for them should be regarded as the inventor's contribution made in the course of the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiments, the video generation and video display methods disclosed in the embodiments of the present disclosure are first described in detail. The entity executing these methods is generally a computer device with a certain computing capability, for example a terminal device, a server, or another processing device, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, or a wearable device.
Referring to fig. 1, a flowchart of a video generation method provided in an embodiment of the present disclosure is shown, where the method includes steps 101 to 103, where:
Step 101, acquiring the question information of a question to be solved.
Step 102, generating, based on the question information of the question to be solved and a pre-trained problem-solving model, at least one problem-solving step for the question to be solved and parameter analysis information corresponding to each problem-solving step.
Step 103, generating an explanation video corresponding to the question to be solved based on the parameter analysis information corresponding to each problem-solving step.
The following is a detailed description of the above steps 101 to 103.
For step 101,
In a possible implementation, if the entity acquiring the question information of the question to be solved is a server, the server may receive a generation instruction sent by a client that carries the question information, and obtain the question information of the question to be solved from that instruction.
In practical applications, the question information of the question to be solved may be acquired in any of the following ways:
In the first way, the question information input by the user at a preset input position is acquired.
In a possible implementation, the client may provide a preset input area, and the user may enter the question information of the question to be solved in that area by typing or by copying and pasting.
If the entity acquiring the question information entered at the preset input position is the client, the client can obtain the question information directly once the user has entered it in the input area. If the entity acquiring it is the server, the client sends the question information to the server after the user has entered it in the input area, so that the server obtains the question information of the question to be solved.
In the second way, media content such as an image, video, or audio containing the question information of the question to be solved is acquired, and the question information is extracted from that media content.
If the entity acquiring an image containing the question information is the client, the image can be captured with the camera of the terminal device, and the question information contained in the image can be obtained through a recognition technology such as optical character recognition (OCR).
If the entity acquiring the image is the server, the client may capture the image with the camera of the terminal device, determine the question information contained in it through a recognition technology such as OCR, and finally send the question information to the server. Alternatively, the server may directly receive an image containing the question information sent by the client (captured by calling the camera of the electronic device on which the client is deployed) and then obtain the question information through a recognition technology such as OCR.
When the media content containing the question information is a video, frames can be extracted from the video to locate the question information, which is then obtained through a recognition technology such as OCR. When the media content is audio, the audio can be converted into text containing the question information through speech recognition, thereby obtaining the question information of the question to be solved.
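A minimal sketch of this acquisition step is given below, assuming hypothetical ocr_recognize() and speech_to_text() callables supplied by the deployment; neither helper nor the QuestionRequest structure is named in the disclosure.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class QuestionRequest:
    text: Optional[str] = None        # question typed or pasted at the preset input position
    image_path: Optional[str] = None  # photo of the question taken by the terminal camera
    audio_path: Optional[str] = None  # recording that reads the question aloud

def acquire_question_info(req: QuestionRequest,
                          ocr_recognize: Callable[[str], str],
                          speech_to_text: Callable[[str], str]) -> str:
    """Return the question information (as text) of the question to be solved."""
    if req.text:                      # way 1: directly entered text
        return req.text.strip()
    if req.image_path:                # way 2a: extract the text from an image via OCR
        return ocr_recognize(req.image_path).strip()
    if req.audio_path:                # way 2b: convert audio to text via speech recognition
        return speech_to_text(req.audio_path).strip()
    raise ValueError("no question content was supplied")
```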
With respect to step 102,
The parameter analysis information includes two or more of the following:
a calculation formula, unit information, the calculation meaning of the calculation formula, the meaning of each calculation parameter in the calculation formula, and the calculation type corresponding to the calculation formula.
The calculation type corresponding to a calculation formula may describe which preset calculation template the formula follows; for example, the calculation types may include "factor, product", "addend, sum", "find the least common multiple", "find the greatest common factor", "average, number, total", and so on.
As an example, suppose the following question information is input into the model:
A fruit store has 120 kg of pears. The mass of the apples is 3/4 of the mass of the pears, and the mass of the oranges is 2/3 of the combined mass of the pears and apples. How many kilograms of oranges are there?
After processing this question information, the neural network model can produce the output shown in Table 1:
TABLE 1
(Table 1 is reproduced only as an image in the original publication.)
In a possible implementation, to generate the at least one problem-solving step of the question to be solved and the parameter analysis information corresponding to each step, the question information of the question to be solved is input into the pre-trained problem-solving model, which outputs the at least one problem-solving step and the parameter analysis information corresponding to each step.
The problem-solving model may be a natural language processing (NLP) neural network model obtained by training on sample questions that carry analysis annotation information.
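Since Table 1 is reproduced only as an image, the listing below gives a purely illustrative reconstruction of the kind of structured output the solving model might return for the fruit-store question above; the field names and exact wording are assumptions, and only the arithmetic follows from the question itself.

```python
# Hypothetical structured output for the fruit-store question (illustrative
# reconstruction only; the field names are assumptions, not defined by the patent).
fruit_store_steps = [
    {"formula": "120 * 3/4 = 90", "unit": "kg",
     "formula_meaning": "mass of the apples",
     "parameter_meanings": {"120": "mass of the pears",
                            "3/4": "ratio of apple mass to pear mass"},
     "calculation_type": "factor, product"},
    {"formula": "120 + 90 = 210", "unit": "kg",
     "formula_meaning": "combined mass of the pears and apples",
     "parameter_meanings": {"120": "mass of the pears",
                            "90": "mass of the apples"},
     "calculation_type": "addend, sum"},
    {"formula": "210 * 2/3 = 140", "unit": "kg",
     "formula_meaning": "mass of the oranges (the answer to the question)",
     "parameter_meanings": {"210": "combined mass of the pears and apples",
                            "2/3": "ratio of orange mass to that combined mass"},
     "calculation_type": "factor, product"},
]
```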
For step 103,
In a possible implementation, the explanation video corresponding to the question to be solved may also be generated based on both the question information of the question to be solved and the parameter analysis information corresponding to each problem-solving step.
In a possible implementation manner, when generating an explanation video corresponding to a subject to be solved based on parameter analysis information corresponding to each problem solving step of the subject to be solved, reference may be made to the method shown in fig. 2, which includes the following steps:
step 201, determining an animation display template matched with the to-be-solved problem.
In a possible implementation, when determining the animation display template matching the question to be solved, the display attribute information of each display module in the animation display template may be determined based on the at least one problem-solving step of the question to be solved, where the display attribute information may include a display area and/or a display quantity.
For example, an animation display template may be as shown in Fig. 3. Its first position region may be used to show the question information of the question to be solved, and when the template matching the question is determined, the display area of this first position region may be determined. The second position region may be used to show the solving-logic derivation corresponding to the problem-solving steps, and the number of display modules in this second position region may be determined. The third position region is used to show the detailed solution of each problem-solving step, and its display area may likewise be determined.
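A minimal sketch of such a template as a data structure is given below; the class and field names and the pixel coordinates are illustrative assumptions, not structures defined by the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DisplayModule:
    region: Tuple[int, int, int, int]  # display area as (x, y, width, height) in pixels
    content: str = ""                  # parameter analysis text added later (step 202)

@dataclass
class AnimationTemplate:
    question_region: DisplayModule     # first position region: question information
    detail_region: DisplayModule       # third position region: detailed solution text
    derivation_modules: List[DisplayModule] = field(default_factory=list)  # second region

def build_template(num_steps: int) -> AnimationTemplate:
    """Choose the display attribute information from the number of solving steps:
    one derivation module per step in the second position region."""
    return AnimationTemplate(
        question_region=DisplayModule(region=(0, 0, 1280, 160)),
        detail_region=DisplayModule(region=(640, 160, 640, 560)),
        derivation_modules=[DisplayModule(region=(0, 160 + 90 * i, 640, 80))
                            for i in range(num_steps)],
    )
```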
Step 202, adding the parameter analysis information to the animation display template based on the solving sequence corresponding to the solving step of the to-be-solved question; and determining audio information corresponding to the parameter parsing information.
In a possible implementation, when determining the audio information corresponding to the parameter analysis information, audio corresponding to the question information may first be generated from the question information of the question to be solved, and audio corresponding to each problem-solving step may then be generated from formalized audio matched with the logical order of the different problem-solving steps and from the parameter analysis information of each step.
Here, the audio information corresponding to a problem-solving step may include first audio information for the solving-logic derivation and second audio information for the detailed solution of the step, i.e., the audio corresponding to the second position region and to the third position region in Fig. 3, respectively.
In one possible implementation, the formalized audio used to generate the first audio information may be the same for different questions to be solved, for example the audio corresponding to "to find XXX, we need XXX"; the second audio information includes the audio of the analysis content corresponding to the different problem-solving steps, and the formalized audio used to generate that analysis content may differ from step to step.
Specifically, the formalized audio of the analysis content for a given problem-solving step in the second audio information may depend on the calculation type in that step's parameter analysis information. For example, when the calculation type is "addend, sum", the corresponding formalized audio may be the audio for "add __________ and __________"; when the calculation type is "factor, product", it may be the audio for "multiply __________ by __________".
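A small sketch of this mapping from calculation type to narration text is shown below; the template wording, the CALC_TYPE_TEMPLATES name, and the text_to_speech() callable are illustrative assumptions rather than the phrasing or interface defined by the disclosure. The step dictionaries follow the structure sketched earlier for Table 1.

```python
# Illustrative calculation-type phrase templates; the wording is an assumption.
CALC_TYPE_TEMPLATES = {
    "factor, product": "Multiply {a} by {b} to obtain {meaning}.",
    "addend, sum":     "Add {a} and {b} to obtain {meaning}.",
}

def step_to_narration(step: dict) -> str:
    """Render the detailed (second) narration text for one solving step."""
    params = list(step["parameter_meanings"].values())
    a, b = params[0], params[1]                      # meanings of the two operands
    template = CALC_TYPE_TEMPLATES.get(
        step["calculation_type"],
        "Combine {a} and {b} to obtain {meaning}.")  # fallback wording
    return template.format(a=a, b=b, meaning=step["formula_meaning"])

def step_to_audio(step: dict, text_to_speech) -> bytes:
    """text_to_speech: any callable mapping narration text to synthesized audio."""
    return text_to_speech(step_to_narration(step))
```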
Step 203, fusing the audio information with the animation display template to which the parameter analysis information has been added, to obtain the explanation video corresponding to the question to be solved.
In a possible implementation, when the audio information is fused with the animation display template, the display duration of the display module corresponding to each segment of audio information may be determined from the playing duration of that segment, and the audio information is then fused with the animation display template according to the display duration of each display module, yielding the explanation video corresponding to the question to be solved.
Specifically, in the explanation video the display modules may be shown one after another: while a display module is shown, its corresponding audio segment is played, and the next display module is shown once that audio finishes playing.
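As a rough illustration of this timing, the sketch below derives each display module's on-screen interval from the play duration of its audio segment; the audio_duration() callable and the (start, end) timeline format are assumptions, not structures defined by the disclosure.

```python
from typing import Callable, List, Tuple

def build_timeline(audio_segments: List[bytes],
                   audio_duration: Callable[[bytes], float]) -> List[Tuple[float, float]]:
    """Return a (start, end) interval in seconds for each display module, so that
    module i is overlaid on the video exactly while its audio segment plays and
    the next module starts when that audio finishes."""
    timeline: List[Tuple[float, float]] = []
    cursor = 0.0
    for segment in audio_segments:
        length = audio_duration(segment)      # play duration of this audio segment
        timeline.append((cursor, cursor + length))
        cursor += length
    return timeline
```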
In practical applications, the explanation video corresponding to the question to be solved may be of two types: a presentation-only explanation video and an explanation video that interacts with the user. The presentation-only explanation video can be generated from the animation display template as described above. When the question information of the question to be solved is acquired, a type-selection instruction input by the user may be acquired at the same time: if the instruction selects the presentation-only explanation video, it is generated by the method above; if it selects the interactive explanation video, the following method may be used.
In a possible implementation, an explanation video generated from the parameter analysis information corresponding to the problem-solving steps may consist of two parts. One part determines at least one interactive sub-video based on the parameter analysis information corresponding to each problem-solving step, the solving order of the steps, and a preset interactive animation display template. The other part generates each explanation sub-video corresponding to the question to be solved based on the parameter analysis information corresponding to each problem-solving step, the solving order of the steps, and a preset explanation template; the interactive sub-videos and the explanation sub-videos together form the explanation video.
An interactive sub-video is a sub-video whose subsequent presentation requires interaction between the user and the user side. For example, a presentation page of an interactive sub-video may be as shown in Fig. 4: each derivation step shown in the second position region requires a selection by the user, who makes the selection by triggering an option in the third position region, and the options include the calculation meaning of the calculation formula and the meanings of the calculation parameters in the parameter analysis information of that problem-solving step.
In practical applications, if the user selects correctly, that is, triggers the option giving the calculation meaning of the calculation formula for the current problem-solving step, prompt information may be added to the presentation page telling the user the choice is correct, and the information corresponding to the selected option is added to the display module in the second position region; if the user selects a wrong option, a voice prompt indicating the wrong choice may be played, the correct option is shown, and the correct option is added to the display module in the second position region.
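A rough sketch of this option-checking behaviour is given below; the callable names and the feedback wording are illustrative assumptions only.

```python
def handle_option_selection(selected: str, correct: str,
                            show_prompt, play_voice, fill_module) -> bool:
    """selected/correct: option texts; show_prompt/play_voice/fill_module:
    callables supplied by the player UI (show a prompt, play a voice prompt,
    add text to the current display module in the second position region)."""
    if selected == correct:
        show_prompt("Correct!")                      # tell the user the choice is right
        fill_module(selected)                        # add the chosen option's information
        return True
    play_voice("That is not the right option.")      # voice prompt for a wrong choice
    show_prompt(f"The correct option is: {correct}") # display the correct option
    fill_module(correct)                             # add the correct option instead
    return False
```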
In one possible implementation, the presentation sequence of each interactive sub-video may be preset, and the explanation sub-video may be presented after the interactive sub-video is presented.
In a possible implementation, at least one textual solving step corresponding to the question to be solved may be generated from the solving order of the problem-solving steps and the parameter analysis information corresponding to each step. The textual solving steps are then shown in sequence in the third position region (the audio corresponding to a textual step may be played synchronously while that step is shown), and after any textual solving step has been fully shown, a mark indicating that the calculation of that step is complete is added to the corresponding display module in the second position region.
For example, as shown in Fig. 5, after the speeds of the two vehicles and the corresponding calculation steps have been shown, a check mark may be added at the position of "sum of the two vehicles' speeds" to indicate that this step has been completed.
In another possible implementation, the analysis content corresponding to the question to be solved may be generated from the parameter analysis information corresponding to each problem-solving step and a preset parsing template, where the parsing template contains formalized language matched with the logical order of the different problem-solving steps and with their parameter analysis information.
In a specific implementation, the analysis content corresponding to the question to be solved may include first analysis content representing the solving logic and second analysis content representing the detailed solving steps, and the parsing template uses different formalized language to generate the first analysis content and the second analysis content.
In a possible implementation, the formalized language used to generate the first analysis content may be the same across the parsing templates of different questions to be solved, whereas the second analysis content includes analysis content corresponding to the different problem-solving steps, and the formalized language used to generate that analysis content may differ from step to step.
For example, taking the fruit-store question from step 102 again, the parsing template may be as shown in Table 2:
TABLE 2
(Table 2 is reproduced only as an image in the original publication.)
In a specific implementation, the blanks in the template above can be completed based on the parameter analysis information corresponding to each problem-solving step and the solving order of the steps.
Specifically: slot (1) is filled with the calculation meaning of the third step's formula; slot (2) with the calculation meaning of the second step's formula; slot (3) with the calculation meaning of the first step's formula; slot (4) with the first step's calculation formula; slots (5) and (6) with the meanings of the calculation parameters in the first step's formula; slots (7) and (8) with the calculation result of the first step's formula and the meaning of the corresponding parameter; slots (9) and (10) with the meanings of the calculation parameters in the second step's formula; slots (11) and (12) with the calculation result of the second step's formula and the meaning of the corresponding parameter; and slot (13) with the calculation result of the third step's formula (the answer to the question) and the meaning of the corresponding parameter.
After the parameter analysis information corresponding to the solving steps has been added to the preset parsing template in solving order, the textual analysis content shown in Table 3 is obtained:
TABLE 3
(Table 3 is reproduced only as an image in the original publication.)
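Because Tables 2 and 3 appear only as images, the sketch below illustrates the general shape of this template-filling step on a hypothetical, much-simplified template; the wording and slot layout are assumptions and are not the patented template.

```python
# Hypothetical, greatly simplified parsing template for a three-step question;
# the wording is an assumption and not the template shown in Table 2.
PARSING_TEMPLATE = ("To find {goal}, we first need {sub2}, and for that we first "
                    "need {sub1}. Step 1: {step1}. Step 2: {step2}. Step 3: {step3}.")

def fill_parsing_template(steps: list) -> str:
    """steps: solving-step records in solving order, each carrying a 'formula'
    and a 'formula_meaning' field as sketched earlier for Table 1."""
    return PARSING_TEMPLATE.format(
        goal=steps[2]["formula_meaning"],   # slot (1): meaning of the third step's formula
        sub2=steps[1]["formula_meaning"],   # slot (2): meaning of the second step's formula
        sub1=steps[0]["formula_meaning"],   # slot (3): meaning of the first step's formula
        step1=steps[0]["formula"],
        step2=steps[1]["formula"],
        step3=steps[2]["formula"],
    )
```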
Based on the same concept, the embodiment of the present disclosure further provides a video display method, as shown in fig. 6, which is a schematic flow chart of the video display method provided by the embodiment of the present disclosure, and the method includes the following steps:
Step 601, initiating an explanation request in response to a preset trigger operation on a question to be solved, where the explanation request carries the question information of the question to be solved.
Step 602, receiving an explanation video corresponding to the question to be solved.
Step 603, displaying the explanation video on the user side, where the explanation video includes at least one problem-solving step of the question to be solved and parameter analysis information corresponding to each problem-solving step.
The explanation video may include interactive sub-videos and explanation sub-videos. In a possible implementation, in response to a trigger operation on any interactive sub-video in the explanation video, the explanation sub-video or interactive sub-video corresponding to that trigger operation is displayed.
Specifically, several triggerable buttons may be shown in an interactive sub-video, with different buttons corresponding to different explanation sub-videos or interactive sub-videos; after any button is triggered, the sub-video corresponding to that button is displayed.
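A minimal sketch of this client-side branching is given below; the registry structure and the play() callable are assumptions made for illustration.

```python
from typing import Callable, Dict

def on_button_triggered(button_id: str,
                        button_to_video: Dict[str, str],
                        play: Callable[[str], None]) -> None:
    """button_to_video: assumed registry mapping each triggerable button to the
    identifier of the explanation sub-video or interactive sub-video it opens;
    play: callable that displays a sub-video on the user side."""
    video_id = button_to_video.get(button_id)
    if video_id is not None:
        play(video_id)   # display the sub-video corresponding to the triggered button
```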
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same inventive concept, a video generation device corresponding to the video generation method is also provided in the embodiments of the present disclosure, and since the principle of solving the problem of the device in the embodiments of the present disclosure is similar to the video generation method in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 7, there is shown a schematic architecture diagram of a video generating apparatus according to an embodiment of the present disclosure, where the apparatus includes: an obtaining module 701, a first generating module 702, and a second generating module 703; wherein the content of the first and second substances,
an obtaining module 701, configured to obtain topic information of a topic to be solved;
a first generating module 702, configured to generate at least one problem solving step of the to-be-solved question and parameter analysis information corresponding to each problem solving step of the to-be-solved question, based on the question information of the to-be-solved question and a pre-trained problem solving model (see the sketch after this list);
a second generating module 703 is configured to generate an explanation video corresponding to the subject to be solved based on parameter analysis information corresponding to each problem solving step of the subject to be solved.
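The disclosure leaves the pre-trained problem solving model unspecified, so the sketch referred to above only assumes a callable that maps question text to structured steps; the field names ("order", "formula", "meaning") and the toy stand-in model are invented for illustration.

from typing import Callable, Dict, List

# Hypothetical signature: the model takes the question text and returns one
# record per problem solving step, carrying its parameter analysis information.
SolvingModel = Callable[[str], List[Dict[str, str]]]

def generate_steps(question_text: str, model: SolvingModel) -> List[Dict[str, str]]:
    """Run the pre-trained model and return the steps in solving order."""
    steps = model(question_text)
    return sorted(steps, key=lambda step: int(step["order"]))

def toy_model(question_text: str) -> List[Dict[str, str]]:
    # Stand-in for a real trained model; returns canned steps so the sketch runs.
    return [
        {"order": "2", "formula": "2 * 3 = 6", "meaning": "the price of three pens"},
        {"order": "1", "formula": "10 / 5 = 2", "meaning": "the price of one pen"},
    ]

ordered_steps = generate_steps("5 pens cost 10 yuan; how much do 3 pens cost?", toy_model)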
In a possible implementation manner, when generating an explanation video corresponding to the subject to be solved based on the parameter analysis information corresponding to each subject solving step of the subject to be solved, the second generating module 703 is configured to:
determining an animation display template matched with the to-be-solved problem;
adding the parameter analysis information to the animation display template based on the solving order corresponding to the problem solving steps of the to-be-solved question; determining audio information corresponding to the parameter analysis information (one possible way is sketched after this list);
and fusing the audio information and the animation display template added with the parameter analysis information to obtain an explanation video corresponding to the subject to be solved.
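The disclosure does not say how the audio information is produced; as one possible choice (the sketch referred to above), a text-to-speech package such as gTTS can turn each piece of parameter analysis text into a narration segment. The output directory and file naming are assumptions made for the sketch.

import os
from gtts import gTTS

def synthesize_analysis_audio(analysis_texts, out_dir="audio"):
    """Produce one narration segment per piece of parameter analysis text,
    in solving order, and return the paths of the generated files."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for i, text in enumerate(analysis_texts):
        path = os.path.join(out_dir, f"segment_{i:02d}.mp3")
        gTTS(text=text, lang="en").save(path)  # any TTS engine could be substituted
        paths.append(path)
    return paths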
In a possible implementation manner, the second generating module 703, when determining the animation demonstration template matching the subject to be solved, is configured to:
and determining the display attribute information of each display module in the animation display template based on at least one problem solving step of the to-be-solved problem, wherein the display attribute information comprises a display area and/or a display number.
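As a toy illustration, display attribute information can be derived from the number of problem solving steps by giving each step one display module and one region of the frame; the vertical-stacking rule and the frame size below are invented for the sketch.

def layout_display_modules(num_steps, frame_w=1280, frame_h=720):
    """Return display attribute information per display module: the display
    number and the display area (x, y, width, height), one module per step."""
    row_h = frame_h // max(num_steps, 1)
    return [
        {"display_number": i + 1,
         "display_area": (0, i * row_h, frame_w, row_h)}
        for i in range(num_steps)
    ]

# A three-step question yields three 1280 x 240 regions, stacked vertically.
modules = layout_display_modules(3)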
In a possible implementation manner, when the audio information is fused with the animation display template to which the parameter analysis information is added, to obtain the explanation video corresponding to the subject to be solved, the second generating module 703 is configured to:
determining the display duration of a display module corresponding to each section of audio information in the animation display template based on the playing duration of the audio information;
and fusing the audio information and the animation display template based on the display duration of each display module to obtain an explanation video corresponding to the to-be-solved subject.
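One way to let the display duration follow the audio duration is sketched below with moviepy 1.x, chosen only as an example toolkit rather than one named by the disclosure; it assumes each display module of the animation display template has already been rendered to a still image, and the file paths are placeholders.

from moviepy.editor import AudioFileClip, ImageClip, concatenate_videoclips

def fuse_audio_and_template(frames_and_audio, out_path="explanation.mp4"):
    """frames_and_audio: (image_path, audio_path) pairs, one per display
    module, in solving order. Each module is shown for exactly as long as
    its narration plays, then the clips are concatenated into one video."""
    clips = []
    for image_path, audio_path in frames_and_audio:
        audio = AudioFileClip(audio_path)
        clips.append(ImageClip(image_path)
                     .set_duration(audio.duration)
                     .set_audio(audio))
    concatenate_videoclips(clips).write_videofile(out_path, fps=24)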
In a possible implementation manner, when generating an explanation video corresponding to the subject to be solved based on the parameter analysis information corresponding to each subject solving step of the subject to be solved, the second generating module 703 is configured to:
determining at least one section of interactive sub-video based on the parameter analysis information corresponding to each problem solving step of the to-be-solved question, the problem solving order corresponding to each problem solving step, and a preset interactive animation display template;
and generating each explanation sub-video corresponding to the to-be-solved question based on the parameter analysis information corresponding to each problem solving step of the to-be-solved question, the problem solving sequence corresponding to each problem solving step and a preset explanation template, wherein the interactive sub-video and the explanation sub-video form the explanation video.
In a possible implementation, the parameter analysis information includes a plurality of the following items:
a calculation formula, unit information, the calculation meaning of the calculation formula, the meaning of each calculation parameter in the calculation formula, and the calculation type corresponding to the calculation formula.
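The items just listed map naturally onto a small record per problem solving step; a possible shape, with field names and example values invented for the sketch, is:

from dataclasses import dataclass
from typing import Dict

@dataclass
class ParameterAnalysis:
    """Parameter analysis information for one problem solving step."""
    formula: str                        # calculation formula, e.g. "20 - 6"
    unit: str                           # unit information, e.g. "yuan"
    formula_meaning: str                # calculation meaning of the formula
    parameter_meanings: Dict[str, str]  # meaning of each calculation parameter
    calculation_type: str               # calculation type, e.g. "subtraction"

step = ParameterAnalysis(
    formula="20 - 6",
    unit="yuan",
    formula_meaning="the change that should be returned",
    parameter_meanings={"20": "the amount paid", "6": "the total price of the pens"},
    calculation_type="subtraction",
)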
In a possible implementation manner, the second generating module 703 is further configured to:
generating analysis content corresponding to the to-be-solved item based on the parameter analysis information corresponding to each problem solving step and a preset analysis template; the parsing template contains a formalized language matched with the logic sequence and parameter parsing information of different problem solving steps.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Based on the same inventive concept, a video display apparatus corresponding to the video display method is also provided in the embodiments of the present disclosure, and since the principle of the apparatus in the embodiments of the present disclosure for solving the problem is similar to the video display method described above in the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 8, a schematic structural diagram of a video display apparatus according to an embodiment of the present disclosure is shown. The apparatus includes: a response module 801, a receiving module 802, and a display module 803; wherein:
a response module 801, configured to respond to a preset trigger operation for a topic to be solved and initiate an explanation request; wherein, the explanation request carries the subject information of the subject to be solved;
a receiving module 802, configured to receive an explanation video corresponding to the subject to be solved;
a display module 803, configured to display the explanation video through a user side; the explaining video comprises at least one solving step of the to-be-solved question and parameter analysis information corresponding to each solving step of the to-be-solved question.
In one possible embodiment, the explanation video comprises an interactive sub-video and an explanation sub-video;
the display module 803 is further configured to:
responding to a trigger operation aiming at any interactive sub-video in the explanation videos, and displaying the explanation sub-video or the interactive sub-video corresponding to the trigger operation.
Based on the same technical concept, the embodiments of the present disclosure also provide a computer device. Referring to fig. 9, a schematic structural diagram of a computer device 900 provided in an embodiment of the present disclosure is shown; the device includes a processor 901, a memory 902, and a bus 903. The memory 902 is used for storing execution instructions and includes a memory 9021 and an external memory 9022. The memory 9021, also referred to as an internal memory, is configured to temporarily store operation data in the processor 901 and data exchanged with the external memory 9022, such as a hard disk; the processor 901 exchanges data with the external memory 9022 through the memory 9021. When the computer device 900 runs, the processor 901 communicates with the memory 902 through the bus 903, so that the processor 901 executes the following instructions:
acquiring question information of a question to be solved;
generating at least one problem solving step of the to-be-solved object and parameter analysis information corresponding to each problem solving step of the to-be-solved object based on the problem information of the to-be-solved object and a pre-trained problem solving model;
and generating an explanation video corresponding to the to-be-solved subject based on the parameter analysis information corresponding to each problem solving step of the to-be-solved subject.
In a possible implementation manner, in the instructions executed by the processor 901, the generating an explanation video corresponding to the subject to be solved based on the parameter parsing information corresponding to each problem solving step of the subject to be solved includes:
determining an animation display template matched with the to-be-solved problem;
adding the parameter analysis information to the animation display template based on the solving sequence corresponding to the solving step of the object to be solved; determining audio information corresponding to the parameter analysis information;
and fusing the audio information and the animation display template added with the parameter analysis information to obtain an explanation video corresponding to the subject to be solved.
In a possible implementation, in the instructions executed by the processor 901, the determining of the animation display template matching the subject to be solved includes:
and determining the display attribute information of each display module in the animation display template based on at least one problem solving step of the to-be-solved problem, wherein the display attribute information comprises a display area and/or a display number.
In a possible implementation manner, in the instructions executed by the processor 901, fusing the audio information with the animation display template to which the parameter parsing information is added, to obtain the explanation video corresponding to the topic to be solved, includes:
determining the display duration of a display module corresponding to each section of audio information in the animation display template based on the playing duration of the audio information;
and fusing the audio information and the animation display template based on the display duration of each display module to obtain an explanation video corresponding to the to-be-solved subject.
In a possible implementation manner, in the instructions executed by the processor 901, the generating an explanation video corresponding to the subject to be solved based on the parameter parsing information corresponding to each problem solving step of the subject to be solved includes:
determining at least one section of interactive sub-video based on the parameter analysis information corresponding to each problem solving step of the to-be-solved question, the problem solving order corresponding to each problem solving step, and a preset interactive animation display template;
and generating each explanation sub-video corresponding to the to-be-solved question based on the parameter analysis information corresponding to each problem solving step of the to-be-solved question, the problem solving sequence corresponding to each problem solving step and a preset explanation template, wherein the interactive sub-video and the explanation sub-video form the explanation video.
In a possible implementation manner, in the instructions executed by the processor 901, the parameter analysis information includes a plurality of the following items:
a calculation formula, unit information, the calculation meaning of the calculation formula, the meaning of each calculation parameter in the calculation formula, and the calculation type corresponding to the calculation formula.
In a possible implementation manner, in the instructions executed by the processor 901, the method further includes:
generating analysis content corresponding to the to-be-solved item based on the parameter analysis information corresponding to each problem solving step and a preset analysis template; the parsing template contains a formalized language matched with the logic sequence and parameter parsing information of different problem solving steps.
Based on the same technical concept, the embodiments of the present disclosure also provide a computer device. Referring to fig. 10, a schematic structural diagram of a computer device 1000 provided in an embodiment of the present disclosure is shown; the device includes a processor 1001, a memory 1002, and a bus 1003. The memory 1002 is used for storing execution instructions and includes a memory 10021 and an external memory 10022. The memory 10021, also referred to as an internal memory, is configured to temporarily store operation data in the processor 1001 and data exchanged with the external memory 10022, such as a hard disk; the processor 1001 exchanges data with the external memory 10022 through the memory 10021. When the computer device 1000 runs, the processor 1001 communicates with the memory 1002 through the bus 1003, so that the processor 1001 executes the following instructions:
responding to a preset trigger operation aiming at the to-be-solved question, and initiating an explanation request; wherein, the explanation request carries the subject information of the subject to be solved;
receiving an explanation video corresponding to the to-be-solved subject;
displaying the explanation video through a user side; the explaining video comprises at least one solving step of the to-be-solved question and parameter analysis information corresponding to each solving step of the to-be-solved question.
In one possible implementation, in the instructions executed by the processor 1001, the explanation video includes an interactive sub-video and an explanation sub-video;
the method further comprises the following steps:
responding to a trigger operation aiming at any interactive sub-video in the explanation videos, and displaying the explanation sub-video or the interactive sub-video corresponding to the trigger operation.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the video generating and video displaying method in the foregoing method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the video generation and video display method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the steps of the video generation and video display method described in the embodiments of the above methods, and reference may be specifically made to the embodiments of the above methods, which are not described herein again.
The embodiments of the present disclosure also provide a computer program which, when executed by a processor, implements any one of the methods of the foregoing embodiments. The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a software development kit (SDK) or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can still modify, or readily conceive of changes to, the technical solutions described in the foregoing embodiments, or replace some of their technical features with equivalents, within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (13)

1. A method of video generation, comprising:
acquiring question information of a question to be solved;
generating at least one problem solving step of the to-be-solved object and parameter analysis information corresponding to each problem solving step of the to-be-solved object based on the problem information of the to-be-solved object and a pre-trained problem solving model;
and generating an explanation video corresponding to the to-be-solved subject based on the parameter analysis information corresponding to each problem solving step of the to-be-solved subject.
2. The method according to claim 1, wherein the generating an explanation video corresponding to the subject to be solved based on the parameter parsing information corresponding to each subject solving step of the subject to be solved comprises:
determining an animation display template matched with the to-be-solved problem;
adding the parameter analysis information to the animation display template based on the solving sequence corresponding to the solving step of the object to be solved; determining audio information corresponding to the parameter analysis information;
and fusing the audio information and the animation display template added with the parameter analysis information to obtain an explanation video corresponding to the subject to be solved.
3. The method of claim 2, wherein the determining an animation demonstration template matching the problem to be solved comprises:
and determining the display attribute information of each display module in the animation display template based on at least one problem solving step of the to-be-solved problem, wherein the display attribute information comprises a display area and/or a display number.
4. The method according to claim 3, wherein the fusing the audio information with the animation display template to which the parameter parsing information is added to obtain the explanation video corresponding to the subject to be solved comprises:
determining the display duration of a display module corresponding to each section of audio information in the animation display template based on the playing duration of the audio information;
and fusing the audio information and the animation display template based on the display duration of each display module to obtain an explanation video corresponding to the to-be-solved subject.
5. The method according to claim 1, wherein the generating an explanation video corresponding to the subject to be solved based on the parameter parsing information corresponding to each subject solving step of the subject to be solved comprises:
determining at least one section of interactive sub-video based on the parameter analysis information corresponding to each problem solving step of the to-be-solved question, the problem solving order corresponding to each problem solving step, and a preset interactive animation display template;
and generating each explanation sub-video corresponding to the to-be-solved question based on the parameter analysis information corresponding to each problem solving step of the to-be-solved question, the problem solving sequence corresponding to each problem solving step and a preset explanation template, wherein the interactive sub-video and the explanation sub-video form the explanation video.
6. The method of claim 1, wherein the parameter analysis information comprises a plurality of the following items:
a calculation formula, unit information, the calculation meaning of the calculation formula, the meaning of each calculation parameter in the calculation formula, and the calculation type corresponding to the calculation formula.
7. The method of claim 1, further comprising:
generating analysis content corresponding to the to-be-solved item based on the parameter analysis information corresponding to each problem solving step and a preset analysis template; the parsing template contains a formalized language matched with the logic sequence and parameter parsing information of different problem solving steps.
8. A method for video presentation, comprising:
responding to a preset trigger operation aiming at the to-be-solved question, and initiating an explanation request; wherein, the explanation request carries the subject information of the subject to be solved;
receiving an explanation video corresponding to the to-be-solved subject;
displaying the explanation video through a user side; the explaining video comprises at least one solving step of the to-be-solved question and parameter analysis information corresponding to each solving step of the to-be-solved question.
9. The method of claim 8, wherein the explanation video comprises an interactive sub-video and an explanation sub-video;
the method further comprises the following steps:
responding to a trigger operation aiming at any interactive sub-video in the explanation videos, and displaying the explanation sub-video or the interactive sub-video corresponding to the trigger operation.
10. A video generation apparatus, comprising:
the acquisition module is used for acquiring the question information of the questions to be solved;
the first generation module is used for generating at least one problem solving step of the to-be-solved problem and parameter analysis information corresponding to each problem solving step of the to-be-solved problem based on the problem information of the to-be-solved problem and a pre-trained problem solving model;
and the second generation module is used for generating an explanation video corresponding to the to-be-solved question based on the parameter analysis information corresponding to each problem solving step of the to-be-solved question.
11. A video presentation apparatus, comprising:
the response module is used for responding to preset trigger operation aiming at the to-be-solved question and initiating an explanation request; wherein, the explanation request carries the subject information of the subject to be solved;
the receiving module is used for receiving the explanation video corresponding to the to-be-solved subject;
the display module is used for displaying the explanation video through a user side; the explaining video comprises at least one solving step of the to-be-solved question and parameter analysis information corresponding to each solving step of the to-be-solved question.
12. A computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when a computer device is run, the machine-readable instructions when executed by the processor performing the steps of the video generation method of any one of claims 1 to 7 or performing the steps of the video presentation method of any one of claims 8 to 9.
13. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, is adapted to carry out the steps of the video generation method according to any one of claims 1 to 7 or the steps of the video presentation method according to any one of claims 8 to 9.
CN202011054276.XA 2020-09-30 2020-09-30 Video generation method, video display method and device Pending CN112188232A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011054276.XA CN112188232A (en) 2020-09-30 2020-09-30 Video generation method, video display method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011054276.XA CN112188232A (en) 2020-09-30 2020-09-30 Video generation method, video display method and device

Publications (1)

Publication Number Publication Date
CN112188232A true CN112188232A (en) 2021-01-05

Family

ID=73945986

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011054276.XA Pending CN112188232A (en) 2020-09-30 2020-09-30 Video generation method, video display method and device

Country Status (1)

Country Link
CN (1) CN112188232A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105788373A (en) * 2015-12-24 2016-07-20 广东小天才科技有限公司 Animation teaching method and system
CN106227335A (en) * 2016-07-14 2016-12-14 广东小天才科技有限公司 Preview teaching materials and the interactive learning method of video classes and Applied Learning client
CN109003478A (en) * 2018-08-07 2018-12-14 广东小天才科技有限公司 A kind of learning interaction method and facility for study
US20200250608A1 (en) * 2019-01-31 2020-08-06 Dell Products L.P. Providing feedback by evaluating multi-modal data using machine learning techniques
CN110378812A (en) * 2019-05-20 2019-10-25 北京师范大学 A kind of adaptive on-line education system and method
CN110750624A (en) * 2019-10-30 2020-02-04 百度在线网络技术(北京)有限公司 Information output method and device
CN111369403A (en) * 2020-02-27 2020-07-03 北京字节跳动网络技术有限公司 Problem solving demonstration method and device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112447073A (en) * 2020-12-11 2021-03-05 北京有竹居网络技术有限公司 Explanation video generation method, explanation video display method and device
CN115035756A (en) * 2021-03-08 2022-09-09 北京有竹居网络技术有限公司 Method and device for generating English problem solving video, electronic equipment and storage medium
CN116682299A (en) * 2023-05-30 2023-09-01 武汉木仓科技股份有限公司 Question bank system with function of arousing coaching at any time and learning method

Similar Documents

Publication Publication Date Title
CN112188232A (en) Video generation method, video display method and device
CN111353037B (en) Topic generation method and device and computer readable storage medium
US10963760B2 (en) Method and apparatus for processing information
CN111343496A (en) Video processing method and device
CN112183048A (en) Automatic problem solving method and device, computer equipment and storage medium
CN109801527B (en) Method and apparatus for outputting information
CN105989112B (en) A kind of method and server of application program classification
CN111339420A (en) Image processing method, image processing device, electronic equipment and storage medium
CN108877334B (en) Voice question searching method and electronic equipment
CN109978139B (en) Method, system, electronic device and storage medium for automatically generating description of picture
CN113536172B (en) Encyclopedia information display method and device and computer storage medium
CN109783613B (en) Question searching method and system
CN113344754A (en) Teaching plan generation method and device, computer equipment and storage medium
CN112447073A (en) Explanation video generation method, explanation video display method and device
CN113705653A (en) Model generation method and device, electronic device and storage medium
CN113850898A (en) Scene rendering method and device, storage medium and electronic equipment
CN111859970B (en) Method, apparatus, device and medium for processing information
CN117033599A (en) Digital content generation method and related equipment
CN116662496A (en) Information extraction method, and method and device for training question-answering processing model
CN111737288B (en) Search control method, device, terminal equipment, server and storage medium
CN114528494A (en) Information pushing method, device, equipment and storage medium
CN114550545A (en) Course generation method, course display method and device
CN109145284A (en) Information processing method and device
CN113806511A (en) Live broadcast question-answer interaction method, device, equipment and storage medium
KR101520755B1 (en) Smart-device with app possible Learning using OMR Solving and Just Solving and OX Quiz.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210105)