CN111246248A - Statement meaning dynamic presentation method and device, electronic equipment and storage medium

Info

Publication number
CN111246248A
Authority
CN
China
Prior art keywords
meaning
sentence
key frame
group
frame images
Legal status
Pending
Application number
CN202010063274.0A
Other languages
Chinese (zh)
Inventor
徐利民
魏淑芳
陆勇
姜俊杰
Current Assignee
Topronin Beijing Education Technology Co Ltd
Original Assignee
Topronin Beijing Education Technology Co Ltd
Application filed by Topronin Beijing Education Technology Co Ltd
Priority to CN202010063274.0A
Publication of CN111246248A
Legal status: Pending (current)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method and device for dynamically presenting sentence meaning, an electronic device, and a storage medium, offering a new way of presenting the meaning of a sentence dynamically. The method comprises the following steps: acquiring an action video generated according to the meaning of a target sentence; extracting a group of key frame images from the action video; and generating a frame animation from the group of key frame images.

Description

Statement meaning dynamic presentation method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of image data processing, and in particular to a method and device for dynamically presenting sentence meaning, an electronic device, and a storage medium.
Background
In the prior art, still pictures or short videos are generally used to help users understand and memorize certain sentences. However, a still picture can only depict concrete nouns and works poorly for verbs or phrases; a short video, while rich and vivid in expression, often mixes in distracting information such as the surrounding environment, which disperses the user's attention and also consumes a large amount of data traffic.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and device for dynamically presenting sentence meaning, an electronic device, and a storage medium, so as to solve the prior-art problem that neither of the two existing presentation modes, still pictures and short videos, displays the meaning of a sentence well.
A first aspect of the invention provides a method for dynamically presenting sentence meaning, comprising the following steps: acquiring an action video generated according to the meaning of a target sentence; extracting a group of key frame images from the action video; and generating a frame animation from the group of key frame images.
In one embodiment, before extracting the group of key frame images from the action video, the method further comprises: preprocessing the action video.
In one embodiment, preprocessing the action video comprises: uniformly adjusting at least one of the format, size, and duration of the action video.
In one embodiment, preprocessing the action video comprises: converting the action video to MP4 format; and/or resizing the action video to 356 x 200 dpi; and/or adjusting the duration of the action video to within 6 seconds.
In one embodiment, extracting the group of key frame images from the action video comprises: extracting the group of key frame images from the action video using video editing software.
In one embodiment, after extracting the group of key frame images from the action video, the method further comprises: checking whether the playing effect of the group of key frame images meets a predetermined playing requirement.
In one embodiment, before acquiring the action video generated according to the meaning of the target sentence, the method further comprises: splitting the meaning of the target sentence to obtain at least one scene description sentence; generating action script data according to the at least one scene description sentence; and generating the action video according to the action script data.
In one embodiment, the target sentence includes at least one of a verb and a verb phrase.
A second aspect of the invention provides a device for dynamically presenting sentence meaning, comprising: an acquisition module that acquires an action video generated according to the meaning of a target sentence; an extraction module that extracts a group of key frame images from the action video; and a generation module that generates a frame animation from the group of key frame images.
In one embodiment, the device further comprises a preprocessing module for preprocessing the action video.
In one embodiment, the preprocessing module is specifically configured to uniformly adjust at least one of the format, size, and duration of the action video.
In one embodiment, the preprocessing module is specifically configured to convert the action video to MP4 format; and/or resize the action video to 356 x 200 dpi; and/or adjust the duration of the action video to within 6 seconds.
In one embodiment, the extraction module is specifically configured to extract the group of key frame images from the action video using Pr (Adobe Premiere) software.
In one embodiment, the device further comprises a checking module configured to check whether the playing effect of the group of key frame images meets a predetermined playing requirement; in this case, generating the frame animation from the group of key frame images comprises: generating the frame animation from the group of key frame images when their playing effect meets the predetermined playing requirement.
In one embodiment, the device further comprises a splitting module, a first generation submodule, and a second generation submodule. The splitting module is used to split the meaning of the target sentence to obtain at least one scene description sentence; the first generation submodule is used to generate action script data according to the at least one scene description sentence; and the second generation submodule is used to generate the action video according to the action script data.
A third aspect of the invention provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable by the processor, wherein the processor, when executing the computer program, implements the steps of the method for dynamically presenting sentence meaning provided in any of the above embodiments.
A fourth aspect of the invention provides a storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the method for dynamically presenting sentence meaning provided in any of the above embodiments.
According to the method and device for dynamically presenting sentence meaning, the electronic device, and the storage medium provided by the invention, a group of key frame images is selected from an action video to make a frame animation, and the meaning of the sentence is presented dynamically in the form of the frame animation. On one hand, a large amount of redundant information in the short video is removed, which helps focus the user's attention and reduces data traffic cost; on the other hand, compared with a still-picture presentation, the dynamic presentation is richer and more vivid and reinforces memory.
Drawings
Fig. 1 shows an exemplary system architecture to which the method and apparatus for dynamically presenting sentence meanings provided by the embodiments of the present invention can be applied.
Fig. 2 is a flowchart of a method for dynamically presenting sentence meanings according to a first embodiment of the present invention.
Fig. 3 is a flowchart of a sentence meaning dynamic presentation method according to a second embodiment of the present invention.
Fig. 4 is a flowchart of a sentence meaning dynamic presentation method according to a third embodiment of the present invention.
Fig. 5 is a flowchart of a sentence meaning dynamic presentation method according to a fourth embodiment of the present invention.
Fig. 6 is a block diagram of a device for dynamically presenting sentence meanings according to a first embodiment of the present invention.
Fig. 7 is a block diagram of a sentence meaning dynamic presentation device according to a second embodiment of the present invention.
Fig. 8 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 shows an exemplary system architecture to which the method and apparatus for dynamically presenting sentence meanings provided by the embodiments of the present invention can be applied. The system architecture can be used in a school or network teaching system, and as shown in fig. 1, the system architecture 100 includes a terminal device 101, a network 102, and a server 103.
The network 102 serves as a medium for providing a communication link between the terminal device 101 and the server 103. The network 102 may include various types of connections, such as wired communication links, wireless communication links, or fiber optic cables. The terminal device 101 may be any of a variety of electronic devices having a display screen, including but not limited to smart phones, tablets, portable computers, and desktop computers. The server 103 may be a server that provides various services. In this way, a user can use the terminal device 101 to interact with the server 103 through the network 102 to receive or send messages.
In one embodiment, the method for dynamically presenting sentence meaning provided by the embodiments of the present invention is executed by the server 103, and accordingly the device for dynamically presenting sentence meaning is disposed in the server. When the server runs a program that executes the method, a frame animation paraphrasing the meaning of the sentence is generated. The server 103 then transmits the frame animation to the terminal device 101 in response to a request from the terminal device 101, and the terminal device 101 plays the frame animation. In other embodiments, some terminal devices 101 have server-like capabilities, so they can execute the steps of the method themselves and are provided with a corresponding device for dynamically presenting sentence meaning.
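As a minimal illustration of this request/response flow, assuming a Flask-based server and a pre-generated GIF stored on disk (the framework, route name, and file layout are assumptions for the sketch, not part of the patent):

from flask import Flask, send_file  # Flask assumed as the serving framework

app = Flask(__name__)

@app.route("/animation/<sentence_id>")
def get_frame_animation(sentence_id):
    """Return the pre-generated frame animation for the requested sentence."""
    # In this sketch, each target sentence's animation is stored as <sentence_id>.gif.
    return send_file(f"animations/{sentence_id}.gif", mimetype="image/gif")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)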
It should be understood that the number of terminal devices 101, networks 102, and servers 103 shown in fig. 1 is merely illustrative. Any number of terminal devices 101, networks 102, and servers 103 may be provided according to actual needs. For example, the server 103 may be a server cluster composed of a plurality of servers.
Fig. 2 is a flowchart of a method for dynamically presenting sentence meanings according to a first embodiment of the present invention. As shown in fig. 2, the method 200 for dynamically presenting the meaning of the sentence includes:
step S210, an action video generated according to the meaning of the target sentence is acquired.
The target sentence may be a word, a phrase, or a sentence. Since the dynamic presentation method involves a time element, it is better suited to target sentences that themselves carry a time element, such as a verb or a verb phrase.
The action video includes a subject object and an action performed by the subject object. The subject object may be a physical object, such as a real person, or a virtual object, such as an animated character. The action of the subject object should reflect the meaning of the target sentence, so that the action video serves to interpret the meaning of the target sentence.
In step S220, a group of key frame images is extracted from the action video.
The group of key frame images is a sequence of frame images that can completely convey the meaning of the target sentence. Different extraction strategies generally yield different groups of key frame images from the same action video. However, because an action video may contain frame images with identical content, the groups of key frame images extracted from the same action video may coincide even when different extraction strategies are used.
In one embodiment, step S220 is specifically performed as follows: extract one key frame image from the action video at predetermined time intervals, traversing the whole action video to obtain the group of key frame images. The predetermined interval may be set manually.
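The following is a minimal sketch of this interval-based extraction, assuming the OpenCV (cv2) library; the 0.5-second interval is an illustrative value rather than one specified by the patent:

import cv2  # OpenCV, assumed available

def extract_key_frames(video_path, interval_s=0.5):
    """Extract one frame every interval_s seconds from the action video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25  # fall back to 25 fps if metadata is missing
    step = max(1, int(round(fps * interval_s)))
    key_frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:  # keep one frame per predetermined interval
            key_frames.append(frame)
        index += 1
    cap.release()
    return key_frames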
In one embodiment, the group of key frame images is extracted from the action video using video editing software, such as Pr (Adobe Premiere).
In step S230, a frame animation is generated from the group of key frame images.
Any of GIF, JavaScript, or CSS3 Animation may be used to implement the frame animation; the specific process belongs to the prior art and is not described here.
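The patent leaves the animation format open; as one illustration, assuming the GIF route and the Pillow library, the extracted key frames could be assembled as follows (the per-frame duration is an illustrative value):

from PIL import Image  # Pillow, assumed available
import cv2

def frames_to_gif(key_frames, out_path="frame_animation.gif", frame_ms=200):
    """Assemble BGR key frames (as returned by OpenCV) into a looping GIF."""
    images = [Image.fromarray(cv2.cvtColor(f, cv2.COLOR_BGR2RGB)) for f in key_frames]
    images[0].save(
        out_path,
        save_all=True,
        append_images=images[1:],
        duration=frame_ms,  # display time per frame, in milliseconds
        loop=0,  # 0 means loop forever
    )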
According to the method for dynamically presenting sentence meaning provided by this embodiment, a group of key frame images is selected from the action video to make a frame animation, and the meaning of the sentence is presented dynamically in the form of the frame animation. On one hand, a large amount of redundant information in a single short video is removed, which helps focus the user's attention and reduces data traffic cost; on the other hand, compared with a still-picture presentation, the dynamic presentation is richer and more vivid and reinforces memory.
Fig. 3 is a flowchart of a sentence meaning dynamic presentation method according to a second embodiment of the present invention. As shown in fig. 3, the dynamic presentation method 300 differs from the dynamic presentation method 200 shown in fig. 2 only in that the method 300 further includes, before step S220: step S310, preprocessing the action video.
The preprocessing mentioned here includes uniformly adjusting at least one of the format, size, and duration of the action video, so that the group of key frames subsequently extracted from it can satisfy the production requirements of the frame animation. For example, in one embodiment, step S310 is specifically performed as: converting the action video to MP4 format; and/or resizing the action video to 356 x 200 dpi; and/or adjusting the duration of the action video to within 6 seconds.
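A minimal sketch of this kind of normalization, assuming the ffmpeg command-line tool is installed; 356 x 200 is treated here as a pixel resolution, and the exact flags are illustrative:

import subprocess

def preprocess_action_video(src_path, dst_path="normalized.mp4"):
    """Normalize format (MP4), size (356x200) and duration (<= 6 s) in one ffmpeg pass."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", src_path,
            "-t", "6",  # cap the duration at 6 seconds
            "-vf", "scale=356:200",  # resize to the target resolution
            "-c:v", "libx264",  # H.264 video in an MP4 container
            "-an",  # drop the audio track, which the frame animation does not need
            dst_path,
        ],
        check=True,
    )
    return dst_path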
According to the method for dynamically presenting sentence meaning provided by this embodiment, the action video is preprocessed before the frame extraction operation is performed on it, which facilitates the subsequent production of the frame animation.
Fig. 4 is a flowchart of a sentence meaning dynamic presentation method according to a third embodiment of the present invention. As shown in fig. 4, the dynamic presentation method 400 differs from the dynamic presentation method 200 shown in fig. 2 only in that the method 400 further includes, after step S220: step S410, checking whether the playing effect of the group of key frame images meets a predetermined playing requirement. In this case, step S230 is specifically executed as step S420: generating the frame animation from the group of key frame images when their playing effect meets the predetermined playing requirement.
The predetermined playing requirement mentioned here includes whether the playing effect is clear and smooth enough to satisfy viewing requirements. In one embodiment, the group of key frame images is played as a slide show using ACDSee software; if the slide show effect satisfies the viewing requirements, it is determined that the extracted group of key frame images can be used for the subsequent frame animation production.
According to the method for dynamically presenting sentence meaning provided by this embodiment, the group of key frame images is pre-played before being made into the frame animation, and whether the extracted group of key frames meets the requirements is judged from the pre-playing effect, which safeguards the subsequent frame animation production process.
In one embodiment, before step S410, the method further includes: checking whether the number of images in the group of key frame images is less than or equal to a predetermined number. The predetermined number may be set manually, for example 30 frames.
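The count check, together with a rough programmatic stand-in for the manual slide-show preview, can be sketched as follows; OpenCV's window display is used purely as an illustrative substitute for ACDSee, and the 30-frame limit is the example value from the text:

import cv2

MAX_KEY_FRAMES = 30  # example predetermined number

def preview_key_frames(key_frames, frame_ms=200):
    """Reject oversized groups, then play the frames once as a quick slide show."""
    if len(key_frames) > MAX_KEY_FRAMES:
        return False
    for frame in key_frames:
        cv2.imshow("key frame preview", frame)
        cv2.waitKey(frame_ms)  # hold each frame briefly, like a slide
    cv2.destroyAllWindows()
    return True  # the actual clarity/smoothness judgment remains with the reviewer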
Fig. 5 is a flowchart of a sentence meaning dynamic presentation method according to a fourth embodiment of the present invention. As shown in fig. 5, the dynamic presentation method 500 differs from the dynamic presentation method 200 shown in fig. 2 only in that the method 500 further includes, before step S210:
step S510, splitting the meaning of the target sentence to obtain at least one scene description sentence.
The meaning of each target sentence can be expressed through a corresponding scene. To refine the key points of the scene, that is, to simplify the scene as much as possible while ensuring that it can still fully express the meaning of the target sentence, the scene is described by at least one scene description sentence.
For example, if the target sentence is "dressed up", the corresponding scene description sentences include: scene one, a child puts on a beautiful skirt in front of a mirror, and the mother tidies the child's clothes and helps check that they are worn neatly; scene two, after dressing up, the child happily spins around several times in front of the mirror.
For another example, if the target sentence is "get up", the corresponding scene description sentence includes: scene one, a person lying on a bed gets up and leaves.
In one embodiment, step S510 is specifically performed as follows: first, the target sentence is segmented based on a keyword recognition technique to identify at least one keyword contained in it; then, the identified at least one keyword is input into a deep neural network to obtain at least one corresponding scene description sentence.
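A highly simplified sketch of this two-stage split follows. The patent does not name a concrete segmenter or network, so the keyword vocabulary and the scene_model interface below are hypothetical placeholders:

from typing import List

def extract_keywords(target_sentence: str, vocabulary: set) -> List[str]:
    """Naive keyword recognition: keep tokens that appear in a known keyword vocabulary."""
    return [tok for tok in target_sentence.lower().split() if tok in vocabulary]

def split_into_scenes(target_sentence: str, vocabulary: set, scene_model) -> List[str]:
    """Feed the recognized keywords to a trained scene-description model (interface assumed)."""
    keywords = extract_keywords(target_sentence, vocabulary)
    # scene_model.generate stands in for whatever deep neural network maps
    # keywords to one or more scene description sentences.
    return scene_model.generate(keywords)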
Step S520, generating action script data according to the at least one scene description sentence.
The action script data includes the parameter information required to generate the action video, for example the props, the number of characters, the characters themselves, the locations, and the voice-over content needed to present the at least one scene description sentence.
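One way to organize such script data is sketched below as a plain data structure; the field names mirror the examples in the text, and the sample values are purely illustrative:

from dataclasses import dataclass, field
from typing import List

@dataclass
class ActionScript:
    """Parameter information needed to produce the action video for one scene."""
    scene_description: str
    props: List[str] = field(default_factory=list)
    character_count: int = 1
    characters: List[str] = field(default_factory=list)
    location: str = ""
    voice_over: str = ""

# Illustrative instance for the "dressed up" example above.
script = ActionScript(
    scene_description="A child puts on a beautiful skirt in front of a mirror.",
    props=["mirror", "skirt"],
    character_count=2,
    characters=["child", "mother"],
    location="bedroom",
    voice_over="dressed up",
)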
Step S530, generating the action video according to the action script data.
In one embodiment, the action video is generated from a preset virtual character in combination with the action script data.
In one embodiment, a live-action video is shot according to the action script data and used as the action video.
According to the method for dynamically presenting sentence meaning provided by this embodiment, the action video is generated according to the action script. Because the action script is derived from at least one scene description sentence obtained by splitting the meaning of the target sentence, and that scene description sentence is a key-point refinement of the meaning of the target sentence, redundant information in the action video is reduced.
The invention also provides a device for dynamically presenting sentence meaning. Fig. 6 is a block diagram of a device for dynamically presenting sentence meanings according to a first embodiment of the present invention. As shown in fig. 6, the device 60 for dynamically presenting sentence meaning includes an acquisition module 61, an extraction module 62, and a generation module 63. The acquisition module 61 is configured to acquire an action video generated according to the meaning of the target sentence. The extraction module 62 is configured to extract a group of key frame images from the action video. The generation module 63 is configured to generate a frame animation from the group of key frame images.
In one embodiment, the extraction module 62 is specifically configured to extract one key frame image from the action video at predetermined time intervals and traverse the whole action video to obtain the group of key frame images.
In one embodiment, the extraction module 62 is specifically configured to extract the group of key frame images from the action video using Pr (Adobe Premiere) software.
According to the device for dynamically presenting sentence meaning provided by this embodiment, a group of key frame images is selected from the action video to make a frame animation, and the meaning of the sentence is presented dynamically in the form of the frame animation. On one hand, a large amount of redundant information in the short video is removed, which helps focus the user's attention and reduces data traffic cost; on the other hand, compared with a still-picture presentation, the dynamic presentation is richer and more vivid and reinforces memory.
Fig. 7 is a block diagram of a sentence meaning dynamic presentation device according to a second embodiment of the present invention. As shown in fig. 7, the device 70 for dynamically presenting sentence meaning further includes, on the basis of the device 60 shown in fig. 6, a preprocessing module 71 and a checking module 72. The preprocessing module 71 is used for preprocessing the action video. The checking module 72 is used for checking whether the playing effect of the group of key frame images meets a predetermined playing requirement.
The preprocessing operations performed by the preprocessing module 71 include uniformly adjusting at least one of the format, size, and duration of the action video. For example, the action video is converted to MP4 format; and/or resized to 356 x 200 dpi; and/or its duration is adjusted to within 6 seconds.
The predetermined playing requirement includes whether the playing effect is clear and smooth enough to meet the viewing requirements. In one embodiment, the group of key frame images is played as a slide show using ACDSee software.
In one embodiment, the device 70 for dynamically presenting sentence meaning further comprises a splitting module, a first generation submodule, and a second generation submodule. The splitting module is used to split the meaning of the target sentence to obtain at least one scene description sentence. The first generation submodule is used to generate action script data according to the at least one scene description sentence. The second generation submodule is used to generate the action video according to the action script data.
The dynamic presentation device for sentence meaning provided by this embodiment belongs to the same inventive concept as the dynamic presentation method for sentence meaning provided by the embodiment of the present invention, can execute the dynamic presentation method for sentence meaning provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects of the dynamic presentation method for sentence meaning. For technical details that are not described in detail in this embodiment, reference may be made to the dynamic presentation method for sentence meaning provided in this embodiment of the present invention, and details are not described here again.
It should be understood that although several modules or units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, according to exemplary embodiments of the invention, the features and functions of two or more modules/units described above may be implemented in one module/unit, whereas the features and functions of one module/unit described above may be further divided into implementations by a plurality of modules/units. Furthermore, some of the modules/units described above may be omitted in some application scenarios.
The invention also provides the electronic equipment. Fig. 8 is a block diagram of an electronic device according to an embodiment of the present invention. The electronic device 80 may be either or both of the terminal device 101 and the server 103 shown in fig. 1, or a stand-alone device separate therefrom that may communicate with the terminal device 101 and the server 103 to receive the collected input signals therefrom.
As shown in fig. 8, the electronic device 80 includes one or more processors 81 and memory 82.
The processor 81 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 80 to perform desired functions.
Memory 82 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 81 to implement the above-described dynamic presentation method of sentence meaning of the various embodiments of the application and/or other desired functionality. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 80 may further include: an input device 83 and an output device 84, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, when the electronic device 80 is the terminal device 101 or the server 103, the input device 83 may be a camera for capturing an input signal of a video picture. When the electronic device is a stand-alone device, the input means 83 may be a communication network connector for receiving the collected input signals from the terminal device 101 and the server 103.
The input device 83 may also include, for example, a keyboard, a mouse, and the like.
The output device 84 may output various information including the determined distance information, direction information, and the like to the outside. The output devices 84 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, among others.
Of course, for simplicity, only some of the components of the electronic device 80 relevant to the present application are shown in fig. 8, and components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 80 may include any other suitable components depending on the particular application.
The present invention also provides a storage medium on which a computer program is stored, which, when being executed by a processor, implements the steps of the method for dynamically representing meaning of a sentence provided by any of the above-mentioned embodiments of the present invention.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and the like that are within the spirit and principle of the present invention are included in the present invention.

Claims (11)

1. A method for dynamically presenting a meaning of a sentence, comprising:
acquiring an action video generated according to the meaning of a target sentence;
extracting a group of key frame images from the action video;
and generating a frame animation according to the group of key frame images.
2. The method for dynamically presenting a meaning of a sentence according to claim 1, further comprising, before extracting the group of key frame images from the action video:
preprocessing the action video.
3. The method for dynamically presenting a meaning of a sentence according to claim 2, wherein the preprocessing of the action video comprises:
uniformly adjusting at least one of the format, the size, and the duration of the action video.
4. The method for dynamically presenting a meaning of a sentence according to claim 2, wherein the preprocessing of the action video comprises:
converting the action video to MP4 format; and/or
resizing the action video to 356 x 200 dpi; and/or
adjusting the duration of the action video to within 6 seconds.
5. The method for dynamically presenting a meaning of a sentence according to claim 1, wherein the extracting of the group of key frame images from the action video comprises:
extracting the group of key frame images from the action video using video editing software.
6. The method for dynamically presenting a meaning of a sentence according to any one of claims 1 to 5, further comprising, after extracting the group of key frame images from the action video:
checking whether the playing effect of the group of key frame images meets a predetermined playing requirement,
wherein the generating of the frame animation from the group of key frame images comprises:
generating the frame animation from the group of key frame images when the playing effect of the group of key frame images meets the predetermined playing requirement.
7. The method for dynamically presenting a meaning of a sentence according to any one of claims 1 to 5, further comprising, before the acquiring of the action video generated according to the meaning of the target sentence:
splitting the meaning of the target sentence to obtain at least one scene description sentence;
generating action script data according to the at least one scene description sentence; and
generating the action video according to the action script data.
8. The method for dynamically presenting a meaning of a sentence according to any one of claims 1 to 5, wherein the target sentence comprises at least one of a verb and a verb phrase.
9. An apparatus for dynamically presenting a meaning of a sentence, comprising:
an acquisition module that acquires an action video generated according to the meaning of a target sentence;
an extraction module that extracts a group of key frame images from the action video; and
a generation module that generates a frame animation from the group of key frame images.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory for execution by the processor, characterized in that the processor implements the steps of the method for dynamic presentation of sentence meaning according to any of claims 1 to 8 when executing the computer program.
11. A storage medium on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of a method for dynamic presentation of a meaning of a sentence according to any one of claims 1 to 8.
CN202010063274.0A 2020-01-19 2020-01-19 Statement meaning dynamic presentation method and device, electronic equipment and storage medium Pending CN111246248A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010063274.0A CN111246248A (en) 2020-01-19 2020-01-19 Statement meaning dynamic presentation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010063274.0A CN111246248A (en) 2020-01-19 2020-01-19 Statement meaning dynamic presentation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111246248A true CN111246248A (en) 2020-06-05

Family

ID=70868165

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010063274.0A Pending CN111246248A (en) 2020-01-19 2020-01-19 Statement meaning dynamic presentation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111246248A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070214418A1 (en) * 2006-03-10 2007-09-13 National Cheng Kung University Video summarization system and the method thereof
CN101194289A (en) * 2005-06-10 2008-06-04 松下电器产业株式会社 Scenario generation device, scenario generation method, and scenario generation program
CN101477699A (en) * 2008-01-04 2009-07-08 白涛 Basic programming method for converting literal sentences into corresponding animation cartoons
CN106791480A (en) * 2016-11-30 2017-05-31 努比亚技术有限公司 A kind of terminal and video skimming creation method
US20180101504A1 (en) * 2016-10-07 2018-04-12 Joseph DiTomaso System and method for transposing web content
CN110457673A (en) * 2019-06-25 2019-11-15 北京奇艺世纪科技有限公司 A kind of natural language is converted to the method and device of sign language


Similar Documents

Publication Publication Date Title
CN108010112B (en) Animation processing method, device and storage medium
US11386933B2 (en) Image information processing method and apparatus, and storage medium
CN109218629B (en) Video generation method, storage medium and device
CN107979763B (en) Virtual reality equipment video generation and playing method, device and system
CN105872820A (en) Method and device for adding video tag
CN111629253A (en) Video processing method and device, computer readable storage medium and electronic equipment
CN111050023A (en) Video detection method and device, terminal equipment and storage medium
CN111107422A (en) Image processing method and device, electronic equipment and computer readable storage medium
CN112866577B (en) Image processing method and device, computer readable medium and electronic equipment
CN112785669B (en) Virtual image synthesis method, device, equipment and storage medium
US20230291978A1 (en) Subtitle processing method and apparatus of multimedia file, electronic device, and computer-readable storage medium
CN111654715A (en) Live video processing method and device, electronic equipment and storage medium
US20180143741A1 (en) Intelligent graphical feature generation for user content
CN111949908A (en) Media information processing method and device, electronic equipment and storage medium
CN106408623A (en) Character presentation method, device and terminal
CN117376660A (en) Subtitle element rendering method, device, equipment, medium and program product
CN112989112B (en) Online classroom content acquisition method and device
CN116528015A (en) Digital human video generation method and device, electronic equipment and storage medium
US12058410B2 (en) Information play control method and apparatus, electronic device, computer-readable storage medium and computer program product
CN116962848A (en) Video generation method, device, terminal, storage medium and product
CN111147894A (en) Sign language video generation method, device and system
CN111246248A (en) Statement meaning dynamic presentation method and device, electronic equipment and storage medium
US11532111B1 (en) Systems and methods for generating comic books from video and images
CN117319736A (en) Video processing method, device, electronic equipment and storage medium
CN113867874A (en) Page editing and displaying method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20200605)