WO2022083132A1 - Method and apparatus for generating an animation draft based on text paragraphs - Google Patents

Method and apparatus for generating an animation draft based on text paragraphs

Info

Publication number
WO2022083132A1
WO2022083132A1 PCT/CN2021/098990 CN2021098990W
Authority
WO
WIPO (PCT)
Prior art keywords
semantics
speech
group
constituent
animation
Prior art date
Application number
PCT/CN2021/098990
Other languages
English (en)
French (fr)
Inventor
邵猛
魏博
Original Assignee
深圳市前海手绘科技文化有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市前海手绘科技文化有限公司
Publication of WO2022083132A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3344Query execution using natural language analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/205Parsing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/253Grammatical analysis; Style critique

Definitions

  • The invention belongs to the technical field of hand-drawn videos, and in particular relates to a method, apparatus, electronic device and storage medium for generating an animation draft based on text paragraphs.
  • The traditional method of creating short videos from text paragraphs is to manually search for or create corresponding materials according to the content of the text paragraph, plan the layout of each element in the short video (type, position, shape, order of appearance, appearance time and so on), and then place the elements into a draft file.
  • This way of creating places high demands on the creative ability of the staff, and searching for materials and setting their order of appearance and positional relationships is a time-consuming process; the method therefore not only demands a great deal of skill from creators but also requires a large amount of creation time.
  • the present invention provides a method for generating animation drafts based on text paragraphs, and the method comprises the following steps:
  • The material matching the semantics is retrieved from the material library to generate an animation draft.
  • The user manually enters a text paragraph, for example "withered vines, old trees, crows at dusk; a small bridge, flowing water, people's homes".
  • Semantic analysis of the input text paragraph is performed by natural language processing software, which can identify the part of speech of each word in the input text paragraph and analyze the grammatical and logical relationships between the words. The materials in the material library are then matched according to the results of this analysis, and the basic animation information of each material is set precisely.
  • Each material that has been matched and had its basic animation information set is then stored in an animation draft file.
  • the step of analyzing the semantics of the text paragraph includes:
  • The technical effect is as follows: semantic analysis of the input text paragraph is performed by natural language processing software, which can parse the part of speech of each word in the input text paragraph and analyze the grammatical and logical relationships between the words; the materials in the material library are then matched according to the results of this analysis, and the basic animation information of each material is set precisely. Each material that has been matched and had its basic animation information set is then stored in an animation draft file, which completes the computation of the short video story line and prepares the short video for playback.
  • the parts of speech include nouns, pronouns, quantifiers, prepositions and verbs; the nouns, pronouns and quantifiers are used to match the material; the prepositions and the verbs are used to configure the material.
  • The selection of the elements appearing in the story line is determined according to the nouns, pronouns, prepositions and quantifiers in the text paragraph input by the user, while the order of appearance, appearance time and state of each element in the story line are determined based on the verbs and adjectives of the text paragraph entered by the user.
  • the grammatical structure includes: a subject-linking-verb-predicative structure, a subject-predicate structure, a verb-object structure, and the like.
  • The technical effect is that the grammatical structure of the input text paragraph may vary widely, for example fronted objects, subject-predicate inversion, postponed adverbials and so on; as long as the paragraph conforms to normal linguistic logic, the present invention can convert the input text paragraph into a hand-drawn draft for playback of a hand-drawn video.
  • An animation draft is generated based on the first material and the second material.
  • the selecting the first material through the constituent words in the first part-of-speech group includes:
  • the semantics of the constituent words in the first part-of-speech group are acquired, and the first material is selected based on the semantics of the constituent words.
  • the selecting the second material through the constituent words in the second part-of-speech group includes:
  • the semantics of the constituent words in the second part-of-speech group are acquired, and the second material is selected based on the semantics of the constituent words.
  • a short video project file is preconfigured, and the short video project file includes a first material group and a second material group;
  • the present invention also provides an application device for short videos based on text paragraphs, including:
  • the text acquisition module is used to acquire the selected text paragraph
  • a semantic analysis module for analyzing the semantics of the text paragraph
  • the retrieval module is configured to retrieve the material matching the semantics in the material library according to the semantics of the text paragraphs, so as to generate an animation draft.
  • the semantic analysis module includes:
  • a vocabulary parsing unit used for parsing the constituent vocabulary of the text paragraph to obtain the part of speech of the constituent vocabulary
  • a grammar parsing unit for parsing the grammatical structure of the text paragraph
  • a semantic analysis unit configured to analyze the semantics of the acquired text paragraph according to the grammatical structure and the part of speech.
  • the parts of speech include: nouns, pronouns, quantifiers, prepositions, and verbs; the nouns, pronouns, and quantifiers are used to match the material; the preposition and the verb are used to configure the material.
  • the grammatical structures handled by the grammar parsing unit include: the subject-linking-verb-predicative structure, the subject-predicate structure, and the verb-object structure.
  • a word segmentation module configured to divide a plurality of the constituent words into a first part-of-speech group and a second part-of-speech group based on the part of speech;
  • the first selection module is used to select the first material through the constituent vocabulary in the first part-of-speech group
  • the second selection module is used to select the second material through the constituent vocabulary in the second part-of-speech group
  • a generating module configured to generate an animation draft based on the first material and the second material.
  • the first selection module is further configured to perform the following steps, including:
  • the semantics of the constituent words in the first part-of-speech group are acquired, and the first material is selected based on the semantics of the constituent words.
  • the second selection module is further configured to perform the following steps, including:
  • the semantics of the constituent words in the second part-of-speech group are acquired, and the second material is selected based on the semantics of the constituent words.
  • a configuration module configured to preconfigure a short video project file, where the short video project file includes a first material group and a second material group;
  • a first obtaining unit configured to traverse the first material group based on the semantics of the constituent words in the first part-of-speech group to obtain the corresponding first material
  • the second obtaining unit is configured to traverse the second material group based on the semantics of the constituent words in the second part-of-speech group to obtain the corresponding second material.
  • the generating module is further configured to perform the following steps, including:
  • the present invention also provides an electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the computer program is executed in the processor to implement any one of the above methods.
  • the electronic device is a mobile terminal or a web terminal.
  • the present invention also provides a storage medium storing a computer program, and the computer program can implement any of the above methods when executed in a processor.
  • FIG. 1 is a flowchart of a method for generating an animation draft based on a text paragraph;
  • FIG. 2 is a flowchart of a method for analyzing the semantics of a text paragraph;
  • FIG. 3 is a method flowchart of a method for matching and configuring materials according to part of speech provided by an embodiment
  • FIG. 4 is an architecture diagram of a grammatical analysis method provided by an embodiment
  • FIG. 5 is a device architecture diagram of a text paragraph-based short video application device provided by an embodiment
  • FIG. 6 is a device architecture diagram of a semantic analysis module provided by an embodiment.
  • the term “storage medium” may be various media that can store computer programs, such as ROM, RAM, magnetic disk or optical disk.
  • the term "processor” can be CPLD (Complex Programmable Logic Device: Complex Programmable Logic Device), FPGA (Field-Programmable Gate Array: Field Programmable Gate Array), MCU (Microcontroller Unit: Micro Control Unit), PLC (Programmable Logic) Controller: programmable logic controller) and CPU (Central Processing Unit: central processing unit) and other chips or circuits with data processing functions.
  • the term "electronic device" may be any device with data processing and storage functions, and generally includes both fixed terminals and mobile terminals: fixed terminals such as desktop computers, and mobile terminals such as mobile phones, tablets and mobile robots. In addition, the technical features involved in the different embodiments of the present invention described later can be combined with each other as long as they do not conflict.
  • the present embodiment provides a method for generating an animation draft based on a text paragraph, comprising the following steps:
  • The user manually inputs a text paragraph, for example "withered vines, old trees, crows at dusk; a small bridge, flowing water, people's homes".
  • Semantic analysis of the input text paragraph is performed by natural language processing (NLP) software, which can identify the part of speech of each word in the input text paragraph and analyze the grammatical and logical relationships between the words. The materials in the material library are then matched according to the results of this analysis, and the basic animation information of each material is set precisely.
  • the present embodiment provides a method for analyzing the semantics of text paragraphs, including the steps:
  • this embodiment provides a method for matching and configuring materials according to part of speech, including steps:
  • The selection of the elements appearing in the story line is determined according to the nouns, pronouns, prepositions and quantifiers in the text paragraph input by the user, while the order of appearance, appearance time and state of each element in the story line are determined based on the verbs and adjectives of the text paragraph entered by the user.
  • the present embodiment provides a method for syntax analysis, including the steps:
  • The technical effect is that the grammatical structure of the input text paragraph may vary widely, for example fronted objects, subject-predicate inversion, postponed adverbials and so on; as long as the paragraph conforms to normal linguistic logic, the present invention can convert the input text paragraph into a hand-drawn draft for playback of a hand-drawn video.
  • this embodiment also provides an application device for short videos based on text paragraphs, including:
  • the text acquisition module is used to acquire the selected text paragraph
  • a semantic analysis module for analyzing the semantics of the text paragraph
  • the retrieval module is used to retrieve the material matching the semantics in the material library according to the semantics of the text paragraphs, so as to generate an animation draft;
  • this embodiment also provides a semantic analysis module, including steps:
  • a vocabulary parsing unit used for parsing the constituent vocabulary of the text paragraph to obtain the part of speech of the constituent vocabulary
  • a grammar parsing unit for parsing the grammatical structure of the text paragraph
  • a semantic analysis unit configured to analyze the semantics of the acquired text paragraph according to the grammatical structure and the part of speech.
  • the present invention further includes: dividing a plurality of the constituent words into a first part-of-speech group and a second part-of-speech group based on the part-of-speech.
  • the first part-of-speech group can be nouns, pronouns, prepositions, and quantifiers.
  • the second part-of-speech group can be verbs and adjectives.
  • the first material is selected through the constituent words in the first part-of-speech group.
  • Nouns, pronouns, prepositions, and quantifiers correspond to elements in the story line, and the first material may be an element in the story line at this time.
  • the second material is selected through the constituent words in the second part-of-speech group.
  • Verbs and adjectives form the backbone of the story line.
  • the second material can be the appearance sequence of each element, the appearance time, the switching state and the state of the element itself.
  • An animation draft is generated based on the first material and the second material. After the first material and the second material are obtained, the animation draft is generated, and the animation draft includes each element and the appearance sequence, appearance time, switching state and the state of the element itself.
  • the present invention selects the first material through the constituent words in the first part-of-speech group, including:
  • the semantics of the constituent words in the first part-of-speech group are acquired, and the first material is selected based on the semantics of the constituent words. Obtain the corresponding material types by grouping parts of speech, and then accurately locate the first material through the semantics of the constituent words.
  • For example, the constituent word may be "teacher".
  • In that case the material type of the first material may correspond to "occupation", which includes teachers, principals, police officers, lawyers and so on; the material is then precisely located as "teacher" according to the semantics.
  • the selecting the second material through the constituent words in the second part-of-speech group includes:
  • the semantics of the constituent words in the second part-of-speech group are acquired, and the second material is selected based on the semantics of the constituent words.
  • For example, the constituent word may be "leap".
  • In that case the material type of the corresponding second material may be "action", which includes jumping, running, hitting, leaping and so on; the material is then precisely located as "leap" according to the semantics.
  • the present invention also includes:
  • a short video project file is preconfigured, and the short video project file includes a first material group and a second material group.
  • a material group can include multiple images, videos, hand-drawn files, and so on.
  • The first material group is traversed based on the semantics of the constituent words in the first part-of-speech group to obtain the corresponding first material.
  • Each material group has multiple materials, and in the process of determining the first material, it is obtained based on the semantics of the constituent vocabulary.
  • the first material may be an image, video, hand-drawn file, etc. with related elements.
  • the second material may be the appearance order of the related elements, the appearance time, the switching state, the state of the element itself, and so on.
  • the present invention also includes:
  • a storyline is generated based on the second material.
  • the present invention generates a corresponding story line through the second material.
  • the present invention also includes:
  • a word segmentation module configured to divide the plurality of constituent words into a first part-of-speech group and a second part-of-speech group based on the part-of-speech.
  • the first part-of-speech group can be nouns, pronouns, prepositions, and quantifiers.
  • the second part-of-speech group can be verbs and adjectives.
  • the first selection module is configured to select the first material through the constituent vocabulary in the first part-of-speech group.
  • Nouns, pronouns, prepositions, and quantifiers correspond to elements in the story line, and the first material may be an element in the story line at this time.
  • the second selection module is configured to select the second material through the constituent words in the second part-of-speech group.
  • Verbs and adjectives form the backbone of the story line.
  • the second material can be the appearance sequence of each element, the appearance time, the switching state and the state of the element itself.
  • a generating module configured to generate an animation draft based on the first material and the second material. After the first material and the second material are obtained, processing is performed to generate an animation draft.
  • the animation draft includes each element and the appearance sequence, appearance time, switching state, and state of the element itself, and so on.
  • the first selection module of the present invention is also used to perform the following steps, including:
  • the semantics of the constituent words in the first part-of-speech group are acquired, and the first material is selected based on the semantics of the constituent words. The corresponding material type is obtained by grouping the parts of speech, and the first material is then precisely located through the semantics of the constituent word. For example, if the constituent word is "teacher", the material type of the first material may correspond to "occupation", which includes teachers, principals, police officers, lawyers and so on; the material is then precisely located as "teacher" according to the semantics.
  • the second selection module of the present invention is also used to perform the following steps, including:
  • the semantics of the constituent words in the second part-of-speech group are acquired, and the second material is selected based on the semantics of the constituent words.
  • For example, the constituent word may be "leap".
  • In that case the material type of the corresponding second material may be "action", which includes jumping, running, hitting, leaping and so on; the material is then precisely located as "leap" according to the semantics.
  • the present invention also includes:
  • a configuration module configured to preconfigure a short video project file, where the short video project file includes a first material group and a second material group.
  • a material group can include multiple images, videos, hand-drawn files, and so on.
  • a first obtaining unit configured to traverse the first material group based on the semantics of the constituent words in the first part-of-speech group to obtain the corresponding first material.
  • Each material group has multiple materials, and in the process of determining the first material, it is obtained based on the semantics of the constituent vocabulary.
  • the first material may be an image, video, hand-drawn file, etc. with related elements.
  • the second obtaining unit is configured to traverse the second material group based on the semantics of the constituent words in the second part-of-speech group to obtain the corresponding second material.
  • Each material group has multiple materials, and in the process of determining the second material, it is obtained based on the semantics of the constituent vocabulary.
  • the second material may be the appearance order of the related elements, the appearance time, the switching state, the state of the element itself, and so on.
  • the generation module of the present invention is also used to perform the following steps, including:
  • a storyline is generated based on the second material.
  • the present invention generates a corresponding story line through the second material.
  • the present invention also provides an electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the computer program is executed in the processor to implement any one of the above methods.
  • the electronic device is a mobile terminal or a web terminal.
  • the present invention also provides a storage medium storing a computer program, and the computer program can implement any of the above methods when executed in a processor.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method and apparatus for generating an animation draft based on a text paragraph, belonging to the field of hand-drawn animation. A selected text paragraph is acquired and its semantics are analyzed; then, according to the semantics of the text paragraph, materials matching the semantics are retrieved from a material library to generate an animation draft. This achieves the technical effect of reducing users' creation costs and improving the quality of the works they create.

Description

Method and apparatus for generating an animation draft based on text paragraphs
Technical Field
The present invention belongs to the technical field of hand-drawn videos, and in particular relates to a method, apparatus, electronic device and storage medium for generating an animation draft based on text paragraphs.
Background Art
In the traditional method of creating a short video from a text paragraph, corresponding materials are manually searched for or created according to the content of the text paragraph, and the type, position, shape, order of appearance, appearance time and so on of each element in the short video are then laid out and planned before the elements are placed into a draft file. This way of creating places high demands on the creative ability of the staff, and searching for materials and setting their order of appearance and positional relationships is a time-consuming process; the method therefore not only demands a great deal of skill from creators but also requires a large amount of creation time.
In summary, the traditional approach of creating short videos from text paragraphs relies on manually searching for or creating materials according to the content of the text paragraph, and thus suffers from the drawbacks of high creation difficulty and long creation time.
Summary of the Invention
To overcome the above technical drawbacks, the present invention provides a method for generating an animation draft based on a text paragraph, the method comprising the following steps:
acquiring a selected text paragraph;
analyzing the semantics of the text paragraph;
retrieving, according to the semantics of the text paragraph, materials matching the semantics from a material library, so as to generate an animation draft.
Regarding the technical effect: in the method provided by the present invention, the user manually inputs a text paragraph, for example "withered vines, old trees, crows at dusk; a small bridge, flowing water, people's homes". Semantic analysis of the input text paragraph is performed by natural language processing software, which can identify the part of speech of each word in the input text paragraph and analyze the grammatical and logical relationships between the words. Materials in the material library are then matched according to the results of this analysis, and the basic animation information of each material is set precisely. Each material that has been matched and had its basic animation information set is then stored in an animation draft file.
Specifically, the step of analyzing the semantics of the text paragraph comprises:
parsing the constituent words of the text paragraph to obtain the parts of speech of the constituent words;
parsing the grammatical structure of the text paragraph;
analyzing and obtaining the semantics of the text paragraph according to the grammatical structure and the parts of speech.
Regarding the technical effect: semantic analysis of the input text paragraph is performed by natural language processing software, which can parse the part of speech of each word in the input text paragraph and analyze the grammatical and logical relationships between the words; materials in the material library are then matched according to the results of this analysis, and the basic animation information of each material is set precisely. Each material that has been matched and had its basic animation information set is then stored in an animation draft file, which completes the computation of the short video storyline and prepares the short video for playback.
As an improvement, the parts of speech include nouns, pronouns, quantifiers, prepositions and verbs; the nouns, pronouns and quantifiers are used to match the materials; the prepositions and the verbs are used to configure the materials.
Regarding the technical effect: the selection of the elements appearing in the storyline is determined according to the nouns, pronouns, prepositions and quantifiers in the text paragraph input by the user, while the order of appearance, appearance time and state of each element in the storyline are determined according to the verbs and adjectives of the text paragraph input by the user.
As an improvement, the grammatical structure includes a subject-linking-verb-predicative structure, a subject-predicate structure, a verb-object structure, and the like.
Regarding the technical effect: the grammatical structure of the input text paragraph may vary widely, for example fronted objects, subject-predicate inversion, postponed adverbials and so on; as long as the paragraph conforms to normal linguistic logic, the present invention can convert the input text paragraph into a hand-drawn draft for playback of a hand-drawn video.
As an improvement, the method further comprises:
dividing the plurality of constituent words into a first part-of-speech group and a second part-of-speech group based on the parts of speech;
selecting a first material through the constituent words in the first part-of-speech group;
selecting a second material through the constituent words in the second part-of-speech group;
generating an animation draft based on the first material and the second material.
Specifically, the selecting of the first material through the constituent words in the first part-of-speech group comprises:
acquiring the semantics of the constituent words in the first part-of-speech group, and selecting the first material based on the semantics of the constituent words.
Specifically, the selecting of the second material through the constituent words in the second part-of-speech group comprises:
acquiring the semantics of the constituent words in the second part-of-speech group, and selecting the second material based on the semantics of the constituent words.
As an improvement, a short video project file is preconfigured, the short video project file including a first material group and a second material group;
the first material group is traversed based on the semantics of the constituent words in the first part-of-speech group to obtain the corresponding first material;
the second material group is traversed based on the semantics of the constituent words in the second part-of-speech group to obtain the corresponding second material.
As an improvement, the method further comprises:
generating a storyline based on the second material;
filling the first material into the storyline to generate the animation draft.
The present invention further provides an application apparatus for short videos based on text paragraphs, comprising:
a text acquisition module, configured to acquire a selected text paragraph;
a semantic analysis module, configured to analyze the semantics of the text paragraph;
a retrieval module, configured to retrieve, according to the semantics of the text paragraph, materials matching the semantics from a material library, so as to generate an animation draft.
Specifically, the semantic analysis module comprises:
a word parsing unit, configured to parse the constituent words of the text paragraph to obtain the parts of speech of the constituent words;
a grammar parsing unit, configured to parse the grammatical structure of the text paragraph;
a semantic analysis unit, configured to analyze the semantics of the acquired text paragraph according to the grammatical structure and the parts of speech.
Specifically, the parts of speech include: nouns, pronouns, quantifiers, prepositions and verbs; the nouns, pronouns and quantifiers are used to match the materials; the prepositions and the verbs are used to configure the materials.
Specifically, the grammatical structures handled by the grammar parsing unit include: the subject-linking-verb-predicative structure, the subject-predicate structure and the verb-object structure.
As an improvement, the apparatus further comprises:
a word segmentation module, configured to divide the plurality of constituent words into a first part-of-speech group and a second part-of-speech group based on the parts of speech;
a first selection module, configured to select a first material through the constituent words in the first part-of-speech group;
a second selection module, configured to select a second material through the constituent words in the second part-of-speech group;
a generation module, configured to generate an animation draft based on the first material and the second material.
Specifically, the first selection module is further configured to perform the following steps, including:
acquiring the semantics of the constituent words in the first part-of-speech group, and selecting the first material based on the semantics of the constituent words.
Specifically, the second selection module is further configured to perform the following steps, including:
acquiring the semantics of the constituent words in the second part-of-speech group, and selecting the second material based on the semantics of the constituent words.
As an improvement, the apparatus further comprises:
a configuration module, configured to preconfigure a short video project file, the short video project file including a first material group and a second material group;
a first acquisition unit, configured to traverse the first material group based on the semantics of the constituent words in the first part-of-speech group to obtain the corresponding first material;
a second acquisition unit, configured to traverse the second material group based on the semantics of the constituent words in the second part-of-speech group to obtain the corresponding second material.
Specifically, the generation module is further configured to perform the following steps, including:
generating a storyline based on the second material;
filling the first material into the storyline to generate the animation draft.
The present invention further provides an electronic device, comprising a memory and a processor, the memory storing a computer program which, when executed in the processor, can implement any one of the above methods. The electronic device is a mobile terminal or a web terminal.
The present invention further provides a storage medium storing a computer program which, when executed in a processor, can implement any one of the above methods.
Brief Description of the Drawings
FIG. 1 is a flowchart of a method for generating an animation draft based on a text paragraph;
FIG. 2 is a flowchart of a method for analyzing the semantics of a text paragraph;
FIG. 3 is a flowchart of a method for matching and configuring materials according to parts of speech provided by an embodiment;
FIG. 4 is an architecture diagram of a grammatical analysis method provided by an embodiment;
FIG. 5 is an architecture diagram of an application apparatus for short videos based on text paragraphs provided by an embodiment;
FIG. 6 is an architecture diagram of a semantic analysis module provided by an embodiment.
Detailed Description of the Embodiments
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that, in the description of the present invention, unless otherwise explicitly specified and defined, the term "storage medium" may be any of various media capable of storing a computer program, such as a ROM, a RAM, a magnetic disk or an optical disk. The term "processor" may be a chip or circuit with data processing capability, such as a CPLD (Complex Programmable Logic Device), an FPGA (Field-Programmable Gate Array), an MCU (Microcontroller Unit), a PLC (Programmable Logic Controller) or a CPU (Central Processing Unit). The term "electronic device" may be any device with data processing and storage capabilities, and generally includes fixed terminals and mobile terminals: fixed terminals such as desktop computers, and mobile terminals such as mobile phones, tablets and mobile robots. In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict.
In the following, some preferred embodiments of the present invention are presented to teach those skilled in the art how to implement it.
Embodiment 1
Referring to FIG. 1, this embodiment provides a method for generating an animation draft based on a text paragraph, comprising the following steps:
S1: acquiring a selected text paragraph;
S2: analyzing the semantics of the text paragraph;
S3: retrieving, according to the semantics of the text paragraph, materials matching the semantics from a material library, so as to generate an animation draft;
S4: generating the animation draft.
Regarding the technical effect: in the method provided by the present invention, the user manually inputs a text paragraph, for example "withered vines, old trees, crows at dusk; a small bridge, flowing water, people's homes". Semantic analysis of the input text paragraph is performed by natural language processing (NLP) software, which can identify the part of speech of each word in the input text paragraph and analyze the grammatical and logical relationships between the words. Materials in the material library are then matched according to the results of this analysis, and the basic animation information of each material is set precisely. Each material that has been matched and had its basic animation information set is then stored in an animation draft file.
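To make the flow of steps S1 to S4 concrete, the following minimal Python sketch strings them together under stated assumptions: a toy dictionary-based part-of-speech tagger stands in for the NLP software, and MATERIAL_LIBRARY, POS_LEXICON and the JSON draft layout are illustrative names invented for this sketch rather than anything specified by the patent.

```python
# Illustrative sketch only: the lexicon and library below are placeholders
# for the NLP software and material library described in the patent.
import json

MATERIAL_LIBRARY = {                         # semantic keyword -> material asset
    "vine": "assets/withered_vine.svg",
    "tree": "assets/old_tree.svg",
    "crow": "assets/crow.svg",
    "bridge": "assets/small_bridge.svg",
}

POS_LEXICON = {"vine": "noun", "tree": "noun", "crow": "noun",
               "bridge": "noun", "over": "preposition", "flies": "verb"}

def generate_animation_draft(paragraph: str, path: str) -> None:
    words = paragraph.lower().replace(",", " ").split()            # S1: selected text paragraph
    tagged = [(w, POS_LEXICON.get(w, "unknown")) for w in words]   # S2: part-of-speech analysis
    nouns = [w for w, pos in tagged if pos == "noun"]
    draft = []
    for order, word in enumerate(nouns):
        material = MATERIAL_LIBRARY.get(word)                      # S3: retrieve matching material
        if material:
            draft.append({"element": word, "material": material,
                          "appearance_order": order,
                          "appearance_time": float(order)})        # basic animation information
    with open(path, "w", encoding="utf-8") as f:                   # S4: store the animation draft
        json.dump(draft, f, indent=2)

generate_animation_draft("crow flies over tree, bridge", "draft.json")
```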
Embodiment 2
Referring to FIG. 2, further, this embodiment provides a method for analyzing the semantics of a text paragraph, comprising the steps of:
F1: parsing the constituent words of the text paragraph;
F2: obtaining the parts of speech of the constituent words;
F3: parsing the grammatical structure of the text paragraph;
F4: obtaining the semantics of the text paragraph according to the obtained parts of speech and grammatical structure.
Regarding the technical effect: semantic analysis of the input text paragraph is performed by natural language processing (NLP) software, which can parse the part of speech of each word in the input text paragraph and analyze the grammatical and logical relationships between the words; materials in the material library are then matched according to the results of this analysis, and the basic animation information of each material is set precisely. Each material that has been matched and had its basic animation information set is then stored in an animation draft file, which completes the computation of the short video storyline and prepares the short video for playback.
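The following short Python sketch illustrates one possible reading of steps F1 to F4 under stated assumptions: the POS_LEXICON, the coarse rules for recognizing the subject-linking-verb-predicative, subject-predicate and verb-object structures, and the ParagraphSemantics record are illustrative stand-ins for the NLP software, not the patent's actual implementation.

```python
# Illustrative sketch: a toy tagger and rule set stand in for real NLP software.
from dataclasses import dataclass, field
from typing import List, Tuple

POS_LEXICON = {"teacher": "noun", "book": "noun", "a": "quantifier",
               "reads": "verb", "is": "linking_verb", "kind": "adjective"}

@dataclass
class ParagraphSemantics:
    tokens: List[Tuple[str, str]] = field(default_factory=list)  # (word, part of speech)
    grammar: str = "unknown"

def analyze_semantics(paragraph: str) -> ParagraphSemantics:
    words = paragraph.lower().rstrip(".").split()                  # F1: constituent words
    tokens = [(w, POS_LEXICON.get(w, "unknown")) for w in words]   # F2: parts of speech
    pos_seq = [pos for _, pos in tokens]
    if "linking_verb" in pos_seq:                                  # F3: coarse grammatical structure
        grammar = "subject-linking-verb-predicative"
    elif pos_seq.count("noun") >= 2 and "verb" in pos_seq:
        grammar = "verb-object"
    elif "verb" in pos_seq:
        grammar = "subject-predicate"
    else:
        grammar = "unknown"
    return ParagraphSemantics(tokens=tokens, grammar=grammar)      # F4: combined semantics

print(analyze_semantics("The teacher reads a book."))
print(analyze_semantics("The teacher is kind."))
```

In practice the rule set would be replaced by a full grammatical parser; the sketch only shows how parts of speech and a grammatical-structure label can be combined into a single semantics record.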
Embodiment 3
Referring to FIG. 3, further, this embodiment provides a method for matching and configuring materials according to parts of speech, comprising the steps of:
S30: retrieving the nouns, pronouns and quantifiers in the text paragraph;
S31: using the nouns, pronouns and quantifiers in the text paragraph to match materials in the material library;
S32: retrieving the prepositions and verbs in the text paragraph;
S33: using the prepositions and verbs in the text paragraph to configure the materials in the material library.
Regarding the technical effect: the selection of the elements appearing in the storyline is determined according to the nouns, pronouns, prepositions and quantifiers in the text paragraph input by the user, while the order of appearance, appearance time and state of each element in the storyline are determined according to the verbs and adjectives of the text paragraph input by the user.
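As a non-authoritative illustration of steps S30 to S33, the sketch below separates the matching parts of speech from the configuring parts of speech; the MATERIAL_LIBRARY and ACTION_EFFECTS tables and the returned storyline layout are assumptions made for this example only.

```python
# Illustrative sketch: nouns/pronouns/quantifiers pick elements, prepositions/verbs configure them.
MATCHING_POS = {"noun", "pronoun", "quantifier"}
CONFIGURING_POS = {"preposition", "verb"}

MATERIAL_LIBRARY = {"crow": "assets/crow.svg", "tree": "assets/old_tree.svg"}
ACTION_EFFECTS = {"flies": {"animation": "fly_in"}, "over": {"layer": "above"}}

def build_storyline(tagged_words):
    elements, configuration = [], {}
    for order, (word, pos) in enumerate(tagged_words):
        if pos in MATCHING_POS and word in MATERIAL_LIBRARY:     # S30/S31: match materials
            elements.append({"element": word,
                             "material": MATERIAL_LIBRARY[word],
                             "appearance_order": order})
        elif pos in CONFIGURING_POS:                             # S32/S33: configure materials
            configuration.update(ACTION_EFFECTS.get(word, {}))
    return {"elements": elements, "configuration": configuration}

print(build_storyline([("crow", "noun"), ("flies", "verb"),
                       ("over", "preposition"), ("tree", "noun")]))
```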
Embodiment 4
Referring to FIG. 4, further, this embodiment provides a method for grammatical analysis, comprising the steps of:
F30: a grammatical analysis unit;
F31: the grammar is classified into the subject-linking-verb-predicative structure, the subject-predicate structure and the verb-object structure.
Regarding the technical effect: the grammatical structure of the input text paragraph may vary widely, for example fronted objects, subject-predicate inversion, postponed adverbials and so on; as long as the paragraph conforms to normal linguistic logic, the present invention can convert the input text paragraph into a hand-drawn draft for playback of a hand-drawn video.
Embodiment 5
Referring to FIG. 5, this embodiment further provides an application apparatus for short videos based on text paragraphs, comprising:
1. a text acquisition module, configured to acquire a selected text paragraph;
2. a semantic analysis module, configured to analyze the semantics of the text paragraph;
3. a retrieval module, configured to retrieve, according to the semantics of the text paragraph, materials matching the semantics from a material library, so as to generate an animation draft;
4. generating the animation draft.
Embodiment 6
Referring to FIG. 6, further, this embodiment also provides a semantic analysis module, comprising:
20. a word parsing unit, configured to parse the constituent words of the text paragraph to obtain the parts of speech of the constituent words;
21. a grammar parsing unit, configured to parse the grammatical structure of the text paragraph;
22. a semantic analysis unit, configured to analyze the semantics of the acquired text paragraph according to the grammatical structure and the parts of speech.
Embodiment 7
The present invention further includes: dividing the plurality of constituent words into a first part-of-speech group and a second part-of-speech group based on the parts of speech. The first part-of-speech group may be nouns, pronouns, prepositions and quantifiers. The second part-of-speech group may be verbs and adjectives.
A first material is selected through the constituent words in the first part-of-speech group. The nouns, pronouns, prepositions and quantifiers correspond to the elements in the storyline, and in this case the first material may be an element in the storyline.
A second material is selected through the constituent words in the second part-of-speech group. The verbs and adjectives form the backbone of the storyline, and in this case the second material may be the order of appearance of each element, the appearance time, the switching state and the state of the element itself.
An animation draft is generated based on the first material and the second material. After the first material and the second material are obtained, they are processed to generate an animation draft, and the animation draft includes each element as well as the order of appearance, appearance time, switching state and state of each element, and so on.
In the present invention, the selecting of the first material through the constituent words in the first part-of-speech group includes:
acquiring the semantics of the constituent words in the first part-of-speech group, and selecting the first material based on the semantics of the constituent words. The corresponding material category is obtained by grouping the parts of speech, and the first material is then precisely located through the semantics of the constituent word. For example, if the constituent word is "teacher", the material category of the first material may correspond to "occupation", which includes teachers, principals, police officers, lawyers and so on, and the material is then precisely located as "teacher" according to the semantics.
The selecting of the second material through the constituent words in the second part-of-speech group includes:
acquiring the semantics of the constituent words in the second part-of-speech group, and selecting the second material based on the semantics of the constituent words. The corresponding material category is obtained by grouping the parts of speech, and the second material is then precisely located through the semantics of the constituent word. For example, if the constituent word is "leap", the material category of the corresponding second material may be "action", which includes jumping, running, hitting, leaping and so on, and the material is then precisely located as "leap" according to the semantics.
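The "teacher" and "leap" examples above can be pictured with the following sketch, in which a material category is first reached through the part-of-speech group and the exact material is then located by the word's semantics; the FIRST_MATERIAL_GROUPS and SECOND_MATERIAL_GROUPS tables are invented placeholders for the material groups of the project file.

```python
# Illustrative sketch: category tables below are placeholders, not patent content.
FIRST_MATERIAL_GROUPS = {            # categories reached via the first part-of-speech group
    "occupation": {"teacher": "assets/teacher.svg", "principal": "assets/principal.svg",
                   "police": "assets/police.svg", "lawyer": "assets/lawyer.svg"},
}
SECOND_MATERIAL_GROUPS = {           # categories reached via the second part-of-speech group
    "action": {"jump": "anim/jump.json", "run": "anim/run.json",
               "hit": "anim/hit.json", "leap": "anim/leap.json"},
}

def select_material(word: str, groups: dict):
    for category, members in groups.items():
        if word in members:                      # locate the category, then the exact material
            return category, members[word]
    return None, None

print(select_material("teacher", FIRST_MATERIAL_GROUPS))   # ('occupation', 'assets/teacher.svg')
print(select_material("leap", SECOND_MATERIAL_GROUPS))     # ('action', 'anim/leap.json')
```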
The present invention further includes:
preconfiguring a short video project file, the short video project file including a first material group and a second material group. A material group may include multiple images, videos, hand-drawn files and so on.
The first material group is traversed based on the semantics of the constituent words in the first part-of-speech group to obtain the corresponding first material. Each material group contains multiple materials, and in the process of determining the first material it is obtained based on the semantics of the constituent words. The first material may be an image, a video, a hand-drawn file or the like containing the relevant element.
The second material group is traversed based on the semantics of the constituent words in the second part-of-speech group to obtain the corresponding second material. Each material group contains multiple materials, and in the process of determining the second material it is obtained based on the semantics of the constituent words. The second material may be the order of appearance of the relevant elements, the appearance time, the switching state, the state of the element itself and so on.
The present invention further includes:
generating a storyline based on the second material. The present invention generates the corresponding storyline through the second material.
Filling the first material into the storyline to generate the animation draft. The elements of the first material are filled in according to the generated storyline. This way of generating an animation draft has the advantages of high speed and high efficiency.
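A minimal sketch of these last two steps follows, assuming the second material carries appearance information and the first material maps element names to assets; the field names appearance_time and animation are illustrative only and are not taken from the patent.

```python
# Illustrative sketch: build the storyline from the second material, then fill in the first material.
def generate_storyline(second_materials):
    # order scenes purely by the appearance information carried by the second material
    return sorted(second_materials, key=lambda m: m["appearance_time"])

def fill_storyline(storyline, first_materials):
    draft = []
    for scene in storyline:
        element = first_materials.get(scene["element"])          # fill in the matching element
        if element is not None:
            draft.append({**scene, "material": element})
    return draft

first = {"crow": "assets/crow.svg", "tree": "assets/old_tree.svg"}
second = [{"element": "tree", "appearance_time": 0.0, "animation": "draw_in"},
          {"element": "crow", "appearance_time": 1.5, "animation": "fly_in"}]

print(fill_storyline(generate_storyline(second), first))
```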
Embodiment 7
The present invention further includes:
a word segmentation module, configured to divide the plurality of constituent words into a first part-of-speech group and a second part-of-speech group based on the parts of speech. The first part-of-speech group may be nouns, pronouns, prepositions and quantifiers. The second part-of-speech group may be verbs and adjectives.
a first selection module, configured to select a first material through the constituent words in the first part-of-speech group. The nouns, pronouns, prepositions and quantifiers correspond to the elements in the storyline, and in this case the first material may be an element in the storyline.
a second selection module, configured to select a second material through the constituent words in the second part-of-speech group. The verbs and adjectives form the backbone of the storyline, and in this case the second material may be the order of appearance of each element, the appearance time, the switching state and the state of the element itself.
a generation module, configured to generate an animation draft based on the first material and the second material. After the first material and the second material are obtained, they are processed to generate an animation draft, and the animation draft includes each element as well as the order of appearance, appearance time, switching state and state of each element, and so on.
The first selection module of the present invention is further configured to perform the following steps, including:
acquiring the semantics of the constituent words in the first part-of-speech group, and selecting the first material based on the semantics of the constituent words. The corresponding material category is obtained by grouping the parts of speech, and the first material is then precisely located through the semantics of the constituent word. For example, if the constituent word is "teacher", the material category of the first material may correspond to "occupation", which includes teachers, principals, police officers, lawyers and so on, and the material is then precisely located as "teacher" according to the semantics.
The second selection module of the present invention is further configured to perform the following steps, including:
acquiring the semantics of the constituent words in the second part-of-speech group, and selecting the second material based on the semantics of the constituent words. The corresponding material category is obtained by grouping the parts of speech, and the second material is then precisely located through the semantics of the constituent word. For example, if the constituent word is "leap", the material category of the corresponding second material may be "action", which includes jumping, running, hitting, leaping and so on, and the material is then precisely located as "leap" according to the semantics.
The present invention further includes:
a configuration module, configured to preconfigure a short video project file, the short video project file including a first material group and a second material group. A material group may include multiple images, videos, hand-drawn files and so on.
a first acquisition unit, configured to traverse the first material group based on the semantics of the constituent words in the first part-of-speech group to obtain the corresponding first material. Each material group contains multiple materials, and in the process of determining the first material it is obtained based on the semantics of the constituent words. The first material may be an image, a video, a hand-drawn file or the like containing the relevant element.
a second acquisition unit, configured to traverse the second material group based on the semantics of the constituent words in the second part-of-speech group to obtain the corresponding second material. Each material group contains multiple materials, and in the process of determining the second material it is obtained based on the semantics of the constituent words. The second material may be the order of appearance of the relevant elements, the appearance time, the switching state, the state of the element itself and so on.
The generation module of the present invention is further configured to perform the following steps, including:
generating a storyline based on the second material. The present invention generates the corresponding storyline through the second material.
Filling the first material into the storyline to generate the animation draft. This way of generating an animation draft has the advantages of high speed and high efficiency.
The present invention further provides an electronic device, comprising a memory and a processor, the memory storing a computer program which, when executed in the processor, can implement any one of the above methods. The electronic device is a mobile terminal or a web terminal.
The present invention further provides a storage medium storing a computer program which, when executed in a processor, can implement any one of the above methods.
The above are merely preferred embodiments of the present invention and are not intended to limit the present invention; any modifications, equivalent replacements, improvements and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (20)

  1. A method for generating an animation draft based on a text paragraph, characterized by comprising the steps of:
    acquiring a selected text paragraph;
    analyzing the semantics of the text paragraph;
    retrieving, according to the semantics of the text paragraph, materials matching the semantics from a material library, so as to generate an animation draft.
  2. The method according to claim 1, characterized in that the step of analyzing the semantics of the text paragraph comprises:
    parsing the constituent words of the text paragraph to obtain the parts of speech of the constituent words;
    parsing the grammatical structure of the text paragraph;
    analyzing and obtaining the semantics of the text paragraph according to the grammatical structure and the parts of speech.
  3. The method according to claim 2, characterized in that the parts of speech include nouns, pronouns, quantifiers, prepositions and verbs; the nouns, pronouns and quantifiers are used to match the materials; and the prepositions and the verbs are used to configure the materials.
  4. The method according to claim 2, characterized in that the grammatical structure includes: a subject-linking-verb-predicative structure, a subject-predicate structure and a verb-object structure.
  5. An apparatus for generating an animation draft based on a text paragraph, characterized by comprising:
    a text acquisition module, configured to acquire a selected text paragraph;
    a semantic analysis module, configured to analyze the semantics of the text paragraph;
    a retrieval module, configured to retrieve, according to the semantics of the text paragraph, materials matching the semantics from a material library, so as to generate an animation draft.
  6. The apparatus according to claim 5, characterized in that the semantic analysis module comprises:
    a word parsing unit, configured to parse the constituent words of the text paragraph to obtain the parts of speech of the constituent words;
    a grammar parsing unit, configured to parse the grammatical structure of the text paragraph;
    a semantic analysis unit, configured to analyze the semantics of the acquired text paragraph according to the grammatical structure and the parts of speech.
  7. The apparatus according to claim 6, characterized in that the parts of speech include: nouns, pronouns, quantifiers, prepositions and verbs; the nouns, pronouns and quantifiers are used to match the materials; and the prepositions and the verbs are used to configure the materials.
  8. The apparatus according to claim 6, characterized in that the grammatical structures handled by the grammar parsing unit include: a subject-linking-verb-predicative structure, a subject-predicate structure and a verb-object structure.
  9. The method according to claim 2, characterized by further comprising:
    dividing the plurality of constituent words into a first part-of-speech group and a second part-of-speech group based on the parts of speech;
    selecting a first material through the constituent words in the first part-of-speech group;
    selecting a second material through the constituent words in the second part-of-speech group;
    generating an animation draft based on the first material and the second material.
  10. The method according to claim 9, characterized in that
    the selecting of the first material through the constituent words in the first part-of-speech group comprises:
    acquiring the semantics of the constituent words in the first part-of-speech group, and selecting the first material based on the semantics of the constituent words.
  11. The method according to claim 10, characterized in that
    the selecting of the second material through the constituent words in the second part-of-speech group comprises:
    acquiring the semantics of the constituent words in the second part-of-speech group, and selecting the second material based on the semantics of the constituent words.
  12. The method according to claim 11, characterized by further comprising:
    preconfiguring a short video project file, the short video project file including a first material group and a second material group;
    traversing the first material group based on the semantics of the constituent words in the first part-of-speech group to obtain the corresponding first material;
    traversing the second material group based on the semantics of the constituent words in the second part-of-speech group to obtain the corresponding second material.
  13. The method according to claim 12, characterized by further comprising:
    generating a storyline based on the second material;
    filling the first material into the storyline to generate the animation draft.
  14. The apparatus according to claim 5, characterized by further comprising:
    a word segmentation module, configured to divide the plurality of constituent words into a first part-of-speech group and a second part-of-speech group based on the parts of speech;
    a first selection module, configured to select a first material through the constituent words in the first part-of-speech group;
    a second selection module, configured to select a second material through the constituent words in the second part-of-speech group;
    a generation module, configured to generate an animation draft based on the first material and the second material.
  15. The apparatus according to claim 14, characterized in that
    the first selection module is further configured to perform the following steps, including:
    acquiring the semantics of the constituent words in the first part-of-speech group, and selecting the first material based on the semantics of the constituent words.
  16. The apparatus according to claim 15, characterized in that
    the second selection module is further configured to perform the following steps, including:
    acquiring the semantics of the constituent words in the second part-of-speech group, and selecting the second material based on the semantics of the constituent words.
  17. The apparatus according to claim 16, characterized by further comprising:
    a configuration module, configured to preconfigure a short video project file, the short video project file including a first material group and a second material group;
    a first acquisition unit, configured to traverse the first material group based on the semantics of the constituent words in the first part-of-speech group to obtain the corresponding first material;
    a second acquisition unit, configured to traverse the second material group based on the semantics of the constituent words in the second part-of-speech group to obtain the corresponding second material.
  18. The apparatus according to claim 17, characterized in that the generation module is further configured to perform the following steps, including:
    generating a storyline based on the second material;
    filling the first material into the storyline to generate the animation draft.
  19. An electronic device, comprising a memory and a processor, the memory storing a computer program, characterized in that the computer program, when executed in the processor, can implement the method according to any one of claims 1-4 and 9-13.
  20. A storage medium storing a computer program, characterized in that the computer program, when executed in a processor, can implement the method according to any one of claims 1-4 and 9-13.
PCT/CN2021/098990 2020-10-20 2021-06-08 Method and apparatus for generating an animation draft based on text paragraphs WO2022083132A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011126969.5A CN112270197A (zh) 2020-10-20 2020-10-20 Method and apparatus for generating an animation draft based on text paragraphs
CN202011126969.5 2020-10-20

Publications (1)

Publication Number Publication Date
WO2022083132A1 true WO2022083132A1 (zh) 2022-04-28

Family

ID=74341593

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/098990 WO2022083132A1 (zh) 2020-10-20 2021-06-08 Method and apparatus for generating an animation draft based on text paragraphs

Country Status (2)

Country Link
CN (1) CN112270197A (zh)
WO (1) WO2022083132A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112270197A (zh) * 2020-10-20 2021-01-26 深圳市前海手绘科技文化有限公司 Method and apparatus for generating an animation draft based on text paragraphs
CN113269855A (zh) * 2021-06-08 2021-08-17 哈尔滨森美朴科技发展有限责任公司 Method, device and storage medium for converting textual semantics into scene animation

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1447265A (zh) * 2002-03-25 2003-10-08 白涛 Basic method for converting textual sentences into corresponding animated cartoons
US20160042058A1 (en) * 2014-08-08 2016-02-11 Cuong Duc Nguyen Processing Natural-Language Documents and Queries
CN106294666A (zh) * 2016-08-04 2017-01-04 上海汽笛生网络科技有限公司 Method for realizing visualized dynamic display of text
CN112270197A (zh) * 2020-10-20 2021-01-26 深圳市前海手绘科技文化有限公司 Method and apparatus for generating an animation draft based on text paragraphs

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101348282B1 (ko) * 2011-11-23 2014-01-10 동국대학교 산학협력단 Method and apparatus for generating animation from text

Also Published As

Publication number Publication date
CN112270197A (zh) 2021-01-26

Similar Documents

Publication Publication Date Title
US20210103607A1 (en) Generating three-dimensional scenes from natural language requests
CN107798123B (zh) 知识库及其建立、修改、智能问答方法、装置及设备
US10860797B2 (en) Generating summaries and insights from meeting recordings
JP4215792B2 (ja) 会議支援装置、会議支援方法および会議支援プログラム
WO2022083132A1 (zh) 一种基于文字段落的动画草稿生成方法与装置
US10942953B2 (en) Generating summaries and insights from meeting recordings
CN106610990B (zh) 情感倾向性分析的方法及装置
CN110377745B (zh) 信息处理方法、信息检索方法、装置及服务器
WO2024103609A1 (zh) 一种对话模型的训练方法及装置、对话响应方法及装置
CN108563731A (zh) 一种情感分类方法及装置
US11361759B2 (en) Methods and systems for automatic generation and convergence of keywords and/or keyphrases from a media
CN110489559A (zh) 一种文本分类方法、装置及存储介质
CN113360598A (zh) 基于人工智能的匹配方法、装置、电子设备及存储介质
CN110889266A (zh) 一种会议记录整合方法和装置
JP6885506B2 (ja) 応答処理プログラム、応答処理方法、応答処理装置および応答処理システム
US20220188525A1 (en) Dynamic, real-time collaboration enhancement
WO2023169301A1 (zh) 一种文本处理方法、装置及电子设备
CN109977197B (zh) 一种电子习题的处理方法、装置、设备和存储介质
EP4187463A1 (en) An artificial intelligence powered digital meeting assistant
CN110147358B (zh) 自动问答知识库的建设方法及建设系统
Asadi et al. Quester: A Speech-Based Question Answering Support System for Oral Presentations
CN114417827A (zh) 文本上下文处理方法、装置、电子设备和存储介质
CN110532391A (zh) 一种文本词性标注的方法及装置
CN110750989A (zh) 一种语句分析的方法及装置
CN109947908A (zh) 机器人知识库的建设方法及建设系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21881555

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 14/06/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21881555

Country of ref document: EP

Kind code of ref document: A1