CN114707019A - Information processing method and device for reading - Google Patents


Info

Publication number
CN114707019A
Authority
CN
China
Prior art keywords
semantic information
video
model
information
semantic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210325465.9A
Other languages
Chinese (zh)
Inventor
张量
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Hugo Online Technology Co ltd
Original Assignee
Beijing Hugo Online Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Hugo Online Technology Co ltd filed Critical Beijing Hugo Online Technology Co ltd
Priority to CN202210325465.9A priority Critical patent/CN114707019A/en
Publication of CN114707019A publication Critical patent/CN114707019A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval of video data
    • G06F16/73 Querying
    • G06F16/732 Query formulation
    • G06F16/738 Presentation of query results
    • G06F16/739 Presentation of query results in form of a video summary, e.g. the video summary being a video sequence, a composite still image or having synthesized frames
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present disclosure provide an information processing method and apparatus for reading. The method comprises: after text content of a document to be read is obtained, performing semantic recognition on the text content to obtain semantic information; determining, based on the semantic information, a set of target 3D models matching the semantic information from a pre-established model library; and establishing associations among the set of target 3D models to obtain video content corresponding to the text content. By turning text-based reading into video-based reading, the method improves the reading rate and reading efficiency, thereby solving the technical problem in the related art that the reading rate and reading efficiency are low.

Description

Information processing method and device for reading
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to an information processing method and apparatus for reading.
Background
Existing books are usually presented in the form of text.
In the related art, text-based reading requires the reader to have a certain level of comprehension, placing high demands on the reader; it also requires a large investment of time, so reading efficiency is not high.
Disclosure of Invention
The main purpose of the present disclosure is to provide an information processing method and apparatus for reading.
In order to achieve the above object, according to a first aspect of the present disclosure, there is provided an information processing method for reading, comprising: after text content of a document to be read is obtained, performing semantic recognition on the text content to obtain semantic information; determining, based on the semantic information, a set of target 3D models matching the semantic information from a pre-established model library; and establishing associations among the set of target 3D models to obtain video content corresponding to the text content.
Optionally, the method further comprises: matching, based on the semantic information, a video segment conforming to the semantic information from a library storing video materials; and if no video segment matches the semantic information, determining, based on the semantic information, a set of target 3D models matching the semantic information from the pre-established model library.
Optionally, determining, based on the semantic information, a set of target 3D models matching the semantic information from the pre-established model library comprises: calculating a value representing the correlation between the semantic information and preset information of each 3D model in the model library; and determining a 3D model whose value falls within a preset confidence interval as a matching model, wherein a preset confidence interval is set between the semantic information and each 3D model.
Optionally, the method further comprises: generating narration text conforming to the text content based on the semantic information; and matching the narration text with the video content to obtain audio content corresponding to the text content.
Optionally, the method further comprises: sorting the video content in chapter order.
According to a second aspect of the present disclosure, there is provided an information processing apparatus for reading, comprising: a semantic recognition unit configured to perform semantic recognition on text content of a document to be read after the text content is obtained, so as to obtain semantic information; a matching unit configured to determine, based on the semantic information, a set of target 3D models matching the semantic information from a pre-established model library; and a video establishing unit configured to establish associations among the set of target 3D models to obtain video content corresponding to the text content.
Optionally, the apparatus further comprises: a video matching unit configured to match, based on the semantic information, a video clip conforming to the semantic information from a library storing video materials; and if no video segment matches the semantic information, a set of target 3D models matching the semantic information is determined from the pre-established model library based on the semantic information.
Optionally, the matching unit is further configured to: calculate a value representing the correlation between the semantic information and preset information of each 3D model in the model library; and determine a 3D model whose value falls within a preset confidence interval as a matching model, wherein a preset confidence interval is set between the semantic information and each 3D model.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium storing computer instructions for causing a computer to execute the information processing method for reading according to any one of the implementation manners of the first aspect.
An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to cause the at least one processor to perform the information processing method for reading according to any one of the implementations of the first aspect.
The information processing method and apparatus for reading in the embodiments of the present disclosure comprise: after text content of a document to be read is obtained, performing semantic recognition on the text content to obtain semantic information; determining, based on the semantic information, a set of target 3D models matching the semantic information from a pre-established model library; and establishing associations among the set of target 3D models to obtain video content corresponding to the text content. By turning text-based reading into video-based reading, the reading rate and reading efficiency are improved, solving the technical problem in the related art that the reading rate and reading efficiency are low.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present disclosure, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of an information processing method for reading according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an electronic device according to an embodiment of the present disclosure;
fig. 3 is an application scenario diagram of an information processing method for reading according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those skilled in the art, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by those skilled in the art from the disclosed embodiments without creative effort shall fall within the protection scope of the present disclosure.
It should be noted that the terms "first", "second" and the like in the description and claims of the present disclosure and in the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular sequence or chronological order. It should be understood that data so termed may be interchanged where appropriate, so that the embodiments of the present disclosure can be practiced in orders other than those illustrated or described herein. Furthermore, the terms "comprise", "include" and "have", and any variations thereof, are intended to cover a non-exclusive inclusion: a process, method, system, article or apparatus that comprises a list of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to it.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
According to an embodiment of the present disclosure, there is provided an information processing method for reading, as shown in fig. 1, the method including steps 101 to 103 as follows:
step 101: after the text content of the document to be read is obtained, semantic recognition is carried out on the text content to obtain semantic information.
In this embodiment, the document to be read may be an electronic document or a paper document. When it is a paper document, it may be converted into an electronic document by a scanning device, for example a point-reading device or a scanner.
Semantic recognition of the text content of the document to be read may be performed by a pre-established semantic recognition model to obtain the semantics of the text. The semantics may be recognized paragraph by paragraph or sentence by sentence; to improve the accuracy of semantic recognition and the degree of matching between the generated video and the text content, the semantics may also be obtained by combining sentence-level and paragraph-level recognition.
The semantic information may be used to match 3D models and/or video segments. One piece of semantic information may comprise one or more recognition objects, which may likewise be used to match 3D models and/or video segments; a recognition object may be an object described by the text.
Semantic recognition may build on word-level recognition of part of speech, proper nouns, word importance, synonyms and the like, then analyze syntactic structure, topic models, knowledge graphs and word vectors at the sentence level, and finally complete the semantic recognition.
For example, consider the passage: "If you strolled through Africa 2 million years ago, you might well have seen a group of creatures very similar to humans: some mothers cuddling their babies while fetching back children who had wandered off to play; some youths chafing against the many norms of society; some weary elders who only want peace and quiet; a muscular male thumping his chest, hoping to win the favor of the beauty beside him; and wise old elders who have seen it all." Paragraph-level semantic recognition of this passage may yield a scene of early human life in Africa; however, if a model or video clip is matched on that basis alone, the accuracy of the match may not be high. For example, a scene of life in Africa may be matched, yet none of the details described in the text are present.
In order to improve the matching between the video and the text, semantic recognition can be performed sentence by sentence to obtain semantic information. For example, from the sentence "some mothers cuddle their babies while fetching back children who have wandered off to play", the semantics obtained may be that an African mother cuddles her baby while fetching back her child. For another example, from "a muscular male thumps his chest, hoping to win the favor of the beauty beside him", the semantics recognized may be that a muscular African male performs a chest-beating action toward the beauty beside him.
More accurate semantics can be obtained by combining paragraph-level and sentence-level semantic analysis.
Further, in order to improve the precision of matching the document's text content with models and/or video clips, semantic recognition may be combined with key information (such as keywords and/or key phrases), sentiment analysis, forbidden-word analysis, text-related expression analysis, and positive/negative polarity analysis, so as to improve recognition precision.
For example, sentiment analysis can improve the accuracy of semantic recognition. Take the sentence "the mother makes her child laugh": its sentiment is positive, so a model or video clip of a mother holding her child may be matched; had the sentiment been negative, a model or video clip of a mother beating her child might be matched instead.
For another example, recognition accuracy can be further improved by combining the recognized semantics with keywords and/or key phrases. The semantics obtained from "some mothers cuddle their babies while fetching back children who have wandered off to play" is that an African mother cuddles her baby while fetching back her child; combining this with the keywords "mother", "cuddle" and "fetch" yields more precise semantic information.
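As an illustrative sketch only (not part of the original disclosure), the two-granularity recognition described above, paragraph-level semantics refined by sentence-level semantics and keyword hits, could be organized like the following; the function and field names and the keyword lexicon are all hypothetical:

```python
import re

def extract_semantics(paragraph, keyword_lexicon):
    """Toy two-granularity semantic extraction: one coarse record for the
    whole paragraph plus one fine record per sentence, each tagged with
    the lexicon keywords it contains (a stand-in for real NLP)."""
    sentences = [s.strip() for s in re.split(r"[.;]", paragraph) if s.strip()]

    def hits(text):
        # Keyword matching preserves lexicon order.
        return [k for k in keyword_lexicon if k in text.lower()]

    return {
        "paragraph": {"text": paragraph, "keywords": hits(paragraph)},
        "sentences": [{"text": s, "keywords": hits(s)} for s in sentences],
    }

record = extract_semantics(
    "A mother cuddles her baby; a chest-thumping male tries to impress.",
    ["mother", "baby", "cuddle", "chest"],
)
```

Combining the coarse paragraph record (scene context) with the fine sentence records (concrete details) mirrors the paragraph-plus-sentence analysis the description proposes.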
Step 102: based on the semantic information, a set of target 3D models matching the semantic information is determined from a pre-established model library.
In this embodiment, the 3D model may be a 3D animation model stored in a 3D animation model library. The animation model library may be established in advance by 3D-modeling various scenes (including, but not limited to, scenes involving relationships between characters, natural environments in different geographical locations, scenes among historical figures, war scenes, and any scene occurring in real life) and objects (including, but not limited to, physical objects), for example a scene of a mother holding a child, a mountain model, a sea model, a lawn model, and so on.
Further, each 3D model may have associated description information describing the model, including but not limited to a description of the scene the model expresses and of what the model represents. When matching the semantic information with 3D models, matching may be performed between the semantic information and the description information associated with each 3D model, and the 3D model whose description information has the greatest correlation with the semantic information may be determined as a target 3D model. Multiple pieces of semantic information may be matched to multiple target 3D models to obtain a set of target 3D models.
As an optional implementation of this embodiment, determining, based on the semantic information, a set of target 3D models matching the semantic information from the pre-established model library includes: calculating a value representing the correlation between the semantic information and preset information of each 3D model in the model library; and determining a 3D model whose value falls within a preset confidence interval as a matching model, wherein a preset confidence interval is set between the semantic information and each 3D model.
In this alternative implementation, training data may be used in advance to obtain the confidence interval, and the interval may be adjusted as required.
Illustratively, the training data may be preprocessed (typically manually labeled) so that it carries relatively accurate feature descriptions, and then participate in model development as samples: the algorithm is trained on the training data to compute its parameters, and the trained algorithm is then applied to real data to obtain the confidence interval.
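As an illustrative sketch only (not part of the original disclosure), the correlation value and confidence-interval filtering described above could be realized as follows, here using cosine similarity over feature vectors as the correlation measure; the vectors, interval bounds and model names are all hypothetical:

```python
import math

def cosine(u, v):
    """Cosine similarity, a common stand-in for the 'correlation value'."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def match_models(semantic_vec, model_library, interval=(0.75, 1.0)):
    """Keep every 3D model whose correlation value with the semantic
    information falls inside the preset confidence interval."""
    lo, hi = interval
    return [name for name, vec in model_library.items()
            if lo <= cosine(semantic_vec, vec) <= hi]

# Hypothetical model library: name -> feature vector of its description.
library = {
    "mother_holding_child": [1.0, 0.9, 0.1],
    "savanna_landscape":    [0.1, 0.2, 1.0],
}
matches = match_models([1.0, 1.0, 0.0], library)
```

In a real system the interval bounds would come from the training procedure the description mentions, rather than being fixed constants.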
Step 103: and establishing association between the group of target 3D models to obtain video content corresponding to the text content.
In this embodiment, if the semantic information can be matched to a single target 3D model covering the semantic information at every granularity (i.e., every recognition object), that model can be used directly as the content of the video. For example, a 3D model matching the whole scene "a mother cuddles her baby while fetching back another child" can be used directly as the video content.
If no single target 3D model covers the semantic information at every granularity (i.e., every recognition object), the semantic information of each granularity can be matched separately to obtain a set of target 3D models. For example, a 3D model related to "a mother cuddling a baby" and a 3D model related to "a mother fetching back a child" can be associated to obtain the video content. After a set of target 3D models is obtained, relationships between the models may be established, including but not limited to positional relationships and action logic. Establishing these relationships can be accomplished with the associated animation software.
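As an illustrative sketch only (not part of the original disclosure), associating a set of matched target models into ordered video content could look like the following; the "follows" relation is a hypothetical simplification of the positional and action-logic relationships the description mentions:

```python
def link_models(target_models):
    """Chain a set of matched 3D models into an ordered shot list,
    recording a simple 'follows' relation between consecutive shots.
    A production system would instead drive animation software to set
    spatial placement and action logic."""
    shots = []
    for i, model in enumerate(target_models):
        shots.append({
            "shot": i,
            "model": model,
            "follows": target_models[i - 1] if i > 0 else None,
        })
    return shots

timeline = link_models(["mother_cuddles_baby", "mother_fetches_child"])
```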
According to this embodiment, through automatic conversion from "book" to "video", the book content is converted into video and presented in video form: it unfolds chapter by chapter through images and animation, displaying the content in a vivid and engaging way. A reader can fully acquire the information and knowledge the book conveys simply by watching the video; the content is clear, intuitive and easy to understand, thereby improving the reading rate and reading efficiency.
As an optional implementation of this embodiment, the method further includes: matching, based on the semantic information, video segments conforming to the semantic information from a library storing video materials; and if no video segment matches the semantic information, determining, based on the semantic information, a set of target 3D models matching the semantic information from the pre-established model library.
In this alternative implementation, to improve video production efficiency, a video material library may be established in advance, and video segments related to the semantic information may be matched in the library based on the semantic information; such segments may be used as the final video content.
When matching video clips, matching may be performed based on preset information corresponding to each clip, such as its lines, labels, and/or description. For example, a first value representing relevance may be determined from the semantic information and the preset information of each stored video material, and the video material whose value falls within a preset first confidence interval is determined as the matched video material, wherein a preset confidence interval is set between the semantic information and each video material.
Further, if no video segment is matched, a 3D model can be matched based on the semantic information instead, and video content can be generated from the 3D model.
Further, if the matched video segments cannot cover all of the semantic information, 3D-model matching can be performed for the unmatched portion of the semantic information; the video clips and the 3D models can then be associated to form the video content.
Still further, the step of matching video segments may instead follow the step of matching 3D models: if no 3D model is matched, video-segment matching can proceed; or, if the matched 3D models cannot cover all of the semantic information, the unmatched semantic information is matched against video clips, and the 3D models and video clips are then associated to obtain the video content.
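As an illustrative sketch only (not part of the original disclosure), the clip-first strategy with 3D-model fallback described above could be expressed as follows; the library contents and file names are hypothetical:

```python
def build_video(semantic_units, clip_library, model_library):
    """For each semantic unit, prefer a stock video clip; fall back to a
    3D model when no clip matches; otherwise leave a gap marker so a
    later pass (or the reverse, model-first ordering) can fill it."""
    content = []
    for unit in semantic_units:
        if unit in clip_library:
            content.append(("clip", clip_library[unit]))
        elif unit in model_library:
            content.append(("model", model_library[unit]))
        else:
            content.append(("missing", unit))
    return content

plan = build_video(
    ["mother_cuddles_baby", "savanna_sunset"],
    clip_library={"savanna_sunset": "clip_0042.mp4"},
    model_library={"mother_cuddles_baby": "model_017"},
)
```

Swapping the order of the two lookups gives the model-first variant the description also contemplates.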
As an optional implementation of this embodiment, the method further includes: generating narration text conforming to the text content based on the semantic information; and matching the narration text with the video content to obtain audio content corresponding to the text content.
In this alternative implementation, narration may be provided in the video to help the user further understand the text. The narration text can be obtained through semantic analysis by extracting keywords and the core logic the text expresses. The text can then be converted into speech data, and the speech data matched to the video content to obtain the audio content.
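As an illustrative sketch only (not part of the original disclosure), matching narration to video content could include a simple timing check like the following; the characters-per-second speech-rate model and all field names are hypothetical:

```python
def align_narration(narration_sentences, shot_durations, chars_per_second=5):
    """Pair each narration sentence with a shot and flag sentences whose
    estimated speech time (a crude length-based model) exceeds the shot's
    duration, so they can be re-split or the shot extended."""
    paired = []
    for sentence, duration in zip(narration_sentences, shot_durations):
        est_seconds = len(sentence) / chars_per_second
        paired.append({
            "text": sentence,
            "shot_seconds": duration,
            "fits": est_seconds <= duration,
        })
    return paired

cues = align_narration(["A mother cuddles her baby."], [6.0])
```

A real pipeline would use the actual duration of the synthesized speech rather than a character-count estimate.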
As an alternative implementation manner of this embodiment, the video content is sorted according to the chapter order.
In this alternative implementation, after the video content is intelligently synthesized, its presentation order can be sorted so that it plays according to the document's chapters, which makes it easier for readers to follow.
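As an illustrative sketch only (not part of the original disclosure), sorting the synthesized per-chapter videos into reading order is straightforward; the record fields are hypothetical:

```python
def order_by_chapter(videos):
    """Sort generated per-chapter videos into reading order by chapter
    index, so playback follows the document's chapter sequence."""
    return sorted(videos, key=lambda v: v["chapter"])

playlist = order_by_chapter([
    {"chapter": 3, "file": "ch3.mp4"},
    {"chapter": 1, "file": "ch1.mp4"},
])
```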
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
According to an embodiment of the present disclosure, there is also provided an apparatus for implementing the information processing method for reading, the apparatus comprising: a semantic recognition unit configured to perform semantic recognition on text content of a document to be read after the text content is acquired, so as to obtain semantic information; a matching unit configured to determine, based on the semantic information, a set of target 3D models matching the semantic information from a pre-established model library; and a video establishing unit configured to establish associations among the set of target 3D models to obtain video content corresponding to the text content.
As an optional implementation of this embodiment, the apparatus further includes: a video matching unit configured to match, based on the semantic information, a video clip conforming to the semantic information from a library storing video materials; and if no video segment matches the semantic information, a set of target 3D models matching the semantic information is determined from the pre-established model library based on the semantic information.
The matching unit is further configured to: calculate a value representing the correlation between the semantic information and preset information of each 3D model in the model library; and determine a 3D model whose value falls within a preset confidence interval as a matching model, wherein a preset confidence interval is set between the semantic information and each 3D model.
This embodiment searches for and extracts relevant video content from a video library according to semantic analysis; builds animation models and interaction processes according to semantic analysis; presents the book content chapter by chapter in video form; and uses video as the book's carrier, distributed as electronic files or streaming media.
This embodiment automatically converts the text content of a book into video, and stores, records and transmits the book content as multimedia video. Each chapter of a video book (the video content) is a relatively independent video that conveys the book's information to the user through animation, sound and filmed content during playback. The video content generated by the video reading system comprises a table of contents and the individual chapters, all arranged in order.
This embodiment completes the conversion from book to video through a systematic, automated process, drawing on the vast stock of video clips and animation models that humans have produced. It further lowers the threshold for reading books: the book content is presented to the user visually through vivid animation and video, profound concepts and complex logic can be illustrated on screen, and users can conveniently learn new knowledge and master new skills.
The embodiment of the present disclosure provides an electronic device, as shown in fig. 2, the electronic device includes one or more processors 21 and a memory 22, where one processor 21 is taken as an example in fig. 2.
The electronic device may further include: an input device 23 and an output device 24.
The processor 21, the memory 22, the input device 23 and the output device 24 may be connected by a bus or other means, and fig. 2 illustrates the connection by a bus as an example.
The processor 21 may be a Central Processing Unit (CPU). The processor 21 may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or combinations thereof. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 22, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the methods in the embodiments of the present disclosure. The processor 21 executes various functional applications of the server and performs data processing by running the non-transitory software programs, instructions and modules stored in the memory 22, i.e., implements the method of the above method embodiment.
The memory 22 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of a processing device operated by the server, and the like. Further, the memory 22 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 22 may optionally include memory located remotely from the processor 21, which may be connected to a network connection device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 23 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the processing device of the server. The output device 24 may include a display device such as a display screen.
One or more modules are stored in the memory 22, which when executed by the one or more processors 21 perform the method as shown in fig. 1.
Referring to fig. 3, which shows the system architecture of the information processing method for reading: after semantic analysis of the original book content, the video reading system builds models from the animation model library and/or extracts video materials from the video material library, and finally intelligently synthesizes video content available for reading.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program instructing related hardware; the computer program may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD), etc.; the storage medium may also comprise a combination of memories of the kinds described above.
Although the embodiments of the present disclosure have been described in conjunction with the accompanying drawings, those skilled in the art can make various modifications and variations without departing from the spirit and scope of the present disclosure, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. An information processing method for reading, comprising:
after the text content of the document to be read is obtained, performing semantic recognition on the text content to obtain semantic information;
determining a set of target 3D models matched with the semantic information from a pre-established model library based on the semantic information;
and establishing association between the group of target 3D models to obtain video content corresponding to the text content.
2. The information processing method for reading according to claim 1, further comprising:
matching video segments conforming to the semantic information from a library storing video materials based on the semantic information;
and if no video segment matching the semantic information is found, determining a set of target 3D models matching the semantic information from a pre-established model library based on the semantic information.
3. The information processing method for reading according to claim 1, wherein determining a set of target 3D models matching the semantic information from a pre-established model library based on the semantic information comprises:
calculating a numerical value for representing correlation between the semantic information and preset information of each 3D model in the model library;
and determining a 3D model whose numerical value falls within a preset confidence interval as a matching model, wherein a preset confidence interval is set between the semantic information and each 3D model.
4. The information processing method for reading according to claim 1, further comprising:
generating an explanation text conforming to the text content based on the semantic information;
and matching the text for explanation with the video content to obtain the audio content corresponding to the text content.
5. The information processing method for reading according to claim 1, characterized in that the method further comprises:
the video content is ordered in chapter order.
6. An information processing apparatus for reading, comprising:
the semantic recognition unit is configured to perform semantic recognition on the text content of the document to be read after the text content is obtained, so as to obtain semantic information;
a matching unit configured to determine a set of target 3D models matching the semantic information from a pre-established model library based on the semantic information;
and the video establishing unit is configured to establish association between the group of target 3D models to obtain video content corresponding to the text content.
7. The information processing apparatus for reading of claim 6, wherein the apparatus further comprises:
a video matching unit configured to match a video segment conforming to the semantic information from a library storing video materials based on the semantic information, and, if no video segment matching the semantic information is found, to determine a set of target 3D models matching the semantic information from a pre-established model library based on the semantic information.
8. The information processing apparatus for reading according to claim 6, wherein the matching unit is further configured to:
calculating a numerical value for representing correlation between the semantic information and preset information of each 3D model in the model library;
and determining a 3D model whose numerical value falls within a preset confidence interval as a matching model, wherein a preset confidence interval is set between the semantic information and each 3D model.
9. A computer-readable storage medium storing computer instructions for causing a computer to execute the information processing method for reading according to any one of claims 1 to 5.
10. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to cause the at least one processor to perform the information processing method for reading as claimed in any one of claims 1 to 5.
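The correlation-and-confidence-interval matching of claims 3 and 8 can be sketched as follows. The patent does not specify the correlation measure or the interval bounds; cosine similarity and the interval `(0.7, 1.0)` are assumptions for illustration only.

```python
# Illustrative sketch of the matching step in claims 3/8: compute a correlation
# value between the semantic information and each 3D model's preset information,
# and keep models whose value falls inside a preset confidence interval.
# The cosine measure and the interval bounds are assumptions, not the claimed method.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors; 0.0 for a zero vector."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def match_models(semantic_vec, model_library, interval=(0.7, 1.0)):
    """Return the names of models whose correlation value lies in the interval."""
    lo, hi = interval
    return [name for name, vec in model_library.items()
            if lo <= cosine(semantic_vec, vec) <= hi]
```

With a hypothetical library `{"tree": [1, 0, 0], "house": [0, 1, 0], "forest": [0.9, 0.1, 0]}` and semantic vector `[1, 0, 0]`, both `tree` (correlation 1.0) and `forest` (about 0.99) fall inside the interval and are returned as matching models.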
CN202210325465.9A 2022-03-29 2022-03-29 Information processing method and device for reading Pending CN114707019A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210325465.9A CN114707019A (en) 2022-03-29 2022-03-29 Information processing method and device for reading

Publications (1)

Publication Number Publication Date
CN114707019A 2022-07-05

Family

ID=82170584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210325465.9A Pending CN114707019A (en) 2022-03-29 2022-03-29 Information processing method and device for reading

Country Status (1)

Country Link
CN (1) CN114707019A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102955848A (en) * 2012-10-29 2013-03-06 北京工商大学 Semantic-based three-dimensional model retrieval system and method
CN109756751A (en) * 2017-11-07 2019-05-14 腾讯科技(深圳)有限公司 Multimedia data processing method and device, electronic equipment, storage medium
US20200135158A1 (en) * 2017-05-02 2020-04-30 Yunjiang LOU System and Method of Reading Environment Sound Enhancement Based on Image Processing and Semantic Analysis
CN111400234A (en) * 2020-03-12 2020-07-10 深圳捷径观察科技有限公司 Multimedia reader based on VR equipment and reading method
CN112270768A (en) * 2020-11-09 2021-01-26 中山大学 Ancient book reading method and system based on virtual reality technology and construction method thereof
CN113223173A (en) * 2021-05-11 2021-08-06 华中师范大学 Three-dimensional model reconstruction migration method and system based on graph model
CN113891150A (en) * 2021-09-24 2022-01-04 北京搜狗科技发展有限公司 Video processing method, device and medium

Similar Documents

Publication Publication Date Title
CN110968736B (en) Video generation method and device, electronic equipment and storage medium
CN110782900B (en) Collaborative AI storytelling
US11704501B2 (en) Providing a response in a session
CN104461525B (en) A kind of intelligent consulting platform generation system that can customize
CN115082602B (en) Method for generating digital person, training method, training device, training equipment and training medium for model
US20160004911A1 (en) Recognizing salient video events through learning-based multimodal analysis of visual features and audio-based analytics
US10157619B2 (en) Method and device for searching according to speech based on artificial intelligence
EP3239857B1 (en) A method and system for dynamically generating multimedia content file
WO2018177334A1 (en) Content explanation method and device
US20240070397A1 (en) Human-computer interaction method, apparatus and system, electronic device and computer medium
CN112104919A (en) Content title generation method, device, equipment and computer readable storage medium based on neural network
WO2018209845A1 (en) Method and apparatus for generating stories on the basis of picture content
CN111046148A (en) Intelligent interaction system and intelligent customer service robot
CN110941960A (en) Keyword-based children picture story generation method, system and equipment
CN115497448A (en) Method and device for synthesizing voice animation, electronic equipment and storage medium
CN113923521B (en) Video scripting method
CN116958342A (en) Method for generating actions of virtual image, method and device for constructing action library
CN115953521A (en) Remote digital human rendering method, device and system
CN113407766A (en) Visual animation display method and related equipment
US11727618B1 (en) Artificial intelligence-based system and method for generating animated videos from an audio segment
CN112233648A (en) Data processing method, device, equipment and storage medium combining RPA and AI
CN114707019A (en) Information processing method and device for reading
JP7427405B2 (en) Idea support system and its control method
CN115171673A (en) Role portrait based communication auxiliary method and device and storage medium
CN111931510B (en) Intention recognition method and device based on neural network and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination