CN116027946B - Picture information processing method and device in interactive novel

Publication number: CN116027946B (granted); CN116027946A (application)
Application number: CN202310311208.4A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: target, information, vocabulary, picture, novel
Legal status: Active
Inventor: 王一
Current Assignee: Shenzhen Renma Interactive Technology Co Ltd
Original Assignee: Shenzhen Renma Interactive Technology Co Ltd
Application filed by Shenzhen Renma Interactive Technology Co Ltd

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiment of the application discloses a method and device for processing picture information in an interactive novel. According to the method, first information content of a target novel containing a target vocabulary is displayed on a first display interface; when the target vocabulary has been played, a target picture corresponding to the target vocabulary can be obtained and displayed on a second display interface, the target picture being used to illustrate the target vocabulary, which improves the user's understanding of the interactive novel and the sense of immersion while reading it. When the sentence containing the target vocabulary has been played, first voice prompt information is output to prompt that target information associated with the target vocabulary exists; when a first voice instruction indicating that the target information should be played is received, the target information is acquired and played. The target vocabulary can thus be presented to the user more intelligently and comprehensively, which in turn helps develop the user's logical thinking and language expression abilities.

Description

Picture information processing method and device in interactive novel
Technical Field
The application relates to the technical field of general data processing in the Internet industry, in particular to a method and a device for processing picture information in an interactive novel.
Background
As applications develop, more and more users read novels through applications on terminal devices. For example, a user may read an electronic novel in an application, or listen to an audio novel in an application. However, if content or vocabulary that the user does not understand appears while reading, the user has to exit the application currently used for reading and open another application to look it up. This not only occupies the resources of the terminal device and thus reduces its performance, but also increases the energy consumption of the terminal device.
Disclosure of Invention
The embodiments of the present application provide a method and device for processing picture information in an interactive novel, which can improve the performance of a terminal device and save its energy consumption.
In a first aspect, an embodiment of the present application provides a method for processing picture information in an interactive novel, applied to a terminal device of an interactive novel service system, where the interactive novel service system includes a server and the terminal device, and the server is communicatively connected with the terminal device. The method includes the following steps: the terminal device interacts with the server to display first information content of a target novel on a first display interface, where the first information content includes a target vocabulary carrying a special mark, the target vocabulary being a vocabulary that is not contained in the vocabulary library corresponding to the age information of the current usage object; playing the first information content according to the text order corresponding to the first information content, and when the target vocabulary has been played, acquiring a target picture corresponding to the target vocabulary and displaying the target picture on a second display interface, where the target picture is used to illustrate the target vocabulary; when the sentence corresponding to the target vocabulary has been played, outputting first voice prompt information, where the first voice prompt information prompts that target information associated with the target vocabulary exists; and when a first voice instruction input for the target information is received and the first voice instruction indicates that the target information should be played, acquiring, in response to the first voice instruction, the target information through interaction with the server and playing the target information.
In this way, by displaying the first information content including the target vocabulary, the terminal device can display a target picture illustrating the target vocabulary once the target vocabulary has been played, so that the target vocabulary is presented more intelligently and comprehensively; through the first voice prompt information it prompts that target information associated with the target vocabulary exists, and it acquires and plays that target information when the first voice instruction indicates that it should be played. Information is thus output more accurately and effectively, which helps improve the performance of the terminal device and save its energy consumption.
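For illustration only, the overall playback flow described in the first aspect could be organized roughly as in the following Python sketch; the class and function names (play_first_information_content, fetch_picture, ask_user, and so on) are assumptions introduced here for clarity and are not part of the disclosed embodiment:

```python
from dataclasses import dataclass

@dataclass
class Sentence:
    text: str
    marked_words: list          # target vocabulary carrying a special mark

def fetch_picture(word):
    # Placeholder for looking up a picture associated with the word
    # (e.g. from a word-to-picture mapping or a search engine).
    return f"<picture of {word}>"

def fetch_target_info(word):
    # Placeholder for retrieving target information about the word
    # through interaction with the server.
    return f"<introduction of {word}>"

def ask_user(prompt):
    # Placeholder for outputting a voice prompt and receiving a voice instruction.
    print(f"[voice prompt] {prompt}")
    return True   # assume the user asks to play the target information

def play_first_information_content(sentences):
    for sentence in sentences:
        print(f"[playing] {sentence.text}")                  # play in text order
        for word in sentence.marked_words:
            picture = fetch_picture(word)                     # after the word is played
            print(f"[second display interface] {picture}")
        # After the whole sentence is played, prompt that target information exists.
        for word in sentence.marked_words:
            if ask_user(f"There is more information about '{word}'. Play it?"):
                print(f"[playing] {fetch_target_info(word)}")

if __name__ == "__main__":
    content = [Sentence("The main body of landmark building A is a multi-cylinder structure ...",
                        ["landmark building A"])]
    play_first_information_content(content)
```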
In one implementation, when a first reading instruction input for the target novel is received, second voice prompt information is output, where the second voice prompt information is used to acquire the age information of the current usage object; when a second voice instruction input for the second voice prompt information is received and the second voice instruction indicates the age information of the current usage object, a specially marked version corresponding to the target novel is generated according to the age information of the current usage object and the original version corresponding to the target novel, where the specially marked version includes the first information content.
In this way, when the target novel is read for the first time, the terminal device acquires the age information of the current usage object and generates the specially marked version of the target novel from that age information and the original version, so that the target vocabulary carrying a special mark can be associated with the target picture and the target information. When the current usage object reads the specially marked version, the picture and information associated with the target vocabulary can be obtained, which helps the user understand the target vocabulary better.
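The first-reading flow described in this implementation (ask for age by voice, then derive the specially marked version from the original version) might be sketched as follows; the age bands, prompt text and helper names are assumptions introduced only for illustration:

```python
def ask_age_by_voice():
    # Stand-in for outputting the second voice prompt information and parsing the
    # second voice instruction; a fixed example age is returned here.
    print("[voice prompt] How old are you?")
    return 5

def vocabulary_library_for_age(age):
    # Assumed mapping from age to an age-appropriate vocabulary library.
    return {"building", "city"} if age <= 6 else {"building", "city", "landmark"}

def generate_specially_marked_version(original_version, library):
    """For each sentence of the original version, record which candidate words fall
    outside the library and therefore become specially marked target vocabulary."""
    return [(sentence, [w for w in candidates if w.lower() not in library])
            for sentence, candidates in original_version]

original_version = [
    ("The main body of landmark building A is a multi-cylinder structure ...",
     ["landmark building A", "city"]),
]
library = vocabulary_library_for_age(ask_age_by_voice())
print(generate_specially_marked_version(original_version, library))
```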
In one implementation, the target picture corresponding to the target vocabulary is obtained according to an association relationship between a plurality of vocabularies and a plurality of pictures; or, a search engine is called to obtain the target picture corresponding to the target vocabulary.
In this way, the terminal device obtains the target picture associated with the target vocabulary either through the association relationship between the plurality of vocabularies and the plurality of pictures or through a search engine, and can thereby expand the information related to the target vocabulary, enriching what is displayed for the target vocabulary, presenting it more intelligently and comprehensively, and helping develop the user's logical thinking ability and imagination.
In one implementation, the target information associated with the target vocabulary includes first information and/or second information, where the first information is used to introduce the target vocabulary, and the second information is used to introduce the target vocabulary together with vocabulary that has an association relationship with it.
In this way, the terminal device provides a popular-science introduction to the target vocabulary through the first information or the second information, which can further enrich the content associated with the target vocabulary, helping the user better understand the target vocabulary in the target novel and improving the user's ability to think actively.
In one implementation, when a play record of the first information exists, the second information is acquired and played; or, when no play record of the first information exists, the first information is acquired and played.
In this way, the terminal device chooses, according to the play record, whether to provide the popular-science introduction through the first information or the second information, so as to output information that better matches the current usage object; this improves the effectiveness of the introduction and helps develop the user's language expression and logical thinking abilities.
In one implementation, when a first voice instruction input for the target information is received and the first voice instruction indicates that the target information should not be played, the sentences following the sentence corresponding to the target vocabulary are played in response to the first voice instruction.
In this way, when the first voice instruction indicates that the target information should not be played, the terminal device continues with the sentences following the sentence corresponding to the target vocabulary, completing playback of the target novel more intelligently, which improves playback efficiency and helps develop the user's language expression ability.
In one implementation, when the target vocabulary has been played, a target picture corresponding to the target vocabulary is obtained and third voice prompt information is output, where the third voice prompt information is used to indicate that a target picture exists; when a third voice instruction input for the target picture is received and the third voice instruction indicates that the target picture should be displayed, the target picture is displayed on the second display interface in response to the third voice instruction.
In this way, by outputting the third voice prompt information indicating that a target picture exists, the terminal device displays the target picture when it receives a third voice instruction indicating that the picture should be displayed, which strengthens interactivity, increases interest, and further develops the user's language expression and interaction abilities.
In one implementation, when the target vocabulary has been played and the display frequency of the target picture in the display record is below a threshold value, the target picture corresponding to the target vocabulary is obtained and displayed on the second display interface.
In this way, by acquiring and displaying the target picture only when its display frequency is below the threshold value, the terminal device can present the target vocabulary through the target picture more effectively and thus more intelligently.
In a second aspect, an embodiment of the present application provides an apparatus for processing picture information in an interactive novel, including:
the display module is used for displaying first information content of the target novel on the first display interface, wherein the first information content comprises target words with special marks, and the target words are words except words contained in a word library corresponding to age information of a current use object;
the playing module is used for playing the first information content according to the text sequence corresponding to the first information content;
the acquisition module is used for acquiring target pictures corresponding to the target vocabulary when the target vocabulary is played; the display module is also used for displaying the target picture on the second display interface; the target pictures are used for displaying target words;
the processing module is used for outputting first voice prompt information when sentences corresponding to the target vocabulary are played, and the first voice prompt information prompts that target information related to the target vocabulary exists;
the acquisition module is also used for responding to the first voice instruction to acquire the target information when the first voice instruction input for the target information is received and the first voice instruction indicates to play the target information; and the playing module is also used for playing the target information.
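As a rough illustration of how the modules of the second aspect might be grouped in software (a sketch only; the method bodies are placeholders introduced here and are not part of the disclosed apparatus):

```python
class PictureInfoProcessingApparatus:
    """Sketch of the apparatus of the second aspect: one method per functional module."""

    def display(self, interface, content):
        # Display module: show the first information content or a target picture
        # on the given display interface.
        print(f"[{interface}] {content}")

    def play(self, content):
        # Playing module: play information content in its text order.
        print(f"[audio] {content}")

    def acquire_picture(self, target_word):
        # Acquisition module: obtain the picture associated with a target vocabulary.
        return f"<picture of {target_word}>"

    def acquire_target_info(self, target_word):
        # Acquisition module: obtain target information in response to a first voice instruction.
        return f"<target information about {target_word}>"

    def process_sentence_end(self, target_word):
        # Processing module: output the first voice prompt information once the sentence
        # containing the target vocabulary has been played.
        print(f"[voice prompt] There is more information about '{target_word}'.")
```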
In a third aspect, an embodiment of the present application provides a computer device, including a memory and a processor, where the memory stores a computer program, and when the computer program is executed by the processor, the processor is caused to execute the method for processing picture information in an interactive novel provided in the embodiment of the present application.
In a fourth aspect, embodiments of the present application provide a computer storage medium, where a computer program is stored, where the computer program includes program instructions that, when executed by a processor, perform the method for processing picture information in an interactive novel provided in the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product or computer program, where the computer program product or computer program includes computer instructions stored in a computer-readable storage medium, and where the computer instructions, when executed by a processor of a computer device, perform the method for processing picture information in an interactive novel provided in the embodiments of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a block diagram of an interactive novel picture information processing system according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a method for processing picture information in an interactive novel according to an embodiment of the present application;
fig. 3A is a schematic diagram of a desktop of a terminal device according to an embodiment of the present application;
FIG. 3B is a schematic diagram of an application B provided in an embodiment of the present application;
FIG. 3C is a schematic diagram of a first display interface according to an embodiment of the present disclosure;
FIG. 3D is a schematic diagram of another first display interface according to an embodiment of the present disclosure;
FIG. 3E is a schematic diagram of yet another first display interface provided by an embodiment of the present application;
FIG. 3F is a schematic diagram illustrating a first display interface size adjustment according to an embodiment of the present disclosure;
FIG. 3G is a schematic diagram of a second display interface according to an embodiment of the present disclosure;
FIG. 3H is a schematic diagram of a third display interface according to an embodiment of the present disclosure;
FIG. 3I is a schematic diagram of an interactive character displaying a third voice prompt according to an embodiment of the present application;
FIG. 3J is a schematic diagram of a fourth display interface according to an embodiment of the present disclosure;
fig. 3K is a schematic flow chart of an interface corresponding to a method for processing picture information in an interactive novel according to an embodiment of the present application;
Fig. 4 is a flowchart of another method for processing picture information in an interactive novel according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a picture information processing device in an interactive novel according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The embodiment of the application provides a picture information processing method in an interactive novel, which can improve the performance of terminal equipment and save the energy consumption of the terminal equipment. The method for processing the picture information in the interactive novel can be realized based on one or more technologies in artificial intelligence technology.
Artificial intelligence (AI) is the theory, method, technology and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence is the study of the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Machine Learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory and other disciplines. It studies how a computer can simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstration.
With the research and progress of artificial intelligence technology, it has been studied and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, autonomous driving, unmanned aerial vehicles, robots, smart healthcare and smart customer service. It is believed that with the development of technology, artificial intelligence will be applied in more fields and play an increasingly important role.
In a possible embodiment, the method for processing picture information in an interactive novel provided by the embodiments of the present application may further be implemented based on cloud technology. In particular, it may involve one or more of cloud storage, cloud computing and cloud databases. For example, the first information content, the target vocabulary and/or the target picture are obtained from a cloud database, the specially marked version corresponding to the target novel is generated through cloud computing, or the target information associated with the target vocabulary is stored through cloud storage.
Fig. 1 is a block diagram of an interactive novel picture information processing system according to an embodiment of the present application. The interactive novel picture information processing system may include a terminal device 101 and a server 102. The numbers of terminal devices and servers shown in fig. 1 are only examples and do not limit this application; for example, more than two servers may be included in practical applications.
The terminal device 101 may be a terminal device having a picture information processing function and a data processing function in an interactive novel, and may be implemented in various forms. For example, the terminal device may be a smart phone, a smart wearable device (e.g., a smart watch), a tablet computer, a notebook computer, a desktop computer, a smart voice interaction device, a smart home appliance, a vehicle-mounted terminal, and so forth. The embodiment of the application does not limit the specific technology and the specific equipment form adopted by the terminal equipment.
The server 102 may be a server providing services for the terminal device 101. The server 102 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, basic cloud computing services such as big data and artificial intelligence platforms, and the like.
In this embodiment, the server 102 may be configured to store the target novels, the target pictures, and the information, so as to transmit the data to the terminal device 101 when the terminal device 101 obtains or displays the target novels, the target pictures, or the information. In fig. 1, the description is given by taking the example in which data such as a target novel, a target picture, and information are stored in the server 102, and the present application is not limited thereto. Alternatively, the target novels may be stored on one server and the target pictures and information may be stored on another server, as this application is not limiting.
The method for processing the picture information in the interactive novel provided by the embodiment of the application is described in detail below. The method for processing the picture information in the interactive novel can be executed by a picture information processing device in the interactive novel. The picture information processing device in the interaction novel can be a terminal device or a device integrated in the terminal device, such as a processor or a chip. The embodiment of the application is illustrated by taking a method for processing picture information in an interactive novel executed by terminal equipment as an example.
Referring to fig. 2, fig. 2 is a flowchart of a method for processing picture information in an interactive novel according to an embodiment of the present application. The picture information processing method in the interactive novel comprises the following steps:
s201, displaying first information content of the target novel on a first display interface.
The first display interface may be the display interface on which the terminal device displays the first information content of the target novel. If the user is reading the target novel for the first time, the terminal device may display on the first display interface the interface shown when the target novel is opened for the first time; for example, the terminal device may display the table of contents of the target novel on the first display interface, or may display the first page of the main text of the target novel on the first display interface. If the user is not reading the target novel for the first time, the terminal device may display on the first display interface the interface that was shown when the target novel was last displayed, which is not limited in this application.
By way of example, taking a terminal device as a smart watch, as shown in fig. 3A, a user may initiate a click command for an application program (such as application B) on a desktop of the terminal device for reading novels, and correspondingly, the terminal device may receive and respond to the click command to open application B; as shown in fig. 3B, the user may initiate a click command for a target novel (e.g., novel D) in the application B, and the terminal device may receive and respond to the click command to display the first information content of the target novel on the first display interface.
The target novels can be novels with audio, namely novels which can be played, such as audio novels, interactive novels and the like. The target novel may be a novel that performs a clicking operation after the user enters the application. The target novel can be the novel which is read by the user for the first time, namely the novel which is opened and displayed by the terminal equipment for the first time; alternatively, the target novel may be a novel that the user does not read for the first time, that is, a novel that the terminal device does not open and display for the first time, which is not limited in this application.
The first information content may be a piece of information content corresponding to the target novel displayed on the first display interface after the terminal device opens the target novel. The first information content may be text information content, such as a section of text of a target novel; alternatively, the first information content may also be a voice information content, such as a voice information piece corresponding to a text of the target novel, which is not limited in this application. Specifically, the content included in the first information content may be one or more sentences, or may be one or more paragraphs, which is not limited in this application. For example, if the user is to read the target novel for the first time, the first information content may be the content of the first page of the text of the target novel; if the user is not the first reading target novel, the first information content can be the content corresponding to the last reading target novel.
Specifically, the first information content may include a target vocabulary having a special tag; the target vocabulary is a vocabulary except the vocabulary contained in the vocabulary library corresponding to the age information of the current use object, namely the target vocabulary does not belong to the vocabulary library corresponding to the age information of the current use object. For example, assume that "landmark building a" is a vocabulary other than the vocabulary contained in the vocabulary library corresponding to the age information of the current usage object, that is, "landmark building a" does not belong to the vocabulary library corresponding to the age information of the current usage object; if "landmark building a" exists in the first information content of the target novel, the terminal device may make a special mark on the "landmark building a" so that the "landmark building a" is the target vocabulary with the special mark in the first information content.
It can be understood that, by specially marking the vocabulary outside the vocabulary contained in the vocabulary library corresponding to the age information of the current usage object, the terminal device can obtain the target vocabulary beyond the understanding scope of the current usage object, so that in the process of reading the target novels, the target vocabulary can be interpreted (the target vocabulary is displayed through the target pictures and the science popularization is carried out on the target vocabulary as mentioned below), and the understanding of the current usage object on the target novels is improved.
For example, assuming that the first information content is text information content, fig. 3C, which takes a smart phone as the terminal device, schematically illustrates the first information content of the target novel displayed on the first display interface. As can be seen from fig. 3C, in the first information content of the target novel displayed on the first display interface of the terminal device, "landmark building A" is the target vocabulary carrying a special mark. The special mark of the target vocabulary in fig. 3C (i.e., bold and underline) is for illustration only and does not limit this application.
Assuming that the first information content is voice information content, as shown in fig. 3D, fig. 3D is an example of a terminal device as a smart watch, and schematically shows that the first display interface displays the first information content of the target novice. As can be seen from fig. 3D, the user can see the voice information bar of the first information content on the first display interface, and cannot see the text corresponding to the first information content (such as the text of the target vocabulary), and it can be understood that the terminal device may also make a special mark on the target vocabulary, so that the terminal device may identify the target vocabulary, and thus, in the subsequent picture information processing flow, display operation may be performed on the target vocabulary. The following description uses the first information content as the text information content as an example, and is not limited to this application.
Optionally, the first display interface of the terminal device may also be used to display the first information content of the target novel, and/or other content related to the target novel, such as a title of the target novel, a video or animation or picture related to the target novel, and/or other information, such as an interactive character, a page number, and the like, which is not limited in this application. As shown in fig. 3E, fig. 3E schematically illustrates first information content of a target novel and a schematic view of an interactive character displayed on a first display interface. Optionally, in the case where there is an interactive character, the terminal device may interact with the user through the form of the interactive character and output voice information (such as a first voice prompt information to be mentioned later), and may display text information corresponding to the voice information through a character dialog box as shown in fig. 3E, which is not limited in this application.
It should be noted that, when only the first display interface exists on the display screen of the terminal device, the size of the first display interface may be related to the size of the display screen of the terminal device. By way of example, taking fig. 3C, 3D and 3E as examples, the size of the first display interface may be the size of the display screen of the terminal device. Optionally, when there are multiple display interfaces on the display screen of the terminal device, the size of the first display interface may be related to the size of the display screen of the terminal device and the sizes of other display interfaces, and a portion of the display screen of the terminal device may be used to display the first display interface, and another portion of the display screen of the terminal device may be used to display the other display interfaces.
It will be appreciated that the size of the first display interface may be adjusted according to the needs of the user. Specifically, the user may initiate a size adjustment instruction for the first display interface, and accordingly, the terminal device may receive and respond to the size adjustment instruction to adjust the size of the first display interface, which is not limited in this application. For example, as shown in fig. 3F, the user may drag the frame of the first display interface on the terminal device to adjust the size of the first display interface. Optionally, the user may set the size of the first display interface in the system setting of the terminal device, which is not limited in this application.
S202, playing the first information content according to the text sequence corresponding to the first information content, acquiring target pictures corresponding to the target vocabulary when the target vocabulary is played, and displaying the target pictures on the second display interface.
The target picture may be a picture associated with the target vocabulary and may be used to illustrate the target vocabulary. For example, assuming that the target vocabulary is "landmark building A", the target picture may be a picture showing landmark building A, such as a night view of landmark building A; assuming that the target vocabulary is "the ancient site A", the target picture may be a picture showing the ancient site A, such as a snow view of the ancient site A, which is not limited in this application.
The second display interface may be a display interface for displaying the target picture for the terminal device. It should be noted that the second display interface may be the same as the first display interface, which is not limited in this application. That is, the terminal device may display the target picture in the first display interface, for example, present the target picture in a blank of the first display interface, or the like. As shown in fig. 3G, fig. 3G exemplarily shows a schematic view of the second display interface. As can be seen from fig. 3G, the target vocabulary included in the first information content is "landmark building a", and the second display interface exemplarily shows a picture of "landmark building a". Alternatively, the second display interface may be displayed in a floating manner on the first display interface, and the second display interface may also be embedded in the first display interface, which is not limited in this application.
Taking fig. 3C as the first display interface, the first information content is "the main body of landmark building A is a multi-cylinder structure ……", and the target vocabulary is "landmark building A". The terminal device can play "the main body of landmark building A is a multi-cylinder structure" in text order, and when "landmark building A" has been played, display the second display interface shown in fig. 3G, that is, the picture corresponding to "landmark building A". It can be understood that displaying the target picture to the user can improve the user's imagination and ability to think actively, and facilitates the all-round development of the user's intelligence quotient and emotional quotient.
Alternatively, the target picture may be one or more, which is not limited in this application. And under the condition that the target pictures are multiple, the terminal equipment can sequentially display the multiple target pictures on the second display interface. Optionally, the terminal device may set a display stay time of each target picture, so as to automatically close the target picture when the display stay time is exceeded. Alternatively, the terminal device may set the number of target pictures corresponding to the target vocabulary, for example, the terminal device may set the number of target pictures corresponding to the target vocabulary to not more than 3, or the like, which is not limited in this application.
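One possible way of realizing the sequential display, dwell time and picture-count cap described above is sketched below; the specific dwell time is an assumption, the cap of 3 pictures is the example value from this paragraph, and the function names are placeholders:

```python
import time

MAX_PICTURES_PER_WORD = 3      # example cap mentioned above
DISPLAY_DWELL_SECONDS = 5      # assumed dwell time before a picture auto-closes

def show_on_second_interface(picture):
    print(f"[second display interface] showing {picture}")

def close_on_second_interface(picture):
    print(f"[second display interface] closing {picture}")

def display_target_pictures(pictures):
    """Display pictures one by one, auto-closing each after the dwell time."""
    for picture in pictures[:MAX_PICTURES_PER_WORD]:
        show_on_second_interface(picture)
        time.sleep(DISPLAY_DWELL_SECONDS)   # stand-in for a UI timer
        close_on_second_interface(picture)

display_target_pictures(["landmark A night view", "landmark A day view",
                         "landmark A aerial view", "landmark A interior"])
```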
In one implementation manner, the terminal device may obtain a target picture corresponding to the target vocabulary according to an association relationship between the plurality of vocabularies and the plurality of pictures; or the terminal device can call a search engine to acquire the target picture corresponding to the target vocabulary.
It should be noted that different words may be associated with the pictures that correspondingly show the words. For example, the word "landmark building a" may be associated with "landmark building a night view" map; the word "attraction A" may be associated with "attraction A snow view", etc. The local database of the terminal device may store association relations between a plurality of vocabularies and a plurality of pictures, that is, when the terminal device finishes playing the target vocabularies, the terminal device may obtain the target pictures corresponding to the target vocabularies according to the association relations between the plurality of vocabularies and the plurality of pictures stored in the local database. Alternatively, the association relationship between the plurality of vocabularies and the plurality of pictures may also be stored in a server connected to the terminal device or a cloud database accessible to the terminal device, which is not limited in this application.
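For illustration, the lookup-then-fallback behaviour described here could be sketched as follows; the local mapping, the search_engine_image_search helper and its signature are assumptions introduced for this example and do not refer to any specific real service:

```python
# Hypothetical local association relationship between vocabularies and pictures.
WORD_TO_PICTURE = {
    "landmark building A": "landmark_building_A_night_view.jpg",
    "attraction A": "attraction_A_snow_view.jpg",
}

def search_engine_image_search(query):
    # Placeholder for calling an image search engine; a real implementation
    # would issue a network request here.
    return f"search_result_for_{query.replace(' ', '_')}.jpg"

def get_target_picture(target_word):
    """Prefer the stored association relationship; fall back to a search engine."""
    picture = WORD_TO_PICTURE.get(target_word)
    if picture is None:
        picture = search_engine_image_search(target_word)
    return picture

print(get_target_picture("landmark building A"))   # from the association relationship
print(get_target_picture("ancient site A"))        # falls back to the search engine
```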
In one implementation, when the terminal equipment finishes playing the target vocabulary, acquiring a target picture corresponding to the target vocabulary, and outputting third voice prompt information for indicating that the target picture exists; and when a third voice command input for the target picture is received and the third voice command indicates to display the target picture, responding to the third voice command and displaying the target picture on the second display interface.
After the terminal device obtains the target picture corresponding to the target vocabulary, the terminal device may output the third voice prompt information for indicating that the target picture exists, so as to ask the user whether to display the target picture corresponding to the target vocabulary; correspondingly, the user can receive and respond to the third voice prompt information and output a third voice instruction; correspondingly, the terminal equipment can receive the third voice instruction, respond to the third voice instruction when the third voice instruction indicates to display the target picture, and display the target picture on the second display interface.
Optionally, when outputting the voice information (such as the third voice prompt information and the first voice prompt information to be mentioned later), the terminal device may generate text information corresponding to the voice information and display the text information on a display interface (such as a third display interface) of the terminal device. The third display interface may be displayed on the first display interface in a floating manner, and the third display interface may also cover the first display interface, and optionally, the third display interface may also be presented by an interactive character.
For example, assume that the third voice prompt information output by the terminal device is "Would you like to see a picture of 'landmark building A'?". As shown in fig. 3H, the terminal device may display the text corresponding to the third voice prompt information floating over the first display interface (for example, fig. 3C); optionally, as shown in fig. 3I, the terminal device may display the text corresponding to the third voice prompt information through an interactive character in the first display interface (for example, fig. 3E).
The terminal device indicates to the user that a target picture related to the target vocabulary exists by outputting the third voice prompt information, and displays the target picture on the second display interface when it receives a third voice instruction from the user indicating that the picture should be displayed. This enhances the interactivity between the terminal device and the user, and this voice-interaction mode helps develop the user's language expression, logical thinking and active-thinking abilities, cultivates an outgoing personality, and facilitates the all-round development of the user's intelligence quotient and emotional quotient.
In one implementation manner, when the terminal device finishes playing the target vocabulary and the display frequency of the target pictures in the display record is lower than the threshold value, the terminal device obtains the target pictures corresponding to the target vocabulary and displays the target pictures on the second display interface.
The display record may store data collected during the past displays of each picture, such as display frequency, number of displays or display position. For example, the display record of the target novel may show that picture 1 has been displayed 10 times and picture 2 has been displayed 2 times, and so on, which is not limited in this application.
When the terminal device finds that the display frequency of the target picture in the display record is below the threshold value, that is, the target vocabulary has rarely been presented through the target picture, this suggests that the user has rarely learned about the target vocabulary through the picture and may not be familiar with it. In that case, the terminal device can acquire the target picture corresponding to the target vocabulary once the target vocabulary has been played and display it on the second display interface.
Optionally, when the terminal device finds that the display frequency of the target picture in the display record is above the threshold value, that is, the target vocabulary has frequently been presented through the target picture, this suggests that the user is already familiar with the target vocabulary; in that case the terminal device may stop displaying the target picture corresponding to the target vocabulary in subsequent playback.
Optionally, when the display frequency of the target picture in the display record is above the threshold value, the terminal device may output voice prompt information (for example, fourth voice prompt information) once the target vocabulary has been played, to ask whether to keep displaying the target picture corresponding to the target vocabulary. The user may then respond to the fourth voice prompt information with a voice instruction (for example, a fourth voice instruction). If the fourth voice instruction indicates that the target picture should no longer be displayed, the terminal device responds to it and stops displaying the target picture corresponding to the target vocabulary in subsequent playback.
Optionally, when the display frequency of the target picture in the display record is above the threshold value, the terminal device may remove the special mark from the target vocabulary when playing the text content following the first information content (for example, second text content), so that the target picture corresponding to the target vocabulary is no longer acquired in subsequent playback. This reduces the burden of picture information processing on the terminal device, outputs the picture content to be displayed more accurately, and improves user experience.
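A compact sketch of this display-frequency gating, including the optional removal of the special mark, might look as follows; the threshold value, the structure of the display record and the function names are assumptions for illustration only:

```python
DISPLAY_FREQUENCY_THRESHOLD = 0.5   # assumed threshold; the disclosure does not fix a value

# Hypothetical display record: how often each picture has been shown relative to
# how often its target vocabulary has been played.
display_record = {"landmark_building_A_night_view.jpg": {"shown": 9, "played": 10}}

def display_frequency(picture):
    record = display_record.get(picture, {"shown": 0, "played": 0})
    return record["shown"] / record["played"] if record["played"] else 0.0

def handle_target_word(word, picture, marked_words):
    if display_frequency(picture) < DISPLAY_FREQUENCY_THRESHOLD:
        print(f"[second display interface] showing {picture}")
    else:
        # The user is assumed to be familiar with the word: stop showing the picture
        # and drop the special mark for the following text content.
        marked_words.discard(word)
        print(f"'{word}' is no longer specially marked for subsequent content")

marked = {"landmark building A"}
handle_target_word("landmark building A", "landmark_building_A_night_view.jpg", marked)
```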
By displaying on the first display interface the first information content of the target novel containing the target vocabulary that carries a special mark and is associated with a target picture, the terminal device can play the first information content in its text order and, once the target vocabulary has been played, acquire the target picture corresponding to the target vocabulary and display it on the second display interface. Reading the target novel and displaying the target vocabulary are thus achieved within a single application, avoiding switching back and forth between multiple applications, which helps improve the performance of the terminal device and save its energy consumption.
S203, outputting first voice prompt information when the sentences corresponding to the target vocabulary are played.
The first voice prompt information may prompt that target information associated with the target vocabulary exists. For example, when the target vocabulary is "attraction A", the first voice prompt information may be "Would you like to hear the story of 'attraction A'?" or "Would you like to continue exploring the mysteries of 'the ancient site A'?", and so on, which is not limited in this application. Optionally, the target information associated with the target vocabulary may be popular-science information or other information, which is not limited in this application. When the terminal device finishes playing the sentence corresponding to the target vocabulary, it can prompt the user that target information associated with the target vocabulary exists by outputting the first voice prompt information, so as to ask the user whether to play the target information, which further enhances interactivity.
For example, assuming that the target vocabulary is "the ancient site A" and the sentence corresponding to the target vocabulary is "this place is as awe-inspiring as the ancient site A ……", then when the terminal device finishes playing "this place is as awe-inspiring as the ancient site A ……", it may output first voice prompt information such as "Would you like to continue exploring the mysteries of 'the ancient site A'?" to prompt the user that target information associated with "the ancient site A" exists.
Optionally, the target information associated with the target vocabulary may include information for introducing the target vocabulary, and/or information for introducing the target picture. For example, assuming that the target vocabulary is "attraction a", and the target picture is "attraction a snow view", the target information associated with "attraction a" may be information describing "attraction a", and/or information describing "attraction a snow view", which is not limited in this application.
In one implementation, the target information associated with the target vocabulary includes first information and/or second information. The first information can be used for introducing a target vocabulary; the second information is used for introducing the target vocabulary and the vocabulary with the association relation with the target vocabulary.
For example, taking the target information as the science popularization information as an example, the range of the science popularization content included in the first information may be smaller than the range of the science popularization content included in the second information. That is, the second information may include more content than the first information, i.e., the second information may have a higher science popularization depth than the first information.
Optionally, before outputting the first voice prompt information prompting the existence of the target information associated with the target vocabulary, the terminal device may acquire the read data of the current use object, so as to determine whether the current use object reads other novels corresponding to the target vocabulary according to the read data. The other novels corresponding to the target vocabulary may be other novels including the target vocabulary, or may be other novels of the same type as the target novels corresponding to the target vocabulary, which is not limited in this application.
For example, assuming that the target vocabulary is "attraction a", the terminal device may determine whether the currently used object has read other novels including "attraction a" according to the read data of the currently used object. Alternatively, assuming that the target novels belong to the "showplace" category, the terminal device may determine, based on the read data, whether the currently used object has read other novels belonging to the "showplace" category.
Further, in the case that the terminal device determines that the current usage object does not read other novels corresponding to the target vocabulary according to the read data, the terminal device may acquire the first information, so as to introduce the target vocabulary to the user through the first information. Under the condition that the terminal equipment determines that the current usage object reads other novels corresponding to the target vocabulary according to the read data, the terminal equipment can acquire second information so as to introduce the target vocabulary and the vocabulary with association relation with the target vocabulary to the user through the second information.
For example, assuming that the target vocabulary is "the ancient site a", if the currently used object does not read other novels corresponding to the "the ancient site a", the terminal device may perform a corresponding science popularization on the vocabulary of the "the ancient site a"; if the current usage object reads other novels corresponding to the points of interest ancient points A, the terminal device can perform corresponding science popularization aiming at the vocabulary of the points of interest ancient points A, and also perform corresponding science popularization aiming at the vocabulary with association relation with the points of interest ancient points A, such as the vocabulary of the points of interest ancient points, and the like, and the application is not limited.
In one implementation manner, when a play record of the first information exists in the play record, the terminal device may acquire the second information and play the second information; or, when the play record does not have the first information in the play record, the terminal device may acquire the first information and play the first information.
When the play record does not have the first information, the terminal device may acquire the first information and play the first information. Optionally, when the play record of the first information exists in the play record, it is indicated that the terminal device plays the first information to the current use object, at this time, the terminal device may acquire the second information, so as to play the second information with a wider popular science range to the current use object, thereby making the target information associated with the target vocabulary richer, further expanding the knowledge plane of the user, and being beneficial to improving the logic thinking capability and imagination of the user.
Optionally, the terminal device may further replace the vocabulary in the first information or the second information according to the age information of the current usage object, so as to generate information (such as called third information) more in line with the age information of the current usage object. For example, when the terminal device determines to play the first information, the terminal device may replace, according to age information of the current usage object, a vocabulary in the first information, which does not belong to a vocabulary library corresponding to the age information of the current usage object, so as to generate target information that can be understood by the current usage object, thereby improving validity of the target information output by the terminal device, and further improving capability of the user while expanding knowledge plane of the user.
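The selection between the first and second information based on the play record, together with the optional age-based word replacement described above, could be organized roughly as in this sketch; the data structures, the replace_out_of_library_words helper and the example vocabulary are all assumptions:

```python
def choose_information(word, play_record, first_info, second_info):
    """Play the broader second information only if the first information was already played."""
    if word in play_record.get("played_first_info", set()):
        return second_info
    return first_info

def replace_out_of_library_words(text, vocabulary_library, replacements):
    """Swap words outside the age-appropriate library for simpler equivalents."""
    for word, simpler in replacements.items():
        if word not in vocabulary_library:
            text = text.replace(word, simpler)
    return text

play_record = {"played_first_info": {"the ancient site A"}}
first_info = "The ancient site A is an ancient building complex."
second_info = "The ancient site A is an ancient building complex; similar sites include ..."

info = choose_information("the ancient site A", play_record, first_info, second_info)
# Generate "third information" better suited to the current usage object's age.
info = replace_out_of_library_words(
    info,
    vocabulary_library={"building", "ancient"},
    replacements={"complex": "group"})
print(info)
```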
S204, when a first voice command input for target information is received and the first voice command indicates to play the target information, responding to the first voice command, acquiring the target information and playing the target information.
After the terminal device outputs the first voice prompt information, the current usage object can respond to it by outputting a first voice instruction. The terminal device can then receive the first voice instruction and, when it indicates that the target information should be played, respond to it by acquiring and playing the target information. The first voice instruction may be a reply made by the current usage object to the first voice prompt information, such as "Please tell the story of 'the ancient site A'" or "Continue exploring the mysteries of 'the ancient site A'".
Optionally, the terminal device may play the target information associated with the target vocabulary in a voice manner, or the terminal device may also display text content corresponding to the target information on a display interface (such as a fourth display interface) of the terminal device in a text manner while playing the target information in a voice manner.
Illustratively, as shown in fig. 3J, fig. 3J illustratively shows a schematic view of a fourth display interface. The fourth display interface in fig. 3J is displayed above the first display interface in a floating manner, which is only used for example and not limiting the application. Alternatively, the fourth display interface may be displayed on the second display interface in a floating manner, which is not limited in this application.
Optionally, when a first voice command input for the target information is received and the first voice command indicates that the target information is not played, the terminal device may respond to the first voice command and play the sentence after the sentence corresponding to the target vocabulary, which is not limited in this application.
For example, referring to fig. 3K, fig. 3K schematically illustrates an interface flowchart corresponding to a method for processing picture information in an interactive novel according to an embodiment of the present application. In fig. 3K, the terminal device is illustrated as displaying the target picture corresponding to the target vocabulary and receiving a first voice instruction that indicates to play the target information; this is only an example and does not limit the application. Optionally, the terminal device in fig. 3K displays the target picture suspended over the first information content, which is likewise only an example and does not limit the application. Alternatively, in fig. 3K the target information associated with the target vocabulary is illustrated as popular-science information, which does not limit this application either.
In the embodiment of the application, the terminal device displays, on the first display interface, the first information content of the target novel, which includes the target vocabulary with a special mark; plays the first information content according to the text sequence corresponding to the first information content; acquires the target picture corresponding to the target vocabulary when the target vocabulary has been played, and displays the target picture on the second display interface; and outputs the first voice prompt information, used for prompting that target information associated with the target vocabulary exists, when the sentence corresponding to the target vocabulary has been played, so that the target information is acquired and played when a first voice instruction input for the target information is received and the first voice instruction indicates to play the target information. This further improves the richness of the first information content including the target vocabulary, enhances the interactivity between the terminal device and the current use object, and is beneficial to improving the performance of the terminal device and saving its energy consumption.
Referring to fig. 4, fig. 4 is a flowchart of a method for processing picture information in an interactive novel according to an embodiment of the present application. The picture information processing method in the interactive novel comprises the following steps:
S401, outputting second voice prompt information when receiving a first reading instruction input for the target novel.
The second voice prompt information can be used to acquire age information of the current use object. That is, when a user reads the target novel for the first time, the user may input a first reading instruction for the target novel; correspondingly, the terminal device can receive the first reading instruction input for the target novel and respond to it by outputting the second voice prompt information, so as to acquire the age information of the user (namely the current use object). The second voice prompt information may be an inquiry sentence about age information, such as "child, how old are you?". Optionally, the second voice prompt information may also be another inquiry sentence capable of acquiring age information, such as "child, which class of kindergarten are you in?", which is not limited in this application.
S402, when a second voice instruction input aiming at the second voice prompt information is received and the second voice instruction indicates age information of a current use object, a special mark version corresponding to the target novel is generated according to the age information of the current use object and an original version corresponding to the target novel.
Wherein the specific identification version corresponding to the target novel may include the first information content.
As can be seen from the foregoing, when the terminal device outputs the second voice prompt information, it may also generate the corresponding text prompt information and display that text prompt information on the display interface of the terminal device, which is not limited in this application.
After the terminal device outputs the second voice prompt information, the user can receive and respond to it by outputting a second voice instruction; correspondingly, the terminal device can receive the second voice instruction and, when it indicates the age information of the current use object, directly generate the special mark version corresponding to the target novel according to the acquired age information and the original version of the target novel. For example, the second voice instruction may be a reply sentence made by the user to the second voice prompt information, such as "I am five years old" or "I am in kindergarten". In this way, the user can interact with the terminal device by voice, which helps to improve the user's quick thinking capability, language expression capability, logical thinking capability and the like.
Alternatively, the second voice instruction may fail to indicate the age information of the current use object; for example, the terminal device may receive a sentence such as "I love stories". In that case, the terminal device may estimate the age information of the current use object from the second voice instruction; for example, the terminal device may estimate that the current use object is about 3 years old based on the received "I love stories". Optionally, the terminal device may set the age information of the current use object to a system default value; alternatively, the terminal device may use the age information obtained when a previous novel was read as the age information of the current use object, which is not limited in this application.
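The fallback chain described above could be sketched roughly as follows; the ordering of the fallbacks and the digit-based age extraction are assumptions made for illustration rather than something the application fixes:

import re

def resolve_age(reply, previous_session_age=None, system_default=3):
    # 1. Use an age stated explicitly in the reply, e.g. "I am 5 years old".
    match = re.search(r"\b(\d{1,2})\b", reply)
    if match:
        return int(match.group(1))
    # 2. Otherwise reuse the age recorded when a previous novel was read.
    if previous_session_age is not None:
        return previous_session_age
    # 3. Otherwise fall back to a system default value.
    return system_default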
Further, after the terminal device determines the age information of the current usage object, the terminal device may generate a special mark version corresponding to the target novel according to the age information and the original version of the target novel. The original version of the target novel may be a version without a special mark, and the version of the target novel with the special mark may be a version with the special mark. It will be appreciated that the first information content described above may be included in the specific mark-up version corresponding to the target novel.
It should be noted that the terminal device may generate different special mark versions of the target novel according to different age information and the original version of the target novel. For example, the terminal device may generate a 3-year-old special mark version of the target novel according to the age information "3 years old" and the original version of the target novel; alternatively, the terminal device may generate a 10.5-year-old special mark version of the target novel according to the age information "10.5 years old" and the original version of the target novel, which is not limited in this application.
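A rough sketch of generating such a special mark version is given below; the double-bracket marker and the simple word splitting are illustrative choices only, since the application does not define the concrete form of the special mark:

def mark_target_vocabulary(original_text, age_vocabulary):
    # Wrap every word that is not in the age-appropriate vocabulary library in a
    # special mark, so that it can later trigger picture lookup and voice prompts.
    marked_words = []
    for word in original_text.split():
        key = word.lower().strip(".,!?")
        if key and key not in age_vocabulary:
            marked_words.append("[[" + word + "]]")
        else:
            marked_words.append(word)
    return " ".join(marked_words)

# mark_target_vocabulary("They found an ancient ruin", {"they", "found", "an"})
# -> "They found an [[ancient]] [[ruin]]"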
S403, displaying the first information content of the target novel on the first display interface.
The first information content may include a target vocabulary with a special tag, where the target vocabulary may be a vocabulary other than a vocabulary included in a vocabulary library corresponding to age information of the current usage object.
S404, playing the first information content according to the text sequence corresponding to the first information content, acquiring a target picture corresponding to the target vocabulary when the target vocabulary in the first information content is played, and displaying the target picture on the second display interface.
Optionally, after the terminal device obtains the target picture corresponding to the target vocabulary, a third voice prompt message may be output, so as to indicate that the target picture exists; and when a third voice command input for the target picture is received and the third voice command indicates to display the target picture, responding to the third voice command and displaying the target picture on the second display interface.
S405, outputting first voice prompt information when the sentences corresponding to the target vocabulary are played.
The first voice prompt information may prompt that target information associated with the target vocabulary exists.
S406, when a first voice command input for the target information is received and the first voice command indicates to play the target information, the target information is acquired and played in response to the first voice command.
S407, optionally, when the terminal device receives the first voice command input for the target information and the first voice command indicates that the target information is not played, playing the sentence after the sentence corresponding to the target vocabulary in response to the first voice command.
It should be noted that, the details of the foregoing S401-S407 may be referred to the detailed description of the corresponding embodiment of fig. 2, and the disclosure is not repeated herein.
In the embodiment of the application, when receiving the first reading instruction input for the target novel, the terminal device outputs the second voice prompt information used for obtaining the age information of the current use object; when receiving a second voice instruction indicating the age information of the current use object, the terminal device can generate, according to that age information and the original version of the target novel, a special mark version of the target novel that includes the first information content. The first information content, which includes the target vocabulary with a special mark, is then displayed on the first display interface and played according to its text sequence; when the target vocabulary has been played, the target picture corresponding to the target vocabulary is acquired and displayed on the second display interface, so that the target vocabulary is presented to the user through the target picture. When the sentence corresponding to the target vocabulary has been played, first voice prompt information is output to prompt that target information associated with the target vocabulary exists, so that when a first voice instruction input for the target information is received and it indicates to play the target information, the target information is acquired and played in response to the first voice instruction. This further improves the richness of the first information content including the target vocabulary, enhances the interactivity between the terminal device and the current use object, is beneficial to improving the performance of the terminal device, and saves the energy consumption of the terminal device.
In addition, in the specific embodiments of the present application, the target novel, the first information content, the target vocabulary, the target picture, the target information and other data involved in running the picture information processing method in the interactive novel are all authorized by the user. When the above embodiments of the present application are applied to a specific product or technology, the data involved require user approval or consent, and the collection, use and processing of the relevant data must comply with relevant laws, regulations and standards.
Further, referring to fig. 5, fig. 5 is a schematic structural diagram of a picture information processing apparatus in an interactive novel according to an embodiment of the present application. As shown in fig. 5, the picture information processing apparatus 500 in the interactive novel may be applied to the terminal device in the embodiment corresponding to fig. 1. Specifically, the picture information processing apparatus 500 in the interactive novel may be a computer program (including program code) running in the terminal device, for example application software; the picture information processing apparatus 500 in the interactive novel may be used to perform the corresponding steps in the methods provided in the embodiments corresponding to fig. 2 and fig. 4.
The interactive novel picture information processing apparatus 500 may include: a display unit 501, a playback unit 502, an acquisition unit 503, and a processing unit 504.
A display unit 501, configured to display, on a first display interface, first information content of a target novel, where the first information content includes a target vocabulary with a special mark, and the target vocabulary is a vocabulary other than a vocabulary included in a vocabulary library corresponding to age information of a current usage object;
a playing unit 502, configured to play the first information content according to a text sequence corresponding to the first information content;
an obtaining unit 503, configured to obtain a target picture corresponding to the target vocabulary when the target vocabulary is played; the display unit 501 is further configured to display a target picture on a second display interface; the target pictures are used for displaying target words;
the processing unit 504 is configured to output a first voice prompt when the sentence corresponding to the target vocabulary is played, where the first voice prompt prompts that target information associated with the target vocabulary exists;
the obtaining unit 503 is further configured to, when a first voice command input for the target information is received and the first voice command indicates to play the target information, obtain the target information in response to the first voice command; the playing unit 502 is further configured to play the target information.
In one implementation, the processing unit 504 is further configured to output, when receiving a first reading instruction input for the target novel, second voice prompt information, where the second voice prompt information is used to obtain age information of the current usage object; the processing unit 504 is further configured to generate, when a second voice command input for the second voice prompt information is received and the second voice command indicates age information of a currently used object, a special mark version corresponding to the target novels according to the age information of the currently used object and an original version corresponding to the target novels, where the special mark version corresponding to the target novels includes the first information content.
In one implementation manner, the obtaining unit 503 is further configured to obtain a target picture corresponding to the target vocabulary according to an association relationship between the plurality of vocabularies and the plurality of pictures; or, the obtaining unit 503 is further configured to invoke a search engine to obtain a target picture corresponding to the target vocabulary.
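The two acquisition paths could be sketched as follows; WORD_TO_PICTURE and search_images are placeholders for whatever association table and search interface the product actually uses, and are not taken from the application:

WORD_TO_PICTURE = {"ancient site A": "pictures/ancient_site_a.png"}

def get_target_picture(word, search_images):
    # First path: look the word up in a prebuilt word-to-picture association.
    if word in WORD_TO_PICTURE:
        return WORD_TO_PICTURE[word]
    # Second path: fall back to invoking a search engine for the word.
    results = search_images(word)
    return results[0] if results else None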
In one implementation, the target information associated with the target vocabulary includes first information and/or second information, the first information being used to introduce the target vocabulary; the second information is used for introducing the target vocabulary and the vocabulary with the association relation with the target vocabulary.
In one implementation, when there is a play record of the first information in the play record, the obtaining unit 503 is further configured to obtain the second information, and the playing unit 502 is further configured to play the second information; alternatively, when there is no play record of the first information in the play records, the obtaining unit 503 is further configured to obtain the first information, and the playing unit 502 is further configured to play the first information.
In one implementation manner, the playing unit 502 is further configured to, when a first voice command input for the target information is received and the first voice command indicates that the target information is not played, respond to the first voice command and play the sentence after the sentence corresponding to the target vocabulary.
In one implementation, when the target vocabulary is played, the obtaining unit 503 is further configured to obtain a target picture corresponding to the target vocabulary, and the processing unit 504 is further configured to output third voice prompt information, where the third voice prompt information is used to indicate that the target picture exists; the display unit 501 is further configured to, when receiving a third voice command input for the target picture and the third voice command indicates to display the target picture, respond to the third voice command and display the target picture on the second display interface.
In one implementation, when the target vocabulary is played, and the display frequency of the target pictures in the display record is lower than the threshold value, the obtaining unit 503 is further configured to obtain the target pictures corresponding to the target vocabulary, and the display unit 501 is further configured to display the target pictures on the second display interface.
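The display-frequency condition could be sketched as a simple counter check; the Counter and the threshold value of 3 are illustrative assumptions:

from collections import Counter

def should_display_picture(word, display_records, threshold=3):
    # Show the target picture only if it has been displayed fewer times than the threshold.
    return display_records[word] < threshold

# Example: with display_records = Counter({"ancient site A": 1}),
# should_display_picture("ancient site A", display_records) returns True.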
According to the embodiments of the present application, the units of the picture information processing apparatus in the interactive novel shown in fig. 5 may be separately or wholly combined into one or several other units, or one (or some) of the units may be further split into multiple functionally smaller units that together achieve the same operation, without affecting the technical effects of the embodiments of the present application. The above units are divided based on logical functions; in practical applications, the function of one unit may be implemented by multiple units, or the functions of multiple units may be implemented by one unit. In other embodiments of the present application, the picture information processing apparatus in the interactive novel may also include other units; in practical applications, these functions may also be implemented with the assistance of other units and through the cooperation of multiple units.
According to the embodiments of the present application, the picture information processing apparatus in the interactive novel shown in fig. 5 may be constructed by running a computer program (including program code) capable of executing the steps of the methods shown in fig. 2 and fig. 4 on a general-purpose computing device, such as a computer including processing elements such as a central processing unit (CPU) and storage elements such as a random access memory (RAM) and a read-only memory (ROM), thereby implementing the picture information processing method in the interactive novel of the embodiments of the present application. The computer program may be recorded on a computer storage medium, loaded into the above computing device via the computer storage medium, and executed therein.
Further, referring to fig. 6, fig. 6 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 600 may also be used to implement the functionality of blockchain nodes in the method embodiments described above. As shown in fig. 6, the computer device 600 may include at least a processor 601, a communication interface 602, and a computer storage medium 603, which may be connected by a bus or in other ways.
The computer storage medium 603 may be stored in a memory 604 of the computer device 600 and is adapted to store a computer program comprising program instructions, while the processor 601 is adapted to execute the program instructions stored in the computer storage medium 603. The processor 601 (or CPU, central processing unit) is the computing core and control core of the computer device 600; it is adapted to implement one or more instructions, and in particular to load and execute the following:
displaying first information content of the target novel on a first display interface; the first information content comprises a target vocabulary with special marks, wherein the target vocabulary is a vocabulary except the vocabulary contained in a vocabulary library corresponding to the age information of the current use object;
playing the first information content according to the text sequence corresponding to the first information content, acquiring target pictures corresponding to the target vocabulary when the target vocabulary is played, and displaying the target pictures on the second display interface; the target pictures are used for displaying target words;
outputting first voice prompt information when sentences corresponding to the target vocabulary are played, wherein the first voice prompt information prompts that target information related to the target vocabulary exists;
When a first voice command input for target information is received and the first voice command indicates to play the target information, responding to the first voice command, acquiring the target information and playing the target information.
In one implementation, the processor 601 is further configured to output a second voice prompt when receiving a first reading instruction input for the target novel, where the second voice prompt is used to obtain age information of the current usage object; when a second voice instruction input aiming at the second voice prompt information is received and indicates age information of a currently used object, a special mark version corresponding to a target novel is generated according to the age information of the currently used object and an original version corresponding to the target novel, and the special mark version corresponding to the target novel comprises first information content.
In one implementation, the processor 601 is further configured to obtain a target picture corresponding to the target vocabulary according to an association relationship between the plurality of vocabularies and the plurality of pictures; or, calling a search engine to obtain a target picture corresponding to the target vocabulary.
In one implementation, the target information associated with the target vocabulary includes first information and/or second information, the first information being used to introduce the target vocabulary; the second information is used for introducing the target vocabulary and the vocabulary with the association relation with the target vocabulary.
In one implementation, the processor 601 is further configured to obtain the second information and play the second information when there is a play record of the first information in the play record; or when the play record of the first information does not exist in the play record, the first information is acquired, and the first information is played.
In one implementation, the processor 601 is further configured to, when a first voice command input for the target information is received and the first voice command indicates that the target information is not played, play a sentence after the sentence corresponding to the target vocabulary in response to the first voice command.
In one implementation, the processor 601 is further configured to obtain a target picture corresponding to the target vocabulary when the target vocabulary is played, and output third voice prompt information, where the third voice prompt information is used to indicate that the target picture exists; and when a third voice command input for the target picture is received and the third voice command indicates to display the target picture, responding to the third voice command and displaying the target picture on the second display interface.
In one implementation manner, the processor 601 is further configured to obtain a target picture corresponding to the target vocabulary when the target vocabulary is played and the display frequency of the target picture in the display record is lower than a threshold value, and display the target picture on the second display interface.
It should be understood that the computer device 600 described in the embodiment of the present application may perform the picture information processing method in the interactive novel described in the embodiments corresponding to fig. 2 and fig. 4, as well as the functions of the picture information processing apparatus 500 in the interactive novel described in the embodiment corresponding to fig. 5, which will not be repeated here. In addition, the description of the beneficial effects of the same method is omitted.
Furthermore, it should be noted here that: the embodiment of the present application further provides a computer storage medium, in which a computer program executed by the aforementioned interactive novel picture information processing apparatus 500 is stored, and the computer program includes program instructions, when executed by a processor, can execute the description of the interactive novel picture information processing method in the corresponding embodiment of fig. 2 and fig. 4, and therefore, a detailed description will not be given here. In addition, the description of the beneficial effects of the same method is omitted. For technical details not disclosed in the embodiments of the computer storage medium related to the present application, please refer to the description of the method embodiments of the present application. As an example, the program instructions may be deployed to be executed on one computer device or on multiple computer devices at one site or, alternatively, distributed across multiple sites and interconnected by a communication network, where the multiple computer devices distributed across multiple sites and interconnected by a communication network may be combined into a blockchain network.
According to one aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the computer device can perform the method in the embodiment corresponding to fig. 2 and fig. 4, which will not be described herein.
Those of ordinary skill in the art will appreciate that the elements and steps of the examples described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer storage medium or transmitted from one computer storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer storage medium may be any available medium that a computer can access, or a data storage device such as a server or a data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
The foregoing is merely specific embodiments of the present application, but the protection scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present disclosure, and all changes and substitutions are intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. The picture information processing method in the interaction novel is characterized by being applied to an intelligent watch of an interaction novel service system, wherein the interaction novel service system comprises a server and the intelligent watch, and the server is in communication connection with the intelligent watch; the method comprises the following steps:
the intelligent watch interacts with the server to display first information content of a target novel on a first display interface; the first information content is displayed in the first display interface in the form of a voice information strip, and characters corresponding to the first information content are not displayed on the first display interface; the first information content comprises a target vocabulary with special marks, wherein the target vocabulary is a vocabulary except the vocabulary contained in a vocabulary library corresponding to age information of a current use object;
Playing the first information content according to the text sequence corresponding to the first information content, acquiring a target picture corresponding to the target vocabulary when the target vocabulary is played, and displaying the target picture on a second display interface; the target pictures are used for displaying the target vocabulary;
outputting first voice prompt information when sentences corresponding to the target vocabulary are played, wherein the first voice prompt information prompts that target information related to the target vocabulary exists; the target information associated with the target vocabulary comprises first information and/or second information; the first information is used for introducing the target vocabulary, and the second information is used for introducing the target picture;
when a first voice command input for the target information is received and the first voice command indicates to play the target information, responding to the first voice command, interacting with the server to acquire the target information and playing the target information so as to improve the capability of the child user.
2. The method according to claim 1, wherein the method further comprises:
outputting second voice prompt information when receiving a first reading instruction input aiming at the target novel, wherein the second voice prompt information is used for acquiring age information of the current use object;
And when a second voice instruction input aiming at the second voice prompt information is received and the second voice instruction indicates the age information of the current use object, generating a special mark version corresponding to the target novel according to the age information of the current use object and the original version corresponding to the target novel, wherein the special mark version corresponding to the target novel comprises the first information content.
3. The method of claim 1, wherein the obtaining the target picture corresponding to the target vocabulary comprises:
acquiring a target picture corresponding to the target vocabulary according to the association relation between the vocabularies and the pictures;
or, calling a search engine to obtain a target picture corresponding to the target vocabulary.
4. The method of claim 1, wherein the obtaining the target information and playing the target information comprises:
when the play record of the first information exists in the play record, acquiring the second information, and playing the second information;
or when the play record of the first information does not exist in the play record, acquiring the first information, and playing the first information.
5. The method according to claim 1, wherein the method further comprises:
and when a first voice command input for the target information is received and the first voice command indicates that the target information is not played, responding to the first voice command, and playing sentences after the sentences corresponding to the target vocabulary.
6. The method of claim 1, wherein when the target vocabulary is played, obtaining a target picture corresponding to the target vocabulary, and displaying the target picture on a second display interface, includes:
when the target vocabulary is played, obtaining a target picture corresponding to the target vocabulary, and outputting third voice prompt information, wherein the third voice prompt information is used for indicating that the target picture exists;
and when a third voice command input for the target picture is received and the third voice command indicates to display the target picture, responding to the third voice command and displaying the target picture on a second display interface.
7. The method of claim 1, wherein when the target vocabulary is played, obtaining a target picture corresponding to the target vocabulary, and displaying the target picture on a second display interface, includes:
And when the target vocabulary is played and the display frequency of the target pictures in the display record is lower than a threshold value, acquiring the target pictures corresponding to the target vocabulary, and displaying the target pictures on a second display interface.
8. The interactive novel picture information processing device is characterized by being applied to an interactive novel service system, wherein the interactive novel service system comprises a server and the interactive novel picture information processing device, and the server is in communication connection with the interactive novel picture information processing device; the picture information processing device in the interactive novel is an intelligent watch; the picture information processing device in the interaction novel comprises:
the display module is used for displaying first information content of the target novel on the first display interface; the first information content is displayed in the first display interface in the form of a voice information strip, and characters corresponding to the first information content are not displayed on the first display interface; the first information content comprises a target vocabulary with special marks, wherein the target vocabulary is a vocabulary except the vocabulary contained in a vocabulary library corresponding to age information of a current use object;
the playing module is used for playing the first information content according to the text sequence corresponding to the first information content;
The acquisition module is used for acquiring a target picture corresponding to the target vocabulary when the target vocabulary is played; the display module is further used for displaying the target picture on a second display interface; the target pictures are used for displaying the target vocabulary;
the processing module is used for outputting first voice prompt information when the sentences corresponding to the target vocabulary are played, wherein the first voice prompt information prompts that target information related to the target vocabulary exists; the target information associated with the target vocabulary comprises first information and/or second information; the first information is used for introducing the target vocabulary, and the second information is used for introducing the target picture;
the acquisition module is further used for responding to the first voice instruction to acquire the target information when the first voice instruction input for the target information is received and the first voice instruction indicates to play the target information; the playing module is also used for playing the target information.
9. A computer device comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, causes the processor to perform the method of any of claims 1-7.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause a computer device having the processor to perform the method of any of claims 1-7.
CN202310311208.4A 2023-03-28 2023-03-28 Picture information processing method and device in interactive novel Active CN116027946B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310311208.4A CN116027946B (en) 2023-03-28 2023-03-28 Picture information processing method and device in interactive novel

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310311208.4A CN116027946B (en) 2023-03-28 2023-03-28 Picture information processing method and device in interactive novel

Publications (2)

Publication Number Publication Date
CN116027946A (en) 2023-04-28
CN116027946B (en) 2023-07-18

Family

ID=86089626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310311208.4A Active CN116027946B (en) 2023-03-28 2023-03-28 Picture information processing method and device in interactive novel

Country Status (1)

Country Link
CN (1) CN116027946B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107112014A (en) * 2014-12-19 2017-08-29 亚马逊技术股份有限公司 Application foci in voice-based system
CN108776681A (en) * 2018-06-01 2018-11-09 广东小天才科技有限公司 It is a kind of based on phonetic search new word lexicography habit consolidate method and electronic equipment
CN109062944A (en) * 2018-06-21 2018-12-21 广东小天才科技有限公司 A kind of new word word based on phonetic search consolidates method and electronic equipment
CN112489619A (en) * 2020-11-24 2021-03-12 上海传英信息技术有限公司 Voice processing method, terminal device and storage medium
CN113760142A (en) * 2020-09-30 2021-12-07 完美鲲鹏(北京)动漫科技有限公司 Interaction method and device based on virtual role, storage medium and computer equipment

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100233662A1 (en) * 2009-03-11 2010-09-16 The Speech Institute, Llc Method for treating autism spectrum disorders
CN102479237B (en) * 2010-11-30 2014-11-26 成都致远诺亚舟教育科技有限公司 Word associated search and study method and system
US10140292B2 (en) * 2014-08-14 2018-11-27 Avaz, Inc. Device and computerized method for picture based communication
CN105022487A (en) * 2015-07-20 2015-11-04 北京易讯理想科技有限公司 Reading method and apparatus based on augmented reality
CN108549520B (en) * 2018-04-28 2021-11-12 杭州悠书网络科技有限公司 Searching method for current reading content
US11361760B2 (en) * 2018-12-13 2022-06-14 Learning Squared, Inc. Variable-speed phonetic pronunciation machine
CN109710748B (en) * 2019-01-17 2021-04-27 北京光年无限科技有限公司 Intelligent robot-oriented picture book reading interaction method and system
CN110060524A (en) * 2019-04-30 2019-07-26 广东小天才科技有限公司 The method and reading machine people that a kind of robot assisted is read
CN110473436A (en) * 2019-09-09 2019-11-19 邸心洋 A kind of reading assisted learning equipment
US20220093000A1 (en) * 2020-02-29 2022-03-24 Embodied, Inc. Systems and methods for multimodal book reading
CN113361518A (en) * 2021-06-29 2021-09-07 读书郎教育科技有限公司 Method and device for quickly fetching words and searching
CN113610680A (en) * 2021-08-17 2021-11-05 山西传世科技有限公司 AI-based interactive reading material personalized recommendation method and system
CN115220608B (en) * 2022-09-20 2022-12-20 深圳市人马互动科技有限公司 Method and device for processing multimedia data in interactive novel
CN115292543B (en) * 2022-10-10 2022-12-30 深圳市人马互动科技有限公司 Data processing method based on voice interaction novel and related product


Also Published As

Publication number Publication date
CN116027946A (en) 2023-04-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant