CN110989889A - Information display method, information display device and electronic equipment - Google Patents


Info

Publication number
CN110989889A
CN110989889A
Authority
CN
China
Prior art keywords
information
displayed
display
scene
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911329356.9A
Other languages
Chinese (zh)
Inventor
于晨晨
符博
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201911329356.9A
Publication of CN110989889A
Status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides an information display method, including: receiving an information display instruction directed at information to be displayed; acquiring scene information in response to the information display instruction; determining, based on the scene information, the form in which the information is to be displayed, wherein the file format of the information to be displayed corresponds to different display forms in different scenes; and displaying the information in the determined form. The disclosure also provides an information display device and an electronic device.

Description

Information display method, information display device and electronic equipment
Technical Field
The disclosure relates to an information display method, an information display device and an electronic device.
Background
With the rapid development of automatic control, communication, and computer technologies, increasing attention is being paid to the convenience and interaction effect of information exchange between users and between humans and machines.
In the course of realizing the concept of the present disclosure, the inventors found that the prior art has at least the following problem: the convenience and effect of a user's information interaction cannot meet the user's requirements.
Disclosure of Invention
One aspect of the present disclosure provides an information display method for improving the convenience and effect of information interaction, including: receiving an information display instruction directed at information to be displayed; acquiring scene information in response to the information display instruction; determining, based on the scene information, the form in which the information is to be displayed, wherein the file format of the information to be displayed corresponds to different display forms in different scenes; and displaying the information in the determined form.
According to the information display method provided by the embodiments of the present disclosure, different display forms for the information to be displayed are determined according to different scenes. This effectively improves the convenience of user interaction, and the variety of display forms makes information interaction more engaging, thereby improving the interaction effect.
Optionally, displaying the information to be displayed based on the form to be displayed may include the following operations: determining, from an information file to be displayed, the information corresponding to the form to be displayed, wherein the information file includes information in multiple display forms for the information to be displayed; and displaying the information corresponding to the form to be displayed.
Optionally, displaying the information to be displayed based on the form to be displayed may include the following operations: generating, for the information to be displayed, information in the form to be displayed; and displaying the generated information.
Optionally, generating the information in the form to be displayed may include the following operation: generating the information in the form to be displayed based on at least one of speech synthesis, image synthesis, speech recognition, and semantic understanding.
Optionally, the to-be-displayed form includes a text form, an image form and a voice form; the scene information comprises environmental information; and the determining the to-be-displayed form of the to-be-displayed information based on the scene information may include the following operations: and determining the to-be-displayed form of the to-be-displayed information based on the current environment information.
Optionally, the context information includes user attribute information; and the determining the to-be-displayed form of the to-be-displayed information based on the scene information may include the following operations: and determining the to-be-displayed form of the to-be-displayed information based on the current user attribute information.
Optionally, the method may further comprise the operations of: after the to-be-displayed form of the to-be-displayed information is determined based on the scene information, if a plurality of selectable to-be-displayed forms are determined to exist, displaying the plurality of selectable to-be-displayed forms; receiving user operation, wherein the user operation is specific to the plurality of selectable forms to be displayed; and responding to the user operation, and determining a display form from the plurality of selectable forms to be displayed. Correspondingly, the displaying the information to be displayed based on the form to be displayed may include the following operations: and displaying the information to be displayed based on the display form.
Another aspect of the present disclosure provides an information display apparatus, including an information display instruction receiving module, a scene information obtaining module, a to-be-displayed form determining module, and an information display module. The information display instruction receiving module is used for receiving an information display instruction, and the information display instruction is specific to information to be displayed; the scene information acquisition module is used for responding to the information display instruction and acquiring scene information; the to-be-displayed form determining module is used for determining the to-be-displayed form of the to-be-displayed information based on the scene information, wherein the file format of the to-be-displayed information corresponds to different display forms under different scenes; and the information display module is used for displaying the information to be displayed based on the form to be displayed.
Optionally, the information display module includes a display information determining unit and a first display unit. The display information determining unit is used for determining, from the information file to be displayed, the information corresponding to the form to be displayed, wherein the information file includes information in multiple display forms for the information to be displayed; and the first display unit is used for displaying the information corresponding to the form to be displayed.
Optionally, the information display module includes a display information generating unit and a second display unit. The display information generating unit is used for generating, for the information to be displayed, information in the form to be displayed; and the second display unit is used for displaying the generated information.
Optionally, the display information generating unit is specifically configured to generate the information in the form to be displayed based on at least one of speech synthesis, image synthesis, speech recognition, and semantic understanding.
Optionally, the to-be-displayed form includes a text form, an image form and a voice form; the scene information includes environmental information. Correspondingly, the to-be-displayed form determining module is specifically configured to determine the to-be-displayed form of the to-be-displayed information based on the current environment information.
Optionally, the context information includes user attribute information. Correspondingly, the to-be-displayed form determining module is specifically configured to determine the to-be-displayed form of the to-be-displayed information based on the current user attribute information.
Optionally, the apparatus further comprises: the device comprises a form display module, an operation receiving module and a display form determining module. The form display module is used for displaying a plurality of selectable forms to be displayed if the plurality of selectable forms to be displayed exist; the operation receiving module is used for receiving user operation, and the user operation is specific to the plurality of selectable forms to be displayed; the display form determining module is used for responding to the user operation and determining a display form from the plurality of selectable forms to be displayed; and the information display module is specifically used for displaying the information to be displayed based on the display form.
Another aspect of the present disclosure provides an electronic device including one or more processors and a computer-readable storage medium for storing one or more computer programs which, when executed by the processors, implement the method described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as described above when executed.
Another aspect of the disclosure provides a computer program comprising computer executable instructions for implementing the method as described above when executed.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
fig. 1 schematically illustrates an application scenario of an information presentation method, an information presentation apparatus, and an electronic device according to an embodiment of the present disclosure;
fig. 2 schematically shows a system architecture of an applicable information presentation method, information presentation apparatus, and electronic device according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow chart of an information presentation method according to an embodiment of the present disclosure;
fig. 4 schematically illustrates a schematic diagram of an information presentation method for a noisy call scenario according to an embodiment of the present disclosure;
fig. 5 schematically illustrates a schematic diagram of an information presentation method of a child learning scenario according to an embodiment of the present disclosure;
FIG. 6 schematically illustrates an information presentation method for a scenario of receiving an incoming call while working, according to an embodiment of the present disclosure;
fig. 7 schematically illustrates a schematic diagram of an information presentation method for reading a text information scene for a comic fan according to an embodiment of the present disclosure;
FIG. 8 schematically illustrates a flow chart of an information presentation method according to another embodiment of the present disclosure;
FIG. 9 schematically illustrates a block diagram of an information presentation device according to an embodiment of the present disclosure; and
FIG. 10 schematically shows a block diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks. The techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon for use by or in connection with an instruction execution system.
The embodiment of the disclosure provides an information display method, an information display device and electronic equipment. The method comprises a display form determining process and a display process. In the display form determining process, receiving an information display instruction, wherein the information display instruction is specific to information to be displayed, then responding to the information display instruction, acquiring scene information, and then determining the form to be displayed of the information to be displayed based on the scene information, wherein the file format of the information to be displayed corresponds to different display forms under different scenes. And after the display form determining process is finished, entering a display process, and displaying the information to be displayed based on the form to be displayed.
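As a rough illustration, the two-stage flow described above (a display-form determining process followed by a display process) can be sketched as follows. This is a minimal sketch only; the scene names, form names, and the mapping between them are assumptions made for the example and are not taken from the disclosure.

```python
# Hypothetical scene-to-form mapping; the disclosure leaves these choices open.
SCENE_TO_FORM = {
    "noisy": "text",       # avoid audio the user could not hear clearly
    "child": "animation",  # improve a child's acceptance of the information
    "low_light": "voice",
    "default": "text",
}


def determine_display_form(scene: str) -> str:
    """Stage 1: map the acquired scene information to a form to be displayed."""
    return SCENE_TO_FORM.get(scene, SCENE_TO_FORM["default"])


def present(info: str, scene: str) -> str:
    """Stage 2: display the information based on the determined form."""
    form = determine_display_form(scene)
    return f"[{form}] {info}"
```

For instance, `present("hello", "child")` would render the message in the animation form under this assumed mapping.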
Fig. 1 schematically shows an application scenario of an information presentation method, an information presentation apparatus, and an electronic device according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a scenario in which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, but does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the file format of the information to be displayed corresponds to different display forms in different scenes, such as a document display form, a presentation display form, a music display form, a cartoon display form, an animation display form, and the like.
The embodiments of the disclosure automatically switch the information expression mode (such as voice, text, or pictures) according to the scene. Specifically, the presentation form of the file to which the information to be displayed belongs can be changed automatically according to the scene. For example, in a noisy scene, the information can be displayed as text, pictures, and the like instead of sound, sparing the user the trouble of being unable to hear clearly in a noisy environment. For another example, if the audience for the information includes a child, the information can be played as an animation to improve the child's acceptance of it. For another example, if the current user is determined to be a comic fan, corresponding comic information either already exists in the information to be displayed or is generated automatically, so that the information is displayed in comic form and the user experience is improved.
Fig. 2 schematically shows a system architecture of an applicable information presentation method, information presentation apparatus, and electronic device according to an embodiment of the present disclosure.
As shown in fig. 2, the system architecture 200 according to this embodiment may include terminal devices 201, 202, 203, a network 204 and a server 205. The network 204 may include a plurality of gateways, routers, hubs, network wires, etc. to provide a medium for communication links between the end devices 201, 202, 203 and the server 205. Network 204 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user can use the terminal devices 201, 202 and 203 to interact with other terminal devices and the server 205 through the network 204 to receive or send information and the like, such as making a voice call, chatting, sending a short message, sending an information request, receiving a processing result and the like. The terminal devices 201, 202, 203 may be installed with various communication client applications, such as a bank application, a business development application, a monitoring application, a web browser application, a search application, an office application, an instant messaging tool, a mailbox client, social platform software, and the like (for example only).
The terminal devices 201, 202, 203 include, but are not limited to, smart phones, virtual reality devices, augmented reality devices, tablets, laptop computers, and the like.
The server 205 may receive and process requests. For example, the server 205 may be a background management server or a server cluster. The background management server may analyze a received information request instruction and feed back a processing result (such as the requested information, for example, voice information generated from text, animation information, and the like) to the terminal device.
It should be noted that the method provided by the embodiments of the present disclosure may generally be executed by the terminal devices 201, 202, and 203. Accordingly, the apparatus provided by the embodiments of the present disclosure may generally be disposed in the server 205. The method provided by the embodiments of the present disclosure may also be executed by a terminal device, different from the server 205, that is capable of communicating with the terminal devices 201, 202, 203 and/or the server 205.
It should be understood that the number of terminal devices, networks, and servers are merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 3 schematically shows a flow chart of an information presentation method according to an embodiment of the present disclosure.
As shown in fig. 3, the method includes operations S301 to S307.
In operation S301, an information presentation instruction for information to be presented is received.
In this embodiment, the information presentation instruction may be input by the user, or may be an instruction determined by the electronic device according to a specific flow.
For example, the information display instruction for viewing the information to be displayed is input by the user, such as the user double-clicking to open a file to view the information to be displayed, the user touching a component displayed on the touch screen to view the information to be displayed, and the like.
For another example, in response to receiving information to be displayed, such as a voice signal, text information, and the like, the information display instruction is triggered according to a preset flow (for example, when the information to be displayed is received, a flow of displaying the information is performed).
Wherein, the information to be displayed includes but is not limited to: text information, voice information, image information, video information, and the like.
In operation S303, scene information is acquired in response to the information presentation instruction.
In this embodiment, the scene information may be acquired by various sensors, obtained through user input, or determined from received information.
For example, sound information is collected through a microphone to determine whether the current scene is a noisy scene, light intensity information is collected through a photoelectric sensor to determine whether the current scene is a bright light scene or a weak light scene, image information is collected through a camera to determine whether the current scene is a child use scene, and the like.
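A minimal sketch of this kind of sensor-based scene classification follows. The threshold values (70 dB for "noisy", 10 lux for "low light") are assumptions made for the example; the disclosure does not specify any thresholds.

```python
def classify_scene(decibels: float, lux: float) -> str:
    """Infer a scene label from microphone and light-sensor readings.

    Thresholds are illustrative assumptions, not values from the disclosure.
    """
    if decibels > 70.0:   # assumed cutoff for a noisy environment
        return "noisy"
    if lux < 10.0:        # assumed cutoff for a low-light environment
        return "low_light"
    return "normal"
```

A real implementation would read these values from platform sensor APIs and would likely smooth them over a time window rather than classifying a single sample.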
For another example, when the information to be displayed or an information display instruction is received, prompt information is output to prompt the user to input the scene information; for example, the scene information may be obtained from the state of a control in the human-computer interaction interface.
Also for example, the scene information is received through a network, such as the internet, a local area network, etc.
The scene information can represent the current scene, so that the to-be-displayed form suitable for the user is determined based on the scene information.
In operation S305, a to-be-presented form of the to-be-presented information is determined based on the scene information, wherein a file format of the to-be-presented information corresponds to different presentation forms in different scenes.
In this embodiment, the form to be presented that is adapted to the current scene information may be determined based on a mapping relationship. For example, a scene in which a child views the information may adopt an animation form; a bright-light scene may adopt a voice form; a low-light scene may adopt a voice form together with a dimmed text form; a low-battery scene may adopt a low-brightness, high-contrast form; a meeting scene may adopt a silent form such as text or images; a working scene may adopt a brief text-message form; and a comic-fan scene may adopt a comic form.
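Such a mapping relationship can be sketched as a simple lookup in which a scene may map to several candidate forms; when more than one candidate exists, a user choice resolves the ambiguity, matching the optional selection flow described in the disclosure. All mapping entries below are illustrative assumptions.

```python
# Hypothetical mapping from scene to candidate display forms.
CANDIDATES = {
    "low_light": ["voice", "dim_text"],
    "meeting":   ["text", "image"],
    "child":     ["animation"],
}


def candidate_forms(scene: str) -> list:
    """Return all forms suited to the scene, falling back to text."""
    return CANDIDATES.get(scene, ["text"])


def choose_form(scene: str, user_choice: int = 0) -> str:
    """Pick one form; the user_choice index is consulted only when
    several candidate forms are available."""
    forms = candidate_forms(scene)
    return forms[user_choice] if len(forms) > 1 else forms[0]
```

For a meeting scene, the two silent candidates would be displayed to the user, whose selection (index 0 or 1) determines the final form.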
In one embodiment, the forms to be presented include text forms, image forms, and voice forms. The scene information includes environmental information.
Correspondingly, the determining the to-be-presented form of the to-be-presented information based on the scene information may include the following operations: and determining the to-be-displayed form of the to-be-displayed information based on the current environment information.
For example, when the decibel level of the current environment exceeds a specified value, the text form and/or the image form is used as the form to be presented.
In another embodiment, the context information includes user attribute information. The user attribute information includes but is not limited to: age, gender, hobbies, timbre, current character attributes, occupation, etc.
Correspondingly, the determining the to-be-presented form of the to-be-presented information based on the scene information may include: and determining the to-be-displayed form of the to-be-displayed information based on the current user attribute information.
For example, the form to be presented is determined based on at least one of age, gender, hobbies, timbre, current role, occupation, and the like. For instance, when a client used by a child receives the information to be displayed, an animation form or a child-voice broadcast form may be determined as the form to be presented. For another example, when the user's current role is a parent, the collected voice information may be presented to the child directly; and when the user's current role requires protecting the user's privacy, voice information synthesized with a designated timbre may be presented to the other party in the interaction.
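A hedged sketch of attribute-based selection follows; the age cutoff and attribute names are assumptions for the example, since the disclosure names the attribute categories but not any concrete rules.

```python
def form_for_user(age: int, hobby: str = "") -> str:
    """Choose a display form from user attributes.

    The age-12 cutoff and the 'comics' hobby tag are illustrative
    assumptions, not rules stated in the disclosure.
    """
    if age < 12:
        return "animation"   # child user: play as animation
    if hobby == "comics":
        return "comic"       # comic fan: show as comic panels
    return "text"
```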
In operation S307, the information to be shown is shown based on the form to be shown.
In this embodiment, information display can be performed through a terminal of a user, such as displaying image information, animation information, and the like through a display screen, playing voice information, sound information, and the like through a speaker, and playing video and audio information and the like through a display screen and a speaker.
In one embodiment, information (such as audio information, text information, image information, or the like) adapted to the current scene may be directly obtained from a file of information to be presented for presentation. Correspondingly, the presenting the information to be presented based on the form to be presented may include the following operations.
Firstly, determining information corresponding to the to-be-displayed form from the to-be-displayed information file based on the to-be-displayed form, wherein the to-be-displayed information file comprises information of multiple display forms aiming at the to-be-displayed information. And then, displaying the information corresponding to the to-be-displayed form.
Specifically, the information to be presented received by the client is a file with a specific format, where the file includes multiple files corresponding to the information to be presented, such as a text file, an audio file (which may have multiple audio files with different timbres), an image file, an animation file, a video file, and the like, and each file corresponds to at least one piece of scene information. Therefore, the file matched with the current scene information can be directly selected from the files in the specific format for displaying based on the current scene information. The method has the advantage of high response speed.
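The multi-form file described above can be sketched as a container in which one logical message carries several ready-made representations, each tagged with the scenes it suits, so the client can select one without any synthesis step. The field names and payloads below are assumptions for the sketch.

```python
# Hypothetical multi-form information file: each representation lists the
# scenes it matches, mirroring "each file corresponds to at least one piece
# of scene information" in the disclosure.
info_file = {
    "text":  {"scenes": {"noisy", "meeting"}, "payload": "Meeting at 3 pm"},
    "audio": {"scenes": {"driving"},          "payload": b"<pcm bytes>"},
}


def select_representation(info_file: dict, scene: str):
    """Pick the representation tagged with the current scene,
    falling back to the text representation."""
    for form, entry in info_file.items():
        if scene in entry["scenes"]:
            return form, entry["payload"]
    return "text", info_file["text"]["payload"]
```

Because the matching representation is pre-built, this path has the fast response the disclosure attributes to it.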
In another embodiment, information adapted to the current scene information may be generated by the client or the request server based on the received information to be presented, so as to facilitate presentation based on the information to be presented. Correspondingly, the presenting the information to be presented based on the form to be presented may include the following operations.
First, information in the form to be presented is generated for the information to be presented; then, the generated information is presented.
Specifically, generating the information in the form to be presented includes: generating it based on at least one of speech synthesis, image synthesis, speech recognition, and semantic understanding.
In one embodiment, the information adapted to the current scene information may be generated by the client using technical means such as artificial intelligence. For example, synthesized speech corresponding to the text to be presented may be generated with a particular timbre through speech synthesis. For another example, text corresponding to voice information may be obtained through speech recognition. For another example, image information corresponding to the text to be presented may be generated through image synthesis. As yet another example, a comic may be synthesized based on particular comic style characteristics and the text to be presented.
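The generation step can be sketched as a dispatch from a (source form, target form) pair to a synthesis function. The converter functions here are stubs standing in for real TTS/ASR/image-synthesis engines, which the disclosure does not name; in practice each would call an actual model or platform API.

```python
# Stub converters; real implementations would invoke TTS/ASR engines.
def tts(text: str) -> str:
    return f"<speech of '{text}'>"


def asr(audio: str) -> str:
    return f"<transcript of {audio}>"


# Hypothetical dispatch table from (source form, target form) to a converter.
CONVERTERS = {("text", "voice"): tts, ("voice", "text"): asr}


def generate(payload: str, source: str, target: str) -> str:
    """Generate information in the target form from the source form."""
    if source == target:
        return payload
    return CONVERTERS[(source, target)](payload)
```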
In another embodiment, semantic information can be obtained by performing semantic understanding on the text to be presented, and a search (local or online) can then be performed to find information that matches both the semantic information and the form to be presented. For example, when the user is a fan of the Detective Conan comic and receives information to be presented whose semantic meaning is "there is only one correct answer", the network can be searched for the related comic, which contains a comic image meaning "there is only one truth", so that the comic image can be presented to the fan to improve the interaction effect.
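A hedged sketch of this semantic-matching step follows. The index of images tagged with meanings is an illustrative stand-in; a real system would combine semantic understanding with a local or online search, which the disclosure does not detail.

```python
# Sketch: map the understood semantics of the text to a tagged image.
# COMIC_INDEX is an assumed local index, not part of the disclosure.

COMIC_INDEX = {
    "there is only one truth": "conan_panel.png",
    "eat more": "eating_scene.gif",
}

def find_matching_image(semantics):
    """Return an image whose tagged meaning overlaps the understood
    semantics, or None when nothing in the index matches."""
    for meaning, image in COMIC_INDEX.items():
        if meaning in semantics or semantics in meaning:
            return image
    return None

hit = find_matching_image("there is only one truth")
miss = find_matching_image("nothing relevant here")
```

The substring test is deliberately naive; the point is only the shape of the lookup, i.e. semantics in, form-matched content out.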
Fig. 4 schematically shows a schematic diagram of an information presentation method for a noisy call scene according to an embodiment of the present disclosure.
As illustrated in fig. 4, in this scenario, the following operations may be included.
The client of user B receives a call connection request. In response to the received request, the available call modes are displayed; these may include preset modes (such as a normal answering mode, a text answering mode, and the like). A first user operation is then received; the first user operation is directed at the call modes and determines which mode is adopted. For example, in a noisy scene, the text answering mode may be selected. In response to a preset mode being selected as the current call mode, a call channel is established (this may be the same kind of channel as in the prior art, such as a channel based on 2G, 3G, or 4G communications, or a channel using the communication protocol of chat software such as WeChat). Voice information is then received or sent over the call channel. The voice information is generated based on the user's short voice messages or on text input by the user.
As shown in fig. 4, after a call channel is established between user A and user B, user A sends the voice message "Where shall we meet?". The client of user B presents this voice message and also displays the corresponding text, so that user A's voice is not misheard because of the noisy environment. User B then enters the text "How about Guomao?"; either the voice synthesized from that text is broadcast to user A through the call channel, or a voice message "How about Guomao?" recorded by user B is broadcast through the channel, or user B simply says "How about Guomao?" directly. In this way, the interaction effect between users can be effectively improved.
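The text-answering path in fig. 4 can be sketched as a small pipeline. `synthesize` and `broadcast` here are placeholder callbacks standing in for a real speech-synthesis engine and the telephony layer, neither of which the disclosure specifies.

```python
# Sketch of the fig. 4 text-reply path: typed text is synthesized into
# audio, then the audio is broadcast over the established call channel.

def text_reply_to_voice(text, synthesize, broadcast):
    """Convert user B's typed reply into audio and send it to user A."""
    audio = synthesize(text)   # e.g. TTS with a chosen timbre
    broadcast(audio)           # play into the established call channel
    return audio

# Usage with stub callbacks that just record what was sent.
sent = []
text_reply_to_voice("How about Guomao?",
                    synthesize=lambda t: ("pcm", t),
                    broadcast=sent.append)
```

Passing the synthesizer and channel as callables keeps the flow testable and leaves the timbre choice (see below in the text) to the caller.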
It should be noted that, when synthesizing the voice information, the synthesis may use a timbre selected by the user or the user's own timbre, so as to further enhance the interest of the interaction or reduce the risk of privacy leakage.
In one embodiment, receiving or sending voice information over the call channel comprises: first receiving the user's voice information and/or text input by the user, then generating the voice information to be broadcast corresponding to that voice and/or text, and then broadcasting the voice information to be broadcast over the call channel.
In another embodiment, receiving or sending voice information over the call channel may include the following operations. A voice signal is received from the other party of the call, and first associated information of the voice signal is generated, the first associated information comprising at least one of: voice information corresponding to the voice signal, text information corresponding to the voice signal, identification information of that voice information, and identification information of that text information. The first associated information is then displayed. Next, a second user operation is received; the second user operation generates reply information for the first associated information, the reply information comprising voice information and/or text information. The voice information to be broadcast corresponding to the reply information is then obtained and broadcast over the call channel.
Furthermore, the voice information to be broadcast corresponding to the text information may be generated based on timbre information. For example, the category of the requester of the call connection request is determined, the categories including a privacy-preserving category and a general category. For the privacy-preserving category, the voice information to be broadcast is generated from the text using a timbre different from the user's own; for the general category, it is generated from the text using the user's own timbre. In this way, the security of the user's private information can be effectively improved.
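The category-to-timbre rule above amounts to a small dispatch, sketched below. The timbre identifiers and category labels are assumptions for illustration.

```python
# Sketch: choose the synthesis timbre by caller category, so that
# privacy-preserving callers (e.g. strangers) never hear the user's
# real voice. Identifiers are illustrative, not from the disclosure.

USER_TIMBRE = "user_own_voice"       # the user's pre-recorded timbre
NEUTRAL_TIMBRE = "preset_neutral"    # a preset timbre unlike the user's

def pick_timbre(caller_category):
    """Return the timbre id to use for speech synthesis."""
    if caller_category == "privacy_preserving":
        return NEUTRAL_TIMBRE
    return USER_TIMBRE
```

Keeping the rule in one function makes it easy to extend, for example with per-contact timbre preferences.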
In another embodiment, after the first associated information is displayed, a timer is started to obtain a waiting duration; if the waiting duration exceeds a preset threshold, preset voice information is broadcast. When the wait grows long, a prompt message is automatically broadcast to the other party of the call, which helps prevent the other party from mistaking the long pause during text input for a dropped signal and hanging up.
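The timed fallback can be sketched as a polling loop. The hold prompt and timeout value are illustrative assumptions; the disclosure only says the threshold is preset.

```python
import time

HOLD_PROMPT = "The other party is typing, please hold on."  # assumed prompt
MAX_WAIT_S = 8.0                                            # assumed threshold

def wait_for_reply(poll_reply, timeout=MAX_WAIT_S, clock=time.monotonic):
    """Poll for the user's reply; if none arrives before the timeout,
    return the preset hold prompt to broadcast to the caller."""
    start = clock()
    while clock() - start < timeout:
        reply = poll_reply()
        if reply is not None:
            return reply
        time.sleep(0.05)
    return HOLD_PROMPT
```

A production client would do this asynchronously rather than blocking, but the logic is the same: measure the wait, and substitute the preset voice prompt once the threshold is crossed.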
Fig. 5 schematically illustrates a schematic diagram of an information presentation method of a child learning scenario according to an embodiment of the present disclosure.
As shown in fig. 5, a father (user A) and a child (user B) are on a call, and user A speaks the voice message "eat a bit more". After receiving the voice information, the client of user B first determines that the current scene is a child interaction scene and that the appropriate form to be displayed is an animation. Speech recognition and semantic understanding are then performed on user A's voice, yielding the semantics "eat more". Image information in an animation style (such as a scene animation of several cartoon characters eating) can be generated and matched with text such as "eat more" or "eat more, sweetie" and with synthesized voice (which may be synthesized with a preset timbre, such as the father's or mother's timbre, or the timbre of a cartoon character that user B likes). By displaying the animation and playing the voice, user B is more willing to accept and follow user A's suggestion, improving the interaction effect. In addition, parents who work away from home for long periods can convey more affection, which benefits the child's physical and psychological development.
Fig. 6 schematically shows a schematic diagram of an information presentation method of a received call scene during operation according to an embodiment of the present disclosure.
As shown in fig. 6, user A is a courier delivering a package, and user B is working (e.g., in a meeting or handling an emergency). For a call request unrelated to work (such as a stranger's call, or a call known to be a pickup notification), user B can choose either a voice call or a text call to communicate with user A. If the text mode is selected, user B's client may display user A's short voice messages in the interactive interface (e.g., the voice is segmented at pauses and sent in pieces) together with the corresponding text, to make them easy to read. User B may record a short voice message or enter text. A recorded voice message is broadcast over the call channel directly; entered text is first synthesized into speech with a designated timbre and then broadcast over the channel. Thus user A experiences an ordinary voice call, while user B interacts with user A in the manner best suited to user B's needs in the current scene.
In one embodiment, the user may record his or her own timbre in advance, or select any timbre, such as a celebrity timbre (e.g., Guo Degang or Lin Zhiling). If none is set, a default timbre is used. When making or receiving a call, for example when the other party is a courier, the display interface of the user's mobile phone is as shown in fig. 6. When the courier calls the user and the text call mode is selected, the user's terminal does not play the voice directly; instead it displays the interface shown in fig. 6, presenting the courier's speech as short voice messages that can be tapped to listen to, along with the recognized text that the user can read directly. If the user enters text, it is converted into voice (e.g., generated with the configured timbre or a temporarily chosen one) and broadcast to the courier over the call channel.
Fig. 7 schematically shows a schematic diagram of an information presentation method for reading a text information scene for a comic fan according to an embodiment of the present disclosure.
As shown in fig. 7, user A is a comic fan reading a novel that has been adapted into a corresponding comic. User A reads section XX, "AA looks at … … on the computer screen". The client of user A can search the Internet for the related comic based on this content and present it on user A's client in a manner preset by user A, improving the reading experience. Alternatively, the comic may be generated automatically from the semantic information using artificial-intelligence techniques, which is not limited here.
Fig. 8 schematically shows a flow chart of an information presentation method according to another embodiment of the present disclosure.
As shown in fig. 8, after performing operation S305 and determining the to-be-displayed form of the to-be-displayed information based on the scene information, the method may further include operations S801 to S805.
In operation S801, if it is determined that there are multiple selectable forms to be presented, the multiple selectable forms are presented, for example by displaying them in the interactive interface for the user to choose from.
In operation S803, a user operation is received, where the user operation is for the plurality of selectable to-be-presented forms. For example, a user may select a desired presentation form from a plurality of selectable presentation forms to be presented by clicking or the like.
In operation S805, a presentation form is determined from the plurality of selectable forms to be presented in response to the user operation.
Correspondingly, displaying the information to be displayed based on the form to be displayed comprises: displaying the information to be displayed based on the determined display form.
In this embodiment, when there are multiple selectable forms to be displayed, the user is prompted to choose one, which increases the user's freedom of choice and further improves the user experience.
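The S801 to S805 flow above reduces to a small selection routine, sketched below. The callback for receiving the user operation is a placeholder assumption.

```python
def choose_form(candidate_forms, ask_user):
    """Sketch of operations S801-S805: when several selectable forms
    exist, present them and let the user pick one; otherwise use the
    single candidate directly."""
    if len(candidate_forms) == 1:
        return candidate_forms[0]
    choice = ask_user(candidate_forms)   # S801 present + S803 user operation
    # S805: determine the display form from the user's selection,
    # falling back to the first candidate on an invalid choice.
    return choice if choice in candidate_forms else candidate_forms[0]
```

In a real client, `ask_user` would render the options in the interactive interface and block on (or await) the tap event.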
Fig. 9 schematically shows a block diagram of an information presentation apparatus according to an embodiment of the present disclosure.
As shown in fig. 9, the information presentation apparatus 900 may include: the information display system comprises an information display instruction receiving module 910, a scene information acquiring module 920, a to-be-displayed form determining module 930 and an information display module 940.
The information display instruction receiving module 910 is configured to receive an information display instruction, where the information display instruction is for information to be displayed.
The scene information obtaining module 920 is configured to obtain scene information in response to the information displaying instruction.
The to-be-displayed form determining module 930 is configured to determine the to-be-displayed form of the to-be-displayed information based on the scene information, where the file format of the to-be-displayed information corresponds to different display forms in different scenes.
The information display module 940 is configured to display the information to be displayed based on the form to be displayed.
In one embodiment, the information presentation module 940 may include: a display information determining unit and a first display unit.
The display information determining unit is configured to determine, based on the form to be displayed, the information corresponding to that form from the information file to be displayed, where the file includes information in multiple display forms for the information to be displayed.
The first display unit is configured to display the information corresponding to the form to be displayed.
In another embodiment, the information presentation module 940 may include: a display information generating unit and a second display unit.
The display information generating unit is configured to generate information of the information to be displayed for the form to be displayed.
The second display unit is configured to display the generated information of the information to be displayed for the form to be displayed.
For example, the display information generating unit is specifically configured to generate this information based on at least one of speech synthesis, image synthesis, speech recognition, and semantic understanding.
In a specific embodiment, the form to be presented includes a text form, an image form and a voice form; the scene information includes environmental information. Accordingly, the to-be-shown form determining module 930 is specifically configured to determine the to-be-shown form of the to-be-shown information based on the current environment information.
In another specific embodiment, the context information includes user attribute information. Accordingly, the to-be-displayed form determining module 930 is specifically configured to determine the to-be-displayed form of the to-be-displayed information based on the current user attribute information.
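An illustrative sketch of how the form-determining module 930 might map scene information (environment info and user attributes) to a form to be displayed follows. The thresholds and attribute names are assumptions for demonstration, not values from the disclosure.

```python
# Sketch: decide among 'text', 'voice', and 'animation' from the
# current scene information. All thresholds are illustrative.

def determine_form(noise_db=None, is_child=False, is_driving=False):
    """Pick a presentation form from environment info and user attributes."""
    if is_child:
        return "animation"        # child interaction scene (fig. 5)
    if is_driving:
        return "voice"            # eyes busy: prefer audio
    if noise_db is not None and noise_db > 70:
        return "text"             # noisy call scene (fig. 4)
    return "voice"
```

Ordering the rules puts user attributes before environment info; the disclosure leaves this priority open, so a real module might weight or combine them differently.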
Furthermore, the apparatus 900 may further include: the device comprises a form display module, an operation receiving module and a display form determining module.
The form display module is used for displaying a plurality of selectable forms to be displayed if the plurality of selectable forms to be displayed exist.
The operation receiving module is used for receiving user operation, and the user operation is specific to the plurality of selectable forms to be displayed.
And the display form determining module is used for responding to the user operation and determining a display form from the plurality of selectable forms to be displayed.
Correspondingly, the information display module 940 is specifically configured to display the information to be displayed based on the display form.
According to the embodiment of the disclosure, the to-be-displayed form determining module 930 determines the currently applicable form based on the scene information, so that the information to be displayed is presented in that form, effectively improving the convenience of interaction. For the details of receiving the information display instruction, obtaining the scene information, and determining the form to be displayed, reference may be made to the description above, which is not repeated here.
Any of the modules, units, or at least part of the functionality of any of them according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules and units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, units according to the embodiments of the present disclosure may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by any other reasonable means of hardware or firmware by integrating or packaging the circuits, or in any one of three implementations of software, hardware and firmware, or in any suitable combination of any of them. Alternatively, one or more of the modules, units according to embodiments of the present disclosure may be implemented at least partly as computer program modules, which, when executed, may perform the respective functions.
For example, any plurality of the information presentation instruction receiving module 910, the scene information acquiring module 920, the to-be-presented form determining module 930, and the information presentation module 940 may be combined in one module to be implemented, or any one of the modules may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to the embodiment of the present disclosure, at least one of the information presentation instruction receiving module 910, the scene information acquiring module 920, the to-be-presented form determining module 930, and the information presentation module 940 may be at least partially implemented as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of three implementation manners of software, hardware, and firmware, or implemented by a suitable combination of any several of them. Alternatively, at least one of the information presentation instruction receiving module 910, the scene information acquiring module 920, the to-be-presented form determining module 930, and the information presentation module 940 may be at least partially implemented as a computer program module, which may perform a corresponding function when executed.
FIG. 10 schematically shows a block diagram of an electronic device according to an embodiment of the disclosure. The server shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 10, the server 1000 includes: one or more processors 1010 and a computer-readable storage medium 1020. The server may perform a method according to an embodiment of the present disclosure.
In particular, processor 1010 may include, for example, a general purpose microprocessor, an instruction set processor and/or related chip set and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), and/or the like. The processor 1010 may also include on-board memory for caching purposes. Processor 1010 may be a single processing unit or multiple processing units for performing different acts of a method flow according to embodiments of the disclosure.
Computer-readable storage media 1020, for example, may be non-volatile computer-readable storage media, specific examples including, but not limited to: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); memory such as Random Access Memory (RAM) or flash memory, etc.
The computer-readable storage medium 1020 may include a program 1021, which program 1021 may include code/computer-executable instructions that, when executed by the processor 1010, cause the processor 1010 to perform a method according to an embodiment of the disclosure, or any variation thereof.
The program 1021 may be configured with computer program code, for example comprising computer program modules. In an example embodiment, code in the program 1021 may include one or more program modules, such as program module 1021A, program module 1021B, and so on. It should be noted that the division and number of the program modules are not fixed; those skilled in the art may use suitable program modules or combinations of program modules according to the actual situation. When these program modules are executed by the processor 1010, the processor 1010 can carry out the method according to the embodiments of the present disclosure or any variation thereof.
According to embodiments of the present disclosure, the processor 1010 may interact with the computer readable storage medium 1020 to perform a method according to embodiments of the present disclosure or any variant thereof.
According to an embodiment of the present disclosure, at least one of the information presentation instruction receiving module 910, the scene information acquiring module 920, the to-be-presented form determining module 930, and the information presentation module 940 may be implemented as a program module described with reference to fig. 10, which, when executed by the processor 1010, may implement the corresponding operations described above.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or combinations of features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or combinations are not expressly recited in the present disclosure. In particular, various combinations and/or combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or associations are within the scope of the present disclosure.
While the disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents. Accordingly, the scope of the present disclosure should not be limited to the above-described embodiments, but should be defined not only by the appended claims, but also by equivalents thereof.

Claims (10)

1. An information display method, comprising:
receiving an information display instruction, wherein the information display instruction is specific to information to be displayed;
responding to the information display instruction, and acquiring scene information;
determining a to-be-displayed form of the to-be-displayed information based on the scene information, wherein the file format of the to-be-displayed information corresponds to different display forms under different scenes; and
and displaying the information to be displayed based on the form to be displayed.
2. The method of claim 1, wherein the presenting the information to be presented based on the form to be presented comprises:
determining information corresponding to the to-be-displayed form from the to-be-displayed information file based on the to-be-displayed form, wherein the to-be-displayed information file comprises information of multiple display forms aiming at the to-be-displayed information; and
and displaying the information corresponding to the to-be-displayed form.
3. The method of claim 1, wherein the presenting the information to be presented based on the form to be presented comprises:
generating information of the information to be displayed for the form to be displayed; and
displaying the generated information of the information to be displayed for the form to be displayed.
4. The method of claim 3, wherein the generating the information of the information to be displayed for the form to be displayed comprises: generating the information of the information to be displayed for the form to be displayed based on at least one of speech synthesis, image synthesis, speech recognition and semantic understanding.
5. The method of claim 1, wherein:
the to-be-displayed form comprises a text form, an image form and a voice form;
the scene information comprises environmental information; and
the determining the to-be-displayed form of the to-be-displayed information based on the scene information comprises: and determining the to-be-displayed form of the to-be-displayed information based on the current environment information.
6. The method of claim 1, wherein:
the scene information comprises user attribute information; and
the determining the to-be-displayed form of the to-be-displayed information based on the scene information comprises: and determining the to-be-displayed form of the to-be-displayed information based on the current user attribute information.
7. The method of claim 1, further comprising: after determining the to-be-presented form of the to-be-presented information based on the scene information,
if the plurality of selectable to-be-displayed forms exist, displaying the plurality of selectable to-be-displayed forms;
receiving user operation, wherein the user operation is specific to the plurality of selectable forms to be displayed;
in response to the user operation, determining a display form from the plurality of selectable forms to be displayed; and
the displaying the information to be displayed based on the form to be displayed comprises: and displaying the information to be displayed based on the display form.
8. An information presentation device comprising:
the information display instruction receiving module is used for receiving an information display instruction, and the information display instruction is specific to information to be displayed;
the scene information acquisition module is used for responding to the information display instruction and acquiring scene information;
the to-be-displayed form determining module is used for determining the to-be-displayed form of the to-be-displayed information based on the scene information, wherein the file format of the to-be-displayed information corresponds to different display forms under different scenes; and
and the information display module is used for displaying the information to be displayed based on the form to be displayed.
9. The apparatus of claim 8, further comprising:
the form display module is used for displaying a plurality of selectable forms to be displayed if the plurality of selectable forms to be displayed exist;
the operation receiving module is used for receiving user operation, and the user operation is specific to the plurality of selectable forms to be displayed;
the display form determining module is used for responding to the user operation and determining a display form from the plurality of selectable forms to be displayed; and
the information display module is specifically used for displaying the information to be displayed based on the display form.
10. An electronic device, comprising:
one or more processors;
storage means for storing executable instructions which, when executed by the processor, implement the method of any one of claims 1 to 7.
CN201911329356.9A 2019-12-20 2019-12-20 Information display method, information display device and electronic equipment Pending CN110989889A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911329356.9A CN110989889A (en) 2019-12-20 2019-12-20 Information display method, information display device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911329356.9A CN110989889A (en) 2019-12-20 2019-12-20 Information display method, information display device and electronic equipment

Publications (1)

Publication Number Publication Date
CN110989889A true CN110989889A (en) 2020-04-10

Family

ID=70073823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911329356.9A Pending CN110989889A (en) 2019-12-20 2019-12-20 Information display method, information display device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110989889A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108984098A (en) * 2018-07-12 2018-12-11 北京达佳互联信息技术有限公司 The control method and device that information based on social software is shown
CN109754816A (en) * 2017-11-01 2019-05-14 北京搜狗科技发展有限公司 A kind of method and device of language data process
CN110109596A (en) * 2019-05-08 2019-08-09 芋头科技(杭州)有限公司 Recommended method, device and the controller and medium of interactive mode
CN110164427A (en) * 2018-02-13 2019-08-23 阿里巴巴集团控股有限公司 Voice interactive method, device, equipment and storage medium
CN110488973A (en) * 2019-07-23 2019-11-22 清华大学 A kind of virtual interactive message leaving system and method

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111625308A (en) * 2020-04-28 2020-09-04 北京字节跳动网络技术有限公司 Information display method and device and electronic equipment
CN111638787A (en) * 2020-05-29 2020-09-08 百度在线网络技术(北京)有限公司 Method and device for displaying information
CN111638787B (en) * 2020-05-29 2023-09-01 百度在线网络技术(北京)有限公司 Method and device for displaying information
CN113741897A (en) * 2021-09-06 2021-12-03 北京字节跳动网络技术有限公司 Question list generation method, device, equipment and storage medium
CN113741897B (en) * 2021-09-06 2024-05-28 抖音视界有限公司 Question list generation method, device, equipment and storage medium
WO2024093443A1 (en) * 2022-10-31 2024-05-10 北京字跳网络技术有限公司 Information display method and apparatus based on voice interaction, and electronic device

Similar Documents

Publication Publication Date Title
US10176808B1 (en) Utilizing spoken cues to influence response rendering for virtual assistants
CN110989889A (en) Information display method, information display device and electronic equipment
US9652113B1 (en) Managing multiple overlapped or missed meetings
JP7391913B2 (en) Parsing electronic conversations for presentation in alternative interfaces
US20170277993A1 (en) Virtual assistant escalation
US10621681B1 (en) Method and device for automatically generating tag from a conversation in a social networking website
CN107644646B (en) Voice processing method and device for voice processing
US20200098358A1 (en) Presenting contextually appropriate responses to user queries by a digital assistant device
US20210409787A1 (en) Techniques for providing interactive interfaces for live streaming events
CN113873195B (en) Video conference control method, device and storage medium
CN111989939A (en) Method and system for managing media content associated with a message context on a mobile computing device
US20180081514A1 (en) Attention based alert notification
US9369587B2 (en) System and method for software turret phone capabilities
CN110379406B (en) Voice comment conversion method, system, medium and electronic device
US9706055B1 (en) Audio-based multimedia messaging platform
US10938918B2 (en) Interactively updating multimedia data
US10437437B1 (en) Method and device for appending information in a conversation in a voice based networking website
US20190068663A1 (en) Cognitive Headset Awareness with External Voice Interruption Detection
JP2019145944A (en) Acoustic output system, acoustic output method, and program
EP2680256A1 (en) System and method to analyze voice communications
CN111158838B (en) Information processing method and device
CN110366002B (en) Video file synthesis method, system, medium and electronic device
US9565298B1 (en) Method and device for appending information in a conversation in a voice based networking website
CN110493473A (en) Method, equipment and the computer storage medium of caller identification
TWI817213B (en) Method, system, and computer readable record medium to record conversations in connection with video communication service

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination