CN111638787A - Method and device for displaying information

Method and device for displaying information

Info

Publication number
CN111638787A
Authority
CN
China
Prior art keywords
information
user
age
preset
interaction mode
Prior art date
Legal status
Granted
Application number
CN202010478782.5A
Other languages
Chinese (zh)
Other versions
CN111638787B (en)
Inventor
王乐
刘雅菲
张笑颖
Current Assignee
Baidu Online Network Technology Beijing Co Ltd
Shanghai Xiaodu Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010478782.5A
Publication of CN111638787A
Application granted
Publication of CN111638787B
Legal status: Active


Classifications

    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/178 Estimating age from a face image; using age information for improving recognition
    • G06F 3/017 Gesture-based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/04842 Selection of displayed objects or displayed text elements
    • G06F 3/0485 Scrolling or panning
    • G10L 17/22 Interactive procedures; man-machine interfaces
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses a method and device for displaying information, and relates to the field of artificial intelligence. The specific implementation scheme is as follows: determining the age group to which a user belongs based on acquired information; determining information to be displayed and an interaction mode to be used according to that age group; displaying information to the user based on the information to be displayed; and receiving and/or sending information based on the interaction mode to be used. The embodiment determines the information to be displayed and the interaction mode according to the user's age group, so that different information is displayed, and different interaction modes are adopted, for users of different age groups, making the displayed information better suited to their development.

Description

Method and device for displaying information
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to an artificial intelligence technology.
Background
With the development of artificial intelligence and the rapid advance of science and technology, intelligent devices are widely used in many fields, and their combination with the field of education is deepening. In practice, smart devices (e.g., smart speakers with screens, reading robots, etc.) need to provide learning resources for groups of different ages. However, since groups of different ages differ greatly in growth and development, manner of operation, thinking and understanding, and the like, they need to be treated differently.
Disclosure of Invention
The present disclosure provides a method, apparatus, device, and storage medium for presenting information.
According to a first aspect of the present disclosure, an embodiment of the present disclosure provides a method for presenting information, the method including: determining the age group to which a user belongs based on acquired information; determining information to be displayed and an interaction mode to be used according to that age group; displaying information to the user based on the information to be displayed; and receiving and/or sending information based on the interaction mode to be used.
According to a second aspect of the present disclosure, an embodiment of the present disclosure provides an apparatus for presenting information, the apparatus including: a first determining unit configured to determine the age group to which the user belongs based on the acquired information; a second determining unit configured to determine information to be displayed and an interaction mode to be used according to that age group; a display unit configured to display information to the user based on the information to be displayed; and an interaction unit configured to receive and/or send information based on the interaction mode to be used.
According to a third aspect of the present disclosure, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method according to any one of the first aspect.
According to a fourth aspect of the present disclosure, a non-transitory computer-readable storage medium storing computer instructions is provided, wherein the computer instructions are configured to cause the computer to perform the method according to any one of the first aspect.
According to the technology of the application, the information to be displayed and the interaction mode to be used can be determined according to the age group to which the user belongs, so that different information can be displayed, and different interaction modes adopted, for users of different age groups, making the displayed information better suited to their development.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a flow diagram of one embodiment of a method for presenting information in accordance with the present application;
FIG. 2 is a schematic diagram of an application scenario of a method for presenting information according to the present application;
FIG. 3 is a flow diagram of yet another embodiment of a method for presenting information in accordance with the present application;
FIG. 4 is a schematic block diagram of one embodiment of an apparatus for displaying information in accordance with the present application;
FIG. 5 is a block diagram of an electronic device for implementing a method for presenting information according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding; these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are likewise omitted for clarity and conciseness.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Referring to fig. 1, a flow 100 of one embodiment of a method for presenting information in accordance with the present disclosure is shown. The method for displaying information comprises the following steps:
S101, determining the age bracket to which the user belongs based on the acquired information.
In the present embodiment, the execution subject of the method for presenting information may acquire various information in various ways. For example, information input by the user may be acquired through a smart screen; image information may be acquired by an image acquisition device (e.g., a camera); and voice information may be obtained from a voice collection device. The execution subject may then determine the age bracket to which the user belongs based on the acquired information. In practice, users may be divided into different age groups according to age intervals. For example, users aged 1-6 may be assigned to a toddler segment, users aged 7-12 to a child segment, and users aged 13 and older to an adult segment.
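The interval division above can be sketched as follows (a minimal illustration; the function name and the exact boundaries are assumptions based on the example intervals, not part of the disclosure):

```python
def age_to_segment(age: int) -> str:
    """Map a concrete age value to an age segment, using the example
    intervals above: 1-6 -> toddler, 7-12 -> child, 13+ -> adult."""
    if age <= 6:
        return "toddler"
    if age <= 12:
        return "child"
    return "adult"
```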
Here, the execution subject of the method for presenting information may be various electronic devices having a smart screen, including but not limited to a smart phone, a tablet computer, a smart speaker with a screen, a smart learning machine, and the like.
In some optional implementations of this embodiment, the foregoing S101 may specifically be performed as follows:
first, at least one of the following is acquired: age information input by the user via an interface, voice information input by the user, and a face image of the user collected by an image collection device.
In this implementation, the execution subject may acquire various user-related information in various ways. For example, it may obtain age information input by the user via an interface; in the case of a smart screen, this may be an interface displayed by the smart screen through which the user enters a specific age value. It may also obtain voice information input by the user, where the voice information may be any speech uttered by the user, or a face image of the user collected by an image collection device. In practice, the execution subject may obtain one or more of the above kinds of information according to actual needs.
Then, the acquired information is processed, and the age bracket to which the user belongs is determined according to the processing result.
In this implementation, the execution subject may process the acquired information and determine the age bracket to which the user belongs according to the processing result. For example, if the acquired information is age information, the execution subject may compare the age information with the intervals of the respective age groups and determine the age group from the comparison result. If the acquired information is voice information, the execution subject may extract voice features from it and determine the age bracket according to the extracted features; for example, it may store in advance the voice features of users in each age group, compare the extracted features with the stored ones, and determine the age bracket according to their similarity. If the acquired information is a face image of the user, the execution subject may estimate the user's age from the face image and determine the age bracket from the estimated age.
In practice, the age group to which the user belongs may be determined from one kind of acquired information or from several kinds. As an example, when several kinds of information are used, a statistical analysis may be performed on the several age-group estimates obtained from them, and the age group finally determined from the result of that analysis. In this way the age bracket can be determined from one or more kinds of acquired information, making the determination more accurate.
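The statistical-analysis step can be sketched as a simple majority vote over the per-modality estimates (an illustrative assumption; the disclosure does not fix a particular analysis):

```python
from collections import Counter

def combine_segment_estimates(estimates):
    """Combine several age-segment estimates (e.g. one each from the
    entered age, the voice features and the face image) by keeping
    the most frequent one."""
    if not estimates:
        raise ValueError("at least one estimate is required")
    return Counter(estimates).most_common(1)[0][0]
```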
S102, determining information to be displayed and an interaction mode to be used according to the age group of the user.
In this embodiment, for each age group, corresponding information for presentation may be stored in advance in the execution subject, and a corresponding interaction mode may be preset. The execution subject may thus determine the information to be presented and the interaction mode to be used according to the age bracket determined in S101. As an example, the information to be presented may include various information for display to the user on a front-end page, such as text, images, text hyperlinks, picture hyperlinks, and information input boxes. The interaction mode to be used may specify how information is received and sent. For example, it may specify how the user inputs information (e.g., whether a page-switching command is entered by clicking or by sliding), and it may specify how information is sent (e.g., as text, image, or speech, and, for speech, in which timbre).
Here, the information to be presented and the interaction mode to be used differ from one age group to another. As an example, they may be set for each age group according to the usage characteristics of users in that group, which may be obtained by statistically analyzing those users' usage habits. For example, the frequency with which users in an age group use input modes such as single tap, double tap, slide, and gesture on a touch screen can be analyzed statistically to determine that group's usage characteristics and, further, the input modes suitable for it.
In practice, since groups of different ages differ greatly in growth and development, manner of operation, thinking and understanding, and the like, different information for display and different interaction modes need to be set for different age groups.
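A preset per-segment table of the kind S102 describes might look like this (keys and values are hypothetical, chosen only to mirror the examples in this description):

```python
# Hypothetical configuration stored in advance on the execution subject.
SEGMENT_CONFIG = {
    "toddler": {
        "page_style": "bright, high-purity colors",
        "inputs": ["click"],                      # simple operations only
        "speech_rate": "below threshold",
        "timbre": "childlike",
    },
    "child": {
        "page_style": "dark, space-style background",
        "inputs": ["click", "slide", "gesture"],  # more complex operations
        "speech_rate": "above threshold",
        "timbre": "adult-leaning",
    },
}

def presentation_for(segment):
    """Look up the information to be displayed and the interaction
    mode to be used for the given age segment (S102)."""
    return SEGMENT_CONFIG[segment]
```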
And S103, displaying the information to the user based on the information to be displayed.
In this embodiment, the execution subject may present information to the user based on the information to be presented determined in S102. For example, the execution subject may directly present the information to be presented to the user through the display screen.
S104, receiving and/or sending information based on the interaction mode to be used.
In this embodiment, the execution subject may receive and/or send information according to the interaction mode to be used determined in S102.
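S104 can then be sketched as filtering incoming input events against the interaction mode to be used (names are assumptions for illustration):

```python
def accept_input(event_type, allowed_inputs):
    """Accept a user input event only if the interaction mode to be
    used permits that event type; e.g. a slide event is ignored when
    only click input is preset for the current age segment."""
    return event_type in allowed_inputs
```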
With continued reference to fig. 2, fig. 2 is a schematic diagram of an application scenario of the method for presenting information according to the present embodiment. In the application scenario of fig. 2, the smart terminal device 201 first determines, based on the acquired information, that the age group to which the user belongs is the toddler segment. The smart terminal device 201 may then determine the information to be displayed and the interaction mode to be used according to that age group, present information to the user based on the information to be displayed, and receive and/or send information based on the interaction mode to be used. It should be noted that the information displayed by the smart terminal device in fig. 2 is only schematic and does not limit the information to be displayed for the toddler segment; in practice, that information can be set according to actual needs.
The method provided by this embodiment of the disclosure determines the information to be displayed and the interaction mode to be used according to the age group to which the user belongs, so that different information is displayed, and different interaction modes are adopted, for users of different age groups, making the displayed information better suited to their development.
With further reference to FIG. 3, a flow 300 of yet another embodiment of a method for presenting information is shown. The process 300 of the method for presenting information includes the steps of:
S301, determining the age bracket to which the user belongs based on the acquired information.
In the present embodiment, the execution subject of the method for presenting information may first acquire various information in various ways. For example, information input by the user may be acquired through a smart screen, image information may be acquired by an image acquisition device (e.g., a camera), and voice information may be obtained from a voice collection device. The execution subject may then determine the age bracket to which the user belongs based on the acquired information. Here, the possible age groups include a toddler segment and a child segment.
S302, in response to determining that the age group to which the user belongs is the toddler segment, using the information for display preset for the toddler segment as the information to be displayed, and the interaction mode preset for the toddler segment as the interaction mode to be used.
In this embodiment, if it is determined that the age group to which the user belongs is the toddler segment, the execution subject may take the information for presentation preset for the toddler segment as the information to be presented, and the interaction mode preset for the toddler segment as the interaction mode to be used. The information for presentation preset for the toddler segment may be of various kinds, for example text, pictures, video, text hyperlinks, picture hyperlinks, and information input boxes. In practice, the pages that display this information can be designed according to the visual development characteristics of users in different age groups. For example, in line with the visual development characteristics and preferences of toddler users, the pages for the toddler segment may use bright, high-purity colors, which aids these users' visual development and makes the position of information easy to remember.
In some optional implementations of this embodiment, the information for display preset for the toddler segment includes a first voice input identifier, and the interaction mode preset for the toddler segment includes click input, voice output at a first speech rate, and voice output in a first timbre.
In this implementation, the information for display preset for the toddler segment may include a first voice input identifier. The first voice input identifier may be used to guide toddler users in entering voice information; for example, voice input may begin after the identifier is clicked or pressed. In practice, a logo associated with the brand image may be designed as the voice input identifier, and the identifier may also be set according to the growth and development characteristics of users in different age groups. Since toddler users think in concrete terms, a concrete, pictorial mark can be used as the first voice input identifier.
Here, since toddler users generally manage only simple operations such as clicking and pressing, and find more complicated operations such as sliding, multi-point interaction, and gesture interaction difficult, the interaction mode preset for the toddler segment may include click input; for example, a toddler user may enter a page-turn instruction with a simple click. In addition, since the speech comprehension of toddler users is limited, the preset interaction mode may include voice output at a first speech rate, where the first speech rate is lower than a preset speech-rate threshold. To give toddler users a sense of familiarity, it may also include voice output in a first timbre, where the first timbre may be that of a young child. In this way, information for display and an interaction mode matching the characteristics of toddler users can be preset, making what is displayed, and how, better suited to their development.
S303, in response to determining that the age group to which the user belongs is the child segment, using the information for display preset for the child segment as the information to be displayed, and the interaction mode preset for the child segment as the interaction mode to be used.
In this embodiment, if it is determined that the age group to which the user belongs is the child segment, the execution subject may take the information for presentation preset for the child segment as the information to be presented, and the interaction mode preset for the child segment as the interaction mode to be used. As an example, the pages for the child segment may use a dark, space-style background, which keeps attention on the information and is more immersive, so that a child user can efficiently select from among several items of information and quickly begin learning.
In some optional implementations of this embodiment, the information for presentation preset for the child segment includes a second voice input identifier, and the interaction mode preset for the child segment includes click input, slide input, gesture input, voice output at a second speech rate, and voice output in a second timbre.
In this implementation, the information for presentation preset for the child segment may include a second voice input identifier, which may be used to guide child users in entering voice information; for example, voice input may begin after the identifier is clicked or pressed. In practice, child users, unlike toddler users, can form abstract associations, so an abstract mark can be used as the second voice input identifier.
Here, since child users can generally master more complicated operations such as sliding, multi-point interaction, and gesture interaction, the interaction mode preset for the child segment may include click input, slide input, gesture input, and the like; for example, a child user may enter a page-turn instruction with a slide. In addition, since child users already have strong speech comprehension, the preset interaction mode may include voice output at a second speech rate, where the second speech rate is higher than a preset speech-rate threshold, and voice output in a second timbre, where the second timbre may lean toward an adult voice. In this way, information for display and an interaction mode matching the characteristics of child users can be preset, making what is displayed, and how, better suited to their development.
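The first/second speech-rate and timbre outputs can be sketched together as follows (the threshold, the rate multipliers, and the timbre labels are assumed values for illustration):

```python
SPEECH_RATE_THRESHOLD = 1.0  # assumed normalized rate; 1.0 = normal speed

def speech_output_params(segment):
    """Voice-output settings per segment, as described above: below
    the preset threshold with a childlike timbre for the toddler
    segment, above it with an adult-leaning timbre for the child
    segment."""
    if segment == "toddler":
        return {"rate": SPEECH_RATE_THRESHOLD * 0.8, "timbre": "childlike"}
    return {"rate": SPEECH_RATE_THRESHOLD * 1.2, "timbre": "adult-leaning"}
```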
It should be noted that the terms "first" and "second" in the present application are only used to distinguish different information and do not limit the information itself.
S304, displaying the information to the user based on the information to be displayed.
In this embodiment, S304 is similar to S103 of the embodiment shown in fig. 1, and is not described here again.
S305, receiving and/or sending information based on the standby interactive mode.
In this embodiment, S305 is similar to S104 of the embodiment shown in fig. 1, and is not described here again.
As can be seen from fig. 3, compared with the embodiment corresponding to fig. 1, the flow 300 of the method for presenting information in the present embodiment highlights the steps of presenting different information, and adopting different interaction modes, for toddler users and child users. The scheme described in this embodiment therefore makes the displayed information and the adopted interaction mode better suited to the development of toddler and child users.
With further reference to fig. 4, as an implementation of the methods shown in the above-mentioned figures, the present disclosure provides an embodiment of an apparatus for presenting information, which corresponds to the method embodiment shown in fig. 1, and which is particularly applicable to various electronic devices.
As shown in fig. 4, the apparatus 400 for displaying information of the present embodiment includes: a first determining unit 401, a second determining unit 402, a presentation unit 403, and an interaction unit 404. The first determining unit 401 is configured to determine the age bracket to which the user belongs based on the acquired information; the second determining unit 402 is configured to determine the information to be displayed and the interaction mode to be used according to that age bracket; the presentation unit 403 is configured to present information to the user based on the information to be displayed; and the interaction unit 404 is configured to receive and/or send information based on the interaction mode to be used.
In this embodiment, for the specific processing of the first determining unit 401, the second determining unit 402, the presentation unit 403, and the interaction unit 404 of the apparatus 400 for presenting information, and the technical effects it brings, reference may be made to the descriptions of S101, S102, S103, and S104 in the embodiment corresponding to fig. 1, which are not repeated here.
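The four units of fig. 4 can be sketched as pluggable callables wired together (class and parameter names are assumptions, not from the disclosure):

```python
class PresentationApparatus:
    """Sketch of apparatus 400: a first determining unit, a second
    determining unit, a presentation unit and an interaction unit,
    each supplied by the caller as a callable."""

    def __init__(self, determine_segment, determine_config, present, interact):
        self.determine_segment = determine_segment  # info -> age segment
        self.determine_config = determine_config    # segment -> (info, mode)
        self.present = present                      # show info to the user
        self.interact = interact                    # receive/send per mode

    def run(self, acquired_info):
        # Walks S101-S104 in order.
        segment = self.determine_segment(acquired_info)
        to_show, mode = self.determine_config(segment)
        self.present(to_show)
        return self.interact(mode)
```

Wired with trivial stand-ins, `run` returns whatever the interaction callable produces, mirroring the receive-and/or-send step.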
In some optional implementations of this embodiment, the age groups to which the user may belong include a toddler segment and a child segment; and the second determining unit 402 is further configured to: in response to determining that the age group to which the user belongs is the toddler segment, take the information for display preset for the toddler segment as the information to be displayed and the interaction mode preset for the toddler segment as the interaction mode to be used; and in response to determining that the age group to which the user belongs is the child segment, take the information for display preset for the child segment as the information to be displayed and the interaction mode preset for the child segment as the interaction mode to be used.
In some optional implementations of this embodiment, the information for display preset for the baby segment includes a first voice input identifier, and the interaction mode preset for the baby segment includes a click input, a first speech speed voice output, and a first timbre voice output.
In some optional implementations of this embodiment, the information for display preset for the kid segment includes a second voice input identifier, and the interaction mode preset for the kid segment includes a click input, a slide input, a gesture input, a second speech speed voice output, and a second timbre voice output.
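The two per-segment presets above can be sketched as a simple lookup table; the icon names, output labels, and dictionary layout are placeholders, not patent text:

```python
# Hypothetical preset tables for the two age segments described above.
PRESETS = {
    "baby": {
        "display_info": ["first_voice_input_icon"],
        "interaction_modes": ["click",
                              "first_speech_speed_output",
                              "first_timbre_output"],
    },
    "kid": {
        "display_info": ["second_voice_input_icon"],
        "interaction_modes": ["click", "slide", "gesture",
                              "second_speech_speed_output",
                              "second_timbre_output"],
    },
}

def determine_presentation(age_group: str):
    """Second determining unit: map an age group to its preset
    display information and interaction modes."""
    preset = PRESETS[age_group]
    return preset["display_info"], preset["interaction_modes"]
```

Note how the kid segment adds slide and gesture inputs on top of the click input shared with the baby segment, matching the two paragraphs above.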
In some optional implementations of the present embodiment, the first determining unit 401 is further configured to: acquire at least one of the following information: age information input by the user via an interface, voice information input by the user, and a face image of the user collected by an image collection device; and process the acquired information and determine the age group to which the user belongs according to the processing result.
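The acquisition-and-processing step above might be sketched as follows; the priority order among the three signals, the estimator stubs, and the cut-off age are all illustrative assumptions rather than details from the patent:

```python
def estimate_age_from_voice(voice_features):
    # Placeholder: a real system would run a trained voice-age model.
    return voice_features["estimated_age"]

def estimate_age_from_face(face_image):
    # Placeholder: a real system would run a face-age regressor on the image.
    return face_image["estimated_age"]

BABY_KID_CUTOFF = 3  # illustrative cut-off age, not specified by the patent

def determine_age_group(interface_age=None, voice_features=None, face_image=None):
    """First determining unit: derive an age group from whichever of the
    three acquired signals is available (priority order is an assumption)."""
    if interface_age is not None:        # age entered by the user on the interface
        age = interface_age
    elif voice_features is not None:     # voice information input by the user
        age = estimate_age_from_voice(voice_features)
    elif face_image is not None:         # face image from the image collection device
        age = estimate_age_from_face(face_image)
    else:
        raise ValueError("no age-related information acquired")
    return "baby" if age < BABY_KID_CUTOFF else "kid"
```

An explicitly entered age is trusted first here, falling back to model estimates; a production system might instead fuse all available signals.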
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 5 is a block diagram of an electronic device for the method of presenting information according to an embodiment of the application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the present application described and/or claimed herein.
As shown in fig. 5, the electronic device includes: one or more processors 501, a memory 502, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 5, one processor 501 is taken as an example.
Memory 502 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by at least one processor to cause the at least one processor to perform the methods for presenting information provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method for presenting information provided herein.
The memory 502, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as the program instructions/modules corresponding to the method for presenting information in the embodiments of the present application (e.g., the first determining unit 401, the second determining unit 402, the presenting unit 403, and the interacting unit 404 shown in fig. 4). By running the non-transitory software programs, instructions, and modules stored in the memory 502, the processor 501 executes the various functional applications and data processing of the server, that is, implements the method for presenting information in the above method embodiments.
The memory 502 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device for presenting information, and the like. Further, the memory 502 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 502 optionally includes memory located remotely from processor 501, which may be connected via a network to an electronic device for presenting information. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the method of presenting information may further comprise: an input device 503 and an output device 504. The processor 501, the memory 502, the input device 503 and the output device 504 may be connected by a bus or other means, and fig. 5 illustrates the connection by a bus as an example.
The input device 503 may receive input numeric or character information and generate key signal inputs related to user settings and function controls of the electronic device for presenting information; examples of the input device include a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, and a joystick. The output device 504 may include a display device, auxiliary lighting devices (e.g., LEDs), haptic feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solution of the embodiments of the application, the information to be displayed and the interaction mode to be used can be determined according to the age group to which the user belongs, so that different information can be displayed, and different interaction modes adopted, for users in different age groups, making the displayed information better suited to the development of users in each age group.
It should be understood that steps may be reordered, added, or deleted in the various flows described above. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and no limitation is imposed herein as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (12)

1. A method for presenting information, comprising:
determining an age group to which the user belongs based on the acquired information;
determining information to be displayed and an interaction mode to be used according to the age group to which the user belongs;
displaying information to the user based on the information to be displayed; and
receiving and/or sending information based on the interaction mode to be used.
2. The method of claim 1, wherein the age groups to which the user belongs include a baby segment and a kid segment; and
the determining information to be displayed and an interaction mode to be used according to the age group to which the user belongs comprises:
in response to determining that the age group to which the user belongs is the baby segment, taking the information for display preset for the baby segment as the information to be displayed, and taking the interaction mode preset for the baby segment as the interaction mode to be used; and
in response to determining that the age group to which the user belongs is the kid segment, taking the information for display preset for the kid segment as the information to be displayed, and taking the interaction mode preset for the kid segment as the interaction mode to be used.
3. The method of claim 2, wherein the information for display preset for the baby segment comprises a first voice input identifier, and the interaction mode preset for the baby segment comprises a click input, a first speech speed voice output and a first timbre voice output.
4. The method of claim 2, wherein the information for display preset for the kid segment comprises a second voice input identifier, and the interaction mode preset for the kid segment comprises a click input, a slide input, a gesture input, a second speech speed voice output and a second timbre voice output.
5. The method of claim 1, wherein the determining an age group to which the user belongs based on the acquired information comprises:
acquiring at least one of the following information: age information input by the user via an interface, voice information input by the user, and a face image of the user collected by an image collection device; and
processing the acquired information and determining the age group to which the user belongs according to the processing result.
6. An apparatus for presenting information, comprising:
a first determining unit configured to determine an age group to which the user belongs based on the acquired information;
a second determining unit configured to determine information to be displayed and an interaction mode to be used according to the age group to which the user belongs;
a presenting unit configured to display information to the user based on the information to be displayed; and
an interacting unit configured to receive and/or send information based on the interaction mode to be used.
7. The apparatus of claim 6, wherein the age groups to which the user belongs include a baby segment and a kid segment; and
the second determining unit is further configured to:
in response to determining that the age group to which the user belongs is the baby segment, take the information for display preset for the baby segment as the information to be displayed, and take the interaction mode preset for the baby segment as the interaction mode to be used; and
in response to determining that the age group to which the user belongs is the kid segment, take the information for display preset for the kid segment as the information to be displayed, and take the interaction mode preset for the kid segment as the interaction mode to be used.
8. The apparatus of claim 7, wherein the information for display preset for the baby segment comprises a first voice input identifier, and the interaction mode preset for the baby segment comprises a click input, a first speech speed voice output and a first timbre voice output.
9. The apparatus of claim 7, wherein the information for display preset for the kid segment comprises a second voice input identifier, and the interaction mode preset for the kid segment comprises a click input, a slide input, a gesture input, a second speech speed voice output and a second timbre voice output.
10. The apparatus of claim 6, wherein the first determining unit is further configured to:
acquire at least one of the following information: age information input by the user via an interface, voice information input by the user, and a face image of the user collected by an image collection device; and
process the acquired information and determine the age group to which the user belongs according to the processing result.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-5.
CN202010478782.5A 2020-05-29 2020-05-29 Method and device for displaying information Active CN111638787B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010478782.5A CN111638787B (en) 2020-05-29 2020-05-29 Method and device for displaying information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010478782.5A CN111638787B (en) 2020-05-29 2020-05-29 Method and device for displaying information

Publications (2)

Publication Number Publication Date
CN111638787A true CN111638787A (en) 2020-09-08
CN111638787B CN111638787B (en) 2023-09-01

Family

ID=72332847

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010478782.5A Active CN111638787B (en) 2020-05-29 2020-05-29 Method and device for displaying information

Country Status (1)

Country Link
CN (1) CN111638787B (en)


Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090070286A1 (en) * 2007-09-11 2009-03-12 Yahoo! Inc. Social Network Site Including Interactive Digital Objects
US20090167685A1 (en) * 2007-10-11 2009-07-02 Leapfrog Enterprises, Inc. Method and system for providing a computer environment for children
CN104102409A (en) * 2013-04-12 2014-10-15 三星电子(中国)研发中心 Scenario adaptation device and method for user interface
CN104735144A (en) * 2015-03-20 2015-06-24 努比亚技术有限公司 Method for changing state of terminal based on big data and server
US20150293594A1 (en) * 2014-04-10 2015-10-15 Disney Enterprises, Inc. System and Method for Real-Time Age Profiling
CN106936988A (en) * 2017-02-27 2017-07-07 深圳市相位科技有限公司 It is a kind of by mobile phone A PP control keyboards and the method and software of mouse function
US10033973B1 (en) * 2017-01-25 2018-07-24 Honeywell International Inc. Systems and methods for customizing a personalized user interface using face recognition
CN109074435A (en) * 2015-12-09 2018-12-21 三星电子株式会社 For providing the electronic equipment and method of user information
US10346129B1 (en) * 2016-06-08 2019-07-09 Google Llc Gamifying voice search experience for children
CN110109596A (en) * 2019-05-08 2019-08-09 芋头科技(杭州)有限公司 Recommended method, device and the controller and medium of interactive mode
WO2019190817A1 (en) * 2018-03-26 2019-10-03 Hergenroeder Alex Lauren Method and apparatus for speech interaction with children
CN110502112A (en) * 2019-08-14 2019-11-26 北京金山安全软件有限公司 Intelligent recommendation method and device, electronic equipment and storage medium
CN110801625A (en) * 2018-08-06 2020-02-18 南京芝兰人工智能技术研究院有限公司 Method and system for interaction of children games
US20200075024A1 (en) * 2018-08-30 2020-03-05 Baidu Online Network Technology (Beijing) Co., Ltd. Response method and apparatus thereof
US20200088463A1 (en) * 2018-09-18 2020-03-19 Samsung Electronics Co., Ltd. Refrigerator and method of controlling thereof
US20200111488A1 (en) * 2018-10-09 2020-04-09 International Business Machines Corporation Analytics-based speech therapy
CN110989889A (en) * 2019-12-20 2020-04-10 联想(北京)有限公司 Information display method, information display device and electronic equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112163888A (en) * 2020-09-30 2021-01-01 成都新潮传媒集团有限公司 Advertisement matching method, device and storage medium
CN112543390A (en) * 2020-11-25 2021-03-23 南阳理工学院 Intelligent infant sound box and interaction method thereof
CN112543390B (en) * 2020-11-25 2023-03-24 南阳理工学院 Intelligent infant sound box and interaction method thereof

Also Published As

Publication number Publication date
CN111638787B (en) 2023-09-01

Similar Documents

Publication Publication Date Title
CN111221984B (en) Multi-mode content processing method, device, equipment and storage medium
US11928432B2 (en) Multi-modal pre-training model acquisition method, electronic device and storage medium
CN110020411B (en) Image-text content generation method and equipment
CN111225236B (en) Method and device for generating video cover, electronic equipment and computer-readable storage medium
CN112533041A (en) Video playing method and device, electronic equipment and readable storage medium
US20210049354A1 (en) Human object recognition method, device, electronic apparatus and storage medium
US11423907B2 (en) Virtual object image display method and apparatus, electronic device and storage medium
US20220027575A1 (en) Method of predicting emotional style of dialogue, electronic device, and storage medium
EP3890294A1 (en) Method and apparatus for extracting hotspot segment from video
KR102358012B1 (en) Speech control method and apparatus, electronic device, and readable storage medium
CN110727668A (en) Data cleaning method and device
EP3796308A1 (en) Speech recognition control method and apparatus, electronic device and readable storage medium
CN111327958A (en) Video playing method and device, electronic equipment and storage medium
CN110675873A (en) Data processing method, device and equipment of intelligent equipment and storage medium
CN111582477A (en) Training method and device of neural network model
CN111638787B (en) Method and device for displaying information
CN112114926A (en) Page operation method, device, equipment and medium based on voice recognition
CN112383825B (en) Video recommendation method and device, electronic equipment and medium
US20210098012A1 (en) Voice Skill Recommendation Method, Apparatus, Device and Storage Medium
US20220328076A1 (en) Method and apparatus of playing video, electronic device, and storage medium
CN109643245B (en) Execution of task instances associated with at least one application
WO2022228433A1 (en) Information processing method and apparatus, and electronic device
CN113778595A (en) Document generation method and device and electronic equipment
CN110604918B (en) Interface element adjustment method and device, storage medium and electronic equipment
CN111352685A (en) Input method keyboard display method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210512

Address after: 100085 Baidu Building, 10 Shangdi Tenth Street, Haidian District, Beijing

Applicant after: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) Co.,Ltd.

Applicant after: Shanghai Xiaodu Technology Co.,Ltd.

Address before: 100085 Baidu Building, 10 Shangdi Tenth Street, Haidian District, Beijing

Applicant before: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) Co.,Ltd.

GR01 Patent grant
GR01 Patent grant