CN110737334A - information output method, device, terminal and computer readable storage medium - Google Patents
- Publication number
- CN110737334A CN110737334A CN201910956118.4A CN201910956118A CN110737334A CN 110737334 A CN110737334 A CN 110737334A CN 201910956118 A CN201910956118 A CN 201910956118A CN 110737334 A CN110737334 A CN 110737334A
- Authority
- CN
- China
- Prior art keywords
- prompt information
- facial expression
- terminal
- target
- face image
- Prior art date
- Legal status: Pending (the status is an assumption and is not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The embodiments of the present invention disclose an information output method, apparatus, terminal, and computer-readable storage medium. The method comprises: a terminal obtaining a face image; recognizing the facial expression in the face image; obtaining corresponding target prompt information according to the facial expression; and outputting the target prompt information, where the target prompt information comprises care phrases and/or recommended control application information.
Description
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an information output method, apparatus, terminal, and computer-readable storage medium.
Background
With the rapid development of electronic and internet technologies, people increasingly enjoy and rely on terminals such as smart phones and tablet computers. A terminal can respond to operations input by the user and provide corresponding services.
However, an existing terminal can respond only when the user manually inputs an operation. When the user inputs no operation, the terminal cannot intelligently identify the service function the user needs, so the intelligence of the terminal is low.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present invention is to provide an information output method, so that a terminal can respond according to the user's expression, thereby increasing the intelligence of the terminal.
To solve the above technical problem, a first aspect of the embodiments of the present invention discloses an information output method, including:
a terminal obtaining a face image and recognizing the facial expression in the face image;
acquiring corresponding target prompt information according to the facial expression;
and outputting the target prompt information, where the target prompt information comprises care phrases and/or recommended control application information.
A second aspect of the embodiments of the present invention discloses an information output apparatus, including:
a recognition module, configured to obtain a face image and recognize the facial expression in the face image;
an acquisition module, configured to obtain corresponding target prompt information according to the facial expression;
and an output module, configured to output the target prompt information, where the target prompt information comprises care phrases and/or recommended control application information.
A third aspect of the embodiments of the present invention discloses a terminal, including a processor, an input device, an output device, and a memory that are connected to one another, where the memory is configured to store a computer program comprising program instructions, and the processor is configured to call the program instructions to perform the method of the first aspect.
A fourth aspect of the embodiments of the present invention discloses a computer-readable storage medium storing a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method of the first aspect.
In the embodiments of the present invention, the terminal obtains a face image, recognizes the facial expression in the face image, further obtains corresponding target prompt information according to the facial expression, and outputs the target prompt information, where the target prompt information comprises care phrases and/or recommended control application information. In this way, the terminal can respond to the user's expression without a manual input operation, which increases the intelligence of the terminal.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below depict merely some embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of an information output method according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of another information output method according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of another information output method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an information output apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the present invention.
Referring to fig. 1, a schematic flowchart of an information output method according to an embodiment of the present invention is shown. The information output method described in this embodiment includes the following steps:
s101: the terminal acquires the face image and identifies the facial expression in the face image.
In this embodiment of the present invention, the terminal may be an electronic device such as a smart phone or a tablet computer, and the facial expression may be "yawning", "crying", "laughing", "eye closing", or the like. In a specific implementation, the terminal may start a camera when a preset trigger condition is detected, acquire a face image of the user operating the terminal, and recognize the facial expression in the face image. The preset trigger condition may be that the current time belongs to a preset time period. Specifically, before acquiring the face image and recognizing the facial expression, the terminal detects whether the current time is within a preset time range, where the preset time range may cover different periods, such as a noon break range of 12:00-14:00 or a late-night range of 22:00-2:00. If the terminal detects that the current time falls within the preset time range, it starts the camera, acquires a face image of the user operating the terminal, and recognizes the facial expression in the face image. Alternatively, the preset trigger condition may be that the application currently running on the terminal is a target application, such as a game, music, or video application, which may be preset by the system or set by the user; when the terminal detects that the currently running application is the target application, it starts the camera, acquires the face image, and recognizes the facial expression in the face image.
In some implementations, a specific way for the terminal to recognize the facial expression may be that the terminal acquires a face image of the user operating the terminal through a camera and detects whether the acquired face image matches any reference face image in a preset database; if so, the expression corresponding to the matching reference face image is determined as the facial expression corresponding to the acquired face image.
In some implementations, the terminal may also recognize the facial expression based on a camera and a facial expression recognition model. Specifically, the terminal obtains the face image input by the user through the camera, and the facial expression recognition model determines the facial expression corresponding to the face image.
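The reference-database matching described above can be sketched as a nearest-reference search. This is a minimal illustration under stated assumptions, not the patent's implementation: the feature vectors, the cosine-similarity measure, and the 0.8 threshold are all hypothetical, since the description does not name a concrete matching algorithm.

```python
# Hypothetical feature vectors standing in for reference face images.
REFERENCE_EXPRESSIONS = {
    "yawning":  [0.9, 0.1, 0.0],
    "crying":   [0.1, 0.9, 0.0],
    "laughing": [0.0, 0.1, 0.9],
}

def similarity(a, b):
    """Cosine similarity between two feature vectors (assumed metric)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def recognize_expression(features, threshold=0.8):
    """Return the best-matching reference expression, or None when no
    reference exceeds the (assumed) similarity threshold."""
    best_label, best_score = None, threshold
    for label, ref in REFERENCE_EXPRESSIONS.items():
        score = similarity(features, ref)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

A recognition-model variant would replace `similarity` and the reference table with a trained classifier, but the dispatch around it would look the same.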
S102: obtain corresponding target prompt information according to the facial expression.
In this embodiment of the present invention, after the terminal acquires the face image and recognizes the facial expression in it, the terminal determines and obtains the target prompt information corresponding to the facial expression according to the correspondence between expressions and prompt information, where the target prompt information comprises care phrases and/or recommended control application information.
In some implementations, the correspondence between expressions and prompt information may be preset by the system. Specifically, different expressions are modeled using big-data learning, so that after face recognition an expression can be automatically matched and the corresponding prompt information obtained. The correspondence may also be set by the user. Specifically, the user enters various expressions in advance, such as "yawning", "crying", "laughing", and "eye closing", and enters corresponding prompt information for each expression, thereby establishing the correspondence between expressions and prompt information. For example, a user-set correspondence between facial expressions and prompt information is shown in Table 1.
Table 1:
When the terminal detects that the facial expression of the user operating the terminal is "yawning", the care phrase corresponding to the expression may be determined as "Baby, it's the early hours, you should take a rest" together with the recommended control application information shown in Table 1. When the detected facial expression is "crying", the care phrase may be determined as "Tomorrow will be a new day" together with the recommended control application information shown in Table 1. When the detected facial expression is "laughing", the care phrase may be determined as "Glad you're happy" together with the recommended control application information shown in Table 1.
In some implementations, the specific way for the terminal to determine the target prompt information corresponding to the facial expression according to the correspondence between expressions and prompt information may be as follows: the terminal detects whether the facial expression matches any reference facial expression stored in a preset database; if a match is detected, the terminal looks up the prompt information corresponding to the matching reference facial expression. The preset database stores at least one pre-configured reference facial expression, the prompt information entered for each reference facial expression, and the correspondence between them.
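The expression-to-prompt lookup of S102 amounts to a keyed table. The sketch below is illustrative only: the table entries are placeholders in the spirit of Table 1, whose actual contents are not reproduced in this text.

```python
# Placeholder correspondence table (assumed contents, in the spirit of Table 1).
PROMPT_TABLE = {
    "yawning":  {"care": "It's late, take a rest.",
                 "recommended_app": "alarm clock"},
    "crying":   {"care": "Tomorrow will be a new day.",
                 "recommended_app": "music player"},
    "laughing": {"care": "Glad you're happy!",
                 "recommended_app": "camera"},
}

def get_target_prompt(expression):
    """Return the prompt information for a recognized expression, or None
    when the expression matches no reference expression in the table."""
    return PROMPT_TABLE.get(expression)
```

Returning `None` for an unmatched expression mirrors the description: prompt information is only produced when the facial expression matches a stored reference expression.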
S103: output the target prompt information, where the target prompt information comprises care phrases and/or recommended control application information.
In this embodiment of the present invention, the terminal determines the target prompt information corresponding to the facial expression according to the correspondence between expressions and prompt information, and then outputs the target prompt information.
Specifically, the terminal may output the target prompt information as follows: the terminal detects the currently running application, obtains the prompt information output mode corresponding to that application, and outputs the target prompt information in that mode. Different applications correspond to different output modes. The output mode may be text, picture, or voice; text and picture prompts may automatically pop up on the terminal screen, slide in from the top of the screen, slide in from the bottom, and so on, and the prompt information may be output at any position on the screen.
For example, if the application currently running on the terminal is a game application whose prompt information output mode is to display text prompt information in the upper right corner of the game interface, then when the terminal runs the game application and detects a "yawning" expression input by the user, it can display a care phrase and recommended control application information in the upper right corner of the game interface to prompt the user.
For another example, if the currently running application is a video application whose output mode is to display picture prompt information in the upper right corner of the video interface, then when the terminal runs the video application and detects a "yawning" expression input by the user, it can display picture prompt information, which may include care phrases and recommended control application information, in the upper right corner of the video interface.
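The per-application output dispatch of S103 can be sketched as a small lookup with a default. The mapping and screen positions below are assumptions taken from the two examples above; the description only states that different applications correspond to different output modes.

```python
# Assumed per-application output modes, based on the examples in the text.
OUTPUT_MODES = {
    "game":  ("text", "top-right"),     # text prompt, upper right corner
    "video": ("picture", "top-right"),  # picture prompt, upper right corner
}

def output_prompt(current_app, prompt):
    """Render the prompt in the output mode of the running application,
    falling back to an assumed default pop-up for unlisted applications."""
    mode, position = OUTPUT_MODES.get(current_app, ("text", "popup"))
    return f"[{mode}@{position}] {prompt}"
```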
In this embodiment of the present invention, the terminal obtains a face image, recognizes the facial expression in the face image, further obtains corresponding target prompt information according to the facial expression, and outputs the target prompt information, where the target prompt information comprises care phrases and/or recommended control application information.
Referring to fig. 2, a schematic flowchart of another information output method according to an embodiment of the present invention is shown. The information output method described in this embodiment includes the following steps:
S201: detect whether the current time is within a preset time range.
In this embodiment of the present invention, the terminal may be an electronic device such as a smart phone or a tablet computer. The terminal may detect whether the current time is within a preset time range, where the preset time range may cover different periods, such as a noon break range of 12:00-14:00 or a late-night range of 22:00-2:00. If the current time is detected to be within the preset time range, step S202 is executed; otherwise, the process ends. The time range may be set by the user, or the terminal may determine it from the user's operation habits and normal rest schedule; specifically, the overlap between the periods in which the user frequently operates the phone and the user's rest periods may be determined as the preset time range.
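The time-range check of S201 has one subtlety worth making explicit: the late-night range 22:00-2:00 crosses midnight, so a naive `start <= now < end` comparison fails for it. A minimal sketch handling both cases (the half-open interval convention is an assumption):

```python
from datetime import time

def in_time_range(now, start, end):
    """Return True if `now` lies in [start, end). A range whose end is
    earlier than its start (e.g. 22:00-2:00) is treated as crossing
    midnight."""
    if start <= end:
        return start <= now < end
    return now >= start or now < end  # wraps past midnight

# Preset ranges named in the description.
NOON_BREAK = (time(12, 0), time(14, 0))  # noon break 12:00-14:00
LATE_NIGHT = (time(22, 0), time(2, 0))   # late night 22:00-2:00
```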
S202: the terminal acquires the face image and identifies the facial expression in the face image.
In the embodiment of the invention, the facial expression can be yawning, crying, laughing, eye closing and the like. The terminal can start the camera when detecting a preset trigger condition, acquire a face image of a user operating the terminal, and recognize a face expression in the face image. The preset triggering condition can be an operation instruction input by a user, specifically, the user can input an operation of starting the expression recognition mode, after the terminal detects the operation, the expression recognition mode is started, the face image of the user operating the terminal is obtained in real time through the camera, and the face expression corresponding to the face image is analyzed. Or, the preset trigger condition may be that the application currently running on the terminal is a target application, where the target application may be a game program, a music program, a video program, or the like, and may be specifically preset by a system or set by a user, and when the terminal detects that the application currently running is the target application, the terminal starts a camera to acquire a face image of the user operating the terminal, and recognizes a facial expression in the face image.
In some implementations, the terminal may recognize the facial expression by obtaining a face image of the user operating the terminal through a camera and detecting whether the obtained face image matches any reference face image in a preset database; if so, the expression corresponding to the matching reference face image is determined as the facial expression corresponding to the obtained face image.
In some implementations, the terminal may also recognize the facial expression based on a camera and a facial expression recognition model. Specifically, the terminal acquires the face image input by the user through the camera, and the facial expression recognition model determines the facial expression corresponding to the face image.
S203: detect whether the object corresponding to the facial expression matches a target object, and obtain the correspondence between expressions and prompt information for the target object.
In this embodiment of the present invention, after the terminal recognizes the facial expression, it detects whether the object corresponding to the facial expression matches a target object, where the target object may be a child, an adult, or an elderly person. If the object corresponding to the detected facial expression matches the target object, the correspondence between expressions and prompt information for that target object is obtained.
In some implementations, the target object is a child. When the terminal detects that the object corresponding to the facial expression input by the user is a child, it determines that the object matches the target object and obtains the correspondence between expressions and prompt information for that target object. For example, this correspondence is shown in Table 2.
Table 2:
Then, when the terminal detects that the facial expression of the user operating the terminal is "yawning" and the object corresponding to the expression is a child, it may determine that the care phrase corresponding to the expression is "Baby, it's the early hours, you should take a rest" together with the recommended control application information shown in Table 2. When the detected expression is "yawning" and the corresponding object is an adult, the care phrase may be "Master, it's late, you should take a break" together with the recommended control application information shown in Table 2. When the detected expression is "yawning" and the corresponding object is an elderly person, the care phrase may be "Early to bed and early to rise keeps the body strong" together with the recommended control application information shown in Table 2.
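The object-dependent lookup of S203/S204 keys the correspondence on both the target object and the expression. In the sketch below, the care phrases paraphrase the translated examples above, and how the object class (child, adult, elderly person) is determined from the face image is left open, as the description does not specify a classifier.

```python
# Assumed per-object correspondence, paraphrasing the examples in the text.
PROMPTS_BY_OBJECT = {
    ("child", "yawning"):   "Baby, it's the early hours, you should take a rest.",
    ("adult", "yawning"):   "Master, it's late, you should take a break.",
    ("elderly", "yawning"): "Early to bed and early to rise keeps the body strong.",
}

def get_prompt(target_object, expression):
    """Look up the care phrase for a (target object, expression) pair;
    None means no correspondence is stored for that pair."""
    return PROMPTS_BY_OBJECT.get((target_object, expression))
```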
S204: obtain corresponding target prompt information according to the facial expression.
In this embodiment of the present invention, after the terminal recognizes the facial expression, it determines the target prompt information corresponding to the facial expression according to the correspondence between expressions and prompt information.
In some implementations, the terminal may determine the target prompt information corresponding to the facial expression by detecting whether the facial expression matches any reference facial expression stored in a preset database; if a match is detected, the terminal looks up the prompt information corresponding to the matching reference facial expression. The preset database stores at least one pre-configured reference facial expression, the prompt information entered for each reference facial expression, and the correspondence between them.
S205: output the target prompt information, where the target prompt information comprises care phrases and/or recommended control application information.
In this embodiment of the present invention, the terminal determines the target prompt information corresponding to the facial expression according to the correspondence between expressions and prompt information, and then outputs the target prompt information.
In some implementations, before the target prompt information is output, it may be detected whether the current operation state of the terminal is an idle state, where the operation state includes a busy state and an idle state. If the terminal is detected to be in the idle state, step S205 is executed; if it is detected to be in the busy state, the process may end.
The idle state may be defined as a state in which the application currently running on the terminal is an entertainment application, such as a music, video, or news application, or in which no application is running; for example, if a video application is running, the terminal is determined to be idle. The busy state may be a state in which the currently running application is a non-entertainment application, such as a phone, short message, or office software application; for example, if a phone application is running, the terminal is determined to be busy.
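The idle/busy check above reduces to a category test on the running application. The category sets below are only the examples named in the description, not an exhaustive classification:

```python
# Application categories, taken from the examples in the description.
ENTERTAINMENT_APPS = {"music", "video", "news"}
NON_ENTERTAINMENT_APPS = {"phone", "short message", "office software"}

def is_idle(current_app):
    """Idle: no application running, or the running application is an
    entertainment application; otherwise the terminal is busy."""
    return current_app is None or current_app in ENTERTAINMENT_APPS
```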
In some implementations, the terminal may output the target prompt information by detecting the currently running application, obtaining the prompt information output mode corresponding to that application, and outputting the target prompt information in that mode.
Different applications correspond to different prompt information output modes. The output mode may be text, picture, or voice; text and picture prompts may automatically pop up on the terminal screen, slide in from the top, slide in from the bottom, and so on, and the prompt information may be output at any position on the screen. For example, the output mode corresponding to a game application may be to display text prompt information in the upper right corner of the game interface, and the mode corresponding to a video application may be to display picture prompt information in the upper right corner of the video interface.
In some implementations, the target prompt information includes multiple pieces of recommended control application information. When outputting the target prompt information, the terminal may obtain the current time, determine the target recommended control application information from the pieces of recommended control application information according to a correspondence between time and recommended control application information, and output the target recommended control application information.
For example, when the terminal receives an "OK" gesture or a smiling expression input by the user, it automatically opens the recommended control application corresponding to the recommended control application information; when the terminal receives an "X" gesture or a long eye-closing input by the user, it automatically closes that application.
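The time-based selection and the gesture/expression control above can be sketched together. The time bands, application names, and input signal names are assumptions for illustration; the description does not fix any of them.

```python
from datetime import time

# Assumed correspondence between time bands and recommended applications.
RECOMMENDATIONS = [
    (time(12, 0), time(14, 0), "nap timer"),
    (time(22, 0), time(23, 59), "sleep sounds"),
]

def pick_recommendation(now):
    """Select the target recommended application for the current time,
    or None if no band matches."""
    for start, end, app in RECOMMENDATIONS:
        if start <= now <= end:
            return app
    return None

def handle_user_input(signal, app_state):
    """'OK' gesture / smile opens the recommended application; 'X'
    gesture / long eye-closing closes it. Signal names are assumed."""
    if signal in ("ok_gesture", "smile"):
        app_state["open"] = True
    elif signal in ("x_gesture", "long_eye_close"):
        app_state["open"] = False
    return app_state
```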
In this embodiment of the present invention, the terminal detects whether the current time is within the preset time range; the terminal then obtains a face image, recognizes the facial expression in the face image, detects whether the object corresponding to the facial expression matches the target object, and obtains the correspondence between expressions and prompt information for the target object; further, the terminal obtains the corresponding target prompt information according to the facial expression and outputs it, where the target prompt information comprises care phrases and recommended control application information.
Referring to fig. 3, a schematic flowchart of an information output method according to an embodiment of the present invention is shown. In the flow of fig. 3, the user may enter multiple types of expressions, such as "yawning", "crying", and "laughing", into the terminal's expression library in advance, and enter corresponding prompt information for each type of expression to establish the correspondence between expressions and prompt information; the user-set correspondence between facial expressions and prompt information may be as shown in Table 1.
An information output apparatus according to an embodiment of the present invention is described in detail below with reference to fig. 4. It should be noted that the apparatus shown in fig. 4 is configured to execute the methods of the embodiments shown in fig. 1 and fig. 2. For ease of description, only the parts related to this embodiment are shown; for undisclosed technical details, refer to the embodiments shown in fig. 1 and fig. 2.
Referring to fig. 4, a schematic structural diagram of an information output apparatus according to the present invention is shown. The information output apparatus 40 includes a recognition module 401, an acquisition module 402, an output module 403, and an establishing module 404.
The recognition module 401 is configured to obtain a face image by a terminal and recognize a facial expression in the face image;
an obtaining module 402, configured to obtain corresponding target prompt information according to the facial expression;
an output module 403, configured to output the target prompt information, where the target prompt information includes care phrases and/or recommended control application information.
In some implementations, the apparatus further includes an establishing module 404, specifically configured to:
establish a correspondence between at least one reference facial expression and the prompt information entered for each of the at least one reference facial expression.
In some implementations, the correspondence is preset by the system or set by the user.
In some implementations, the recognition module 401 is further configured to:
detecting whether the current time is within a preset time range;
and if the current time is detected to be within the preset time range, execute the step of obtaining a face image and recognizing the facial expression in the face image.
In some implementations, the obtaining module 402 is further configured to:
detecting whether an object corresponding to the facial expression is matched with a target object;
and if the object corresponding to the facial expression is detected to be matched with the target object, acquiring the corresponding relation between the expression corresponding to the target object and the prompt information.
In some implementations, the obtaining module 402 is specifically configured to:
detect whether the facial expression matches any reference facial expression;
and if so, determine the prompt information corresponding to the reference facial expression matching the facial expression as the target prompt information.
In some implementations, the output module 403 is specifically configured to:
acquiring current time;
determine target recommended control application information from the pieces of recommended control application information according to the correspondence between time and recommended control application information;
and outputting the target recommendation control application information.
In some implementations, the output module 403 is further configured to:
receive a user operation, where the user operation includes a gesture operation and/or an expression operation;
and control the recommended control application corresponding to the recommended control application information according to the user operation.
In this embodiment of the present invention, the recognition module 401 obtains a face image and recognizes the facial expression in the face image; further, the obtaining module 402 obtains the corresponding target prompt information according to the facial expression, and the output module 403 outputs the target prompt information, where the target prompt information includes care phrases and/or recommended control application information.
Referring to fig. 5, a schematic structural diagram of a terminal is provided for an embodiment of the present invention. As shown in fig. 5, the terminal includes at least one processor 501, an input device 503, an output device 504, a memory 505, and at least one communication bus 502, where the communication bus 502 is used to implement connection and communication between these components. The input device 503 may be a control panel, a microphone, or the like, and the output device 504 may be a display screen or the like. The memory 505 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory, and may optionally be at least one storage device located remotely from the processor 501. A set of program codes is stored in the memory 505, and the processor 501, the input device 503, and the output device 504 call the program codes stored in the memory 505 to perform the following operations:
the processor 501 is used for acquiring a face image by a terminal and identifying a facial expression in the face image;
the processor 501 is configured to obtain corresponding target prompt information according to the facial expression;
an output device 504 configured to output the target prompt information, where the target prompt information includes care terms and/or recommended control application information.
In some implementations, the processor 501 is further configured to:
establish a correspondence between at least one reference facial expression and prompt information for the at least one reference facial expression.
In some implementations, the correspondence is preset by the system or set by the user.
In some implementations, the processor 501 is further configured to:
detect whether the current time is within a preset time range;
and if the current time is detected to be within the preset time range, execute the steps of acquiring the face image input by the input device 503 and recognizing the facial expression in the face image.
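The gate on the preset time range could be sketched like this; the bounds (8:00-22:00) are illustrative assumptions, since the patent leaves the range to configuration:

```python
from datetime import datetime, time

# Hypothetical preset time range during which expression capture runs,
# e.g. waking hours only; the bounds are illustrative, not from the patent.
CAPTURE_START = time(8, 0)
CAPTURE_END = time(22, 0)

def capture_enabled(now=None):
    """Return True if the current time falls inside the preset range,
    i.e. the face-image acquisition step should be executed."""
    t = (now or datetime.now()).time()
    return CAPTURE_START <= t <= CAPTURE_END
```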
In some implementations, the processor 501 is further configured to:
detect whether an object corresponding to the facial expression matches a target object;
and if the object corresponding to the facial expression matches the target object, acquire the correspondence between the expression corresponding to the target object and the prompt information.
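The object-matching step above can be read as: identify which registered user the face belongs to, then load that user's own expression-to-prompt table. A sketch, with hypothetical user names and prompts:

```python
# Hypothetical per-user correspondence tables: each target object (user)
# registered on the terminal has its own expression-to-prompt mapping.
USER_PROMPT_TABLES = {
    "owner": {"tired": "Time for a rest?"},
    "child": {"sad": "How about a cartoon?"},
}

def load_correspondence(recognized_object):
    """Return the prompt table for the matched target object, or None
    when the recognized object matches no registered target object."""
    return USER_PROMPT_TABLES.get(recognized_object)
```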
In some implementations, the processor 501 is specifically configured to:
detect whether the facial expression matches any one of the reference facial expressions;
and if so, determining the prompt information corresponding to the reference facial expression matched with the facial expression as target prompt information.
In some implementations, the processor 501 is specifically configured to:
acquire the current time;
determine target recommended control application information from the at least one piece of recommended control application information according to the correspondence between time and recommended control application information;
and outputting the target recommendation control application information.
In some implementations, the processor 501 is further configured to:
receive a user operation, wherein the user operation comprises a gesture operation and/or an expression operation;
and controlling the recommended control application corresponding to the recommended control application information.
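Dispatching a received user operation to a control action on the recommended application could be sketched as a small lookup. The operation names and actions below are hypothetical; the patent only says the operation is a gesture and/or expression:

```python
# Hypothetical mapping from (operation kind, operation value) to a
# control action on the recommended control application.
OPERATION_ACTIONS = {
    ("gesture", "swipe_up"): "open",
    ("gesture", "swipe_down"): "close",
    ("expression", "nod"): "confirm",
}

def handle_user_operation(kind, value, app):
    """Map a gesture or expression operation to a command for the app;
    return None for an unrecognized operation."""
    action = OPERATION_ACTIONS.get((kind, value))
    if action is None:
        return None
    return f"{action} {app}"
```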
In the embodiment of the present invention, the processor 501 obtains a face image input by the input device 503 and identifies a facial expression in the face image; it then obtains corresponding target prompt information according to the facial expression, and the output device 504 outputs the target prompt information, where the target prompt information includes care words and/or recommended control application information.
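Taken together, the acquire-recognize-look up-output flow could be sketched end to end. The recognizer is stubbed out, since the patent does not specify any particular recognition algorithm, and the table passed in is assumed to be a prompt correspondence like the one described above:

```python
def recognize_expression(face_image):
    # Stub: a real terminal would run a facial-expression classifier here.
    # For this sketch, face_image is a plain dict carrying a label.
    return face_image.get("expression")

def output_information(face_image, prompt_table):
    """Acquire -> recognize -> look up -> output target prompt information."""
    expression = recognize_expression(face_image)
    prompt = prompt_table.get(expression)
    if prompt is not None:
        print(prompt)  # stands in for the output device 504
    return prompt
```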
The modules in the embodiment of the present invention may be implemented by a general-purpose integrated circuit, such as a Central Processing Unit (CPU), or by an Application-Specific Integrated Circuit (ASIC).
It should be understood that, in the embodiment of the present invention, the processor 501 may be a Central Processing Unit (CPU), and may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
The bus 502 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 5, but this does not mean that there is only one bus or only one type of bus.
In a specific implementation, the processor 501, the input device 503, the output device 504, and the memory 505 described in this embodiment of the present invention may execute the implementation described in the method embodiment shown in fig. 1 or fig. 2 provided in this embodiment of the present invention, and may also execute the implementation of the information output apparatus described in this embodiment of the present invention, which is not described herein again.
In another embodiment of the present invention, a computer-readable storage medium is provided, storing a computer program comprising program instructions that, when executed by a processor, cause a terminal to acquire a face image, recognize a facial expression in the face image, further acquire corresponding target prompt information according to the facial expression, and output the target prompt information, wherein the target prompt information comprises care words and/or recommended control application information.
The computer-readable storage medium may be an internal storage unit of the server described in any of the above embodiments, such as a hard disk or a memory of the server, or an external storage device of the server, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the server. Further, the computer-readable storage medium may include both the internal storage unit and the external storage device of the server.
It will be understood by those skilled in the art that all or part of the processes in the methods of the above embodiments may be implemented by instructing the relevant hardware through a computer program, and the program may be stored in a computer-readable storage medium; when executed, the program may include the processes of the above method embodiments.
While the invention has been described with reference to a number of embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (11)
1. An information output method, the method comprising:
acquiring, by a terminal, a face image and recognizing a facial expression in the face image;
acquiring corresponding target prompt information according to the facial expression;
and outputting the target prompt information, wherein the target prompt information comprises care words and/or recommendation control application information.
2. The method of claim 1, wherein before the terminal acquires a face image and recognizes a facial expression in the face image, the method further comprises:
establishing a correspondence between at least one reference facial expression and prompt information for the at least one reference facial expression.
3. The method of claim 2, wherein the correspondence is preset by a system or set by a user.
4. The method of claim 1, wherein before the terminal acquires a face image and recognizes a facial expression in the face image, the method further comprises:
detecting whether the current time is within a preset time range;
and if the current time is detected to be within the preset time range, executing the step of acquiring a face image by the terminal and recognizing the facial expression in the face image.
5. The method of claim 1, wherein after the terminal acquires the face image and recognizes the facial expression in the face image, and before the step of acquiring the corresponding target prompt information according to the facial expression, the method further comprises:
detecting whether an object corresponding to the facial expression matches a target object;
and if the object corresponding to the facial expression matches the target object, acquiring the correspondence between the expression corresponding to the target object and the prompt information.
6. The method of claim 2, wherein the obtaining of the corresponding target prompt information according to the facial expression comprises:
detecting whether the facial expression matches any one of the reference facial expressions;
and if so, determining the prompt information corresponding to the reference facial expression matched with the facial expression as target prompt information.
7. The method of claim 1, wherein the target prompt information includes at least one piece of recommended control application information, and wherein outputting the target prompt information comprises:
acquiring current time;
determining target recommended control application information from the at least one piece of recommended control application information according to the correspondence between time and recommended control application information;
and outputting the target recommendation control application information.
8. The method of claim 1, wherein after outputting the target hint information, the method further comprises:
the terminal receives a user operation, wherein the user operation comprises a gesture operation and/or an expression operation;
and controlling the recommended control application corresponding to the recommended control application information.
9. An information output apparatus, comprising:
the recognition module is used for acquiring a face image by a terminal and recognizing the facial expression in the face image;
the acquisition module is used for acquiring corresponding target prompt information according to the facial expression;
and the output module is used for outputting the target prompt information, and the target prompt information comprises care words and/or recommended control application information.
10. A terminal, comprising a processor, an input device, an output device and a memory, the processor, the input device, the output device and the memory being interconnected, wherein the memory is used for storing a computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the method according to any one of claims 1-8.
11. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910956118.4A CN110737334A (en) | 2019-10-09 | 2019-10-09 | information output method, device, terminal and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110737334A true CN110737334A (en) | 2020-01-31 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111626253A (en) * | 2020-06-02 | 2020-09-04 | 上海商汤智能科技有限公司 | Expression detection method and device, electronic equipment and storage medium |
CN112346352A (en) * | 2020-11-20 | 2021-02-09 | 深圳Tcl新技术有限公司 | Control prompting method and device for terminal equipment and computer readable storage medium |
CN113065456A (en) * | 2021-03-30 | 2021-07-02 | 上海商汤智能科技有限公司 | Information prompting method and device, electronic equipment and computer storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||