CN112489654A - Voice interaction method and device, intelligent terminal and storage medium


Info

Publication number: CN112489654A
Application number: CN202011287390.7A
Authority: CN (China)
Prior art keywords: user, semantic analysis, voice, dimensional, content
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventor: 周胜杰
Current assignee: Shenzhen Konka Electronic Technology Co Ltd
Original assignee: Shenzhen Konka Electronic Technology Co Ltd
Application filed by Shenzhen Konka Electronic Technology Co Ltd
Priority to CN202011287390.7A
Publication of CN112489654A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/16: Sound input; sound output
    • G06F 3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/08: Speech classification or search
    • G10L 15/18: Speech classification or search using natural language modelling
    • G10L 15/1815: Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223: Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a voice interaction method and device, an intelligent terminal and a storage medium. The voice interaction method comprises the following steps: acquiring a voice instruction of a target object; performing speech-to-text recognition on the voice instruction to obtain recognition content; performing multi-dimensional semantic analysis based on the recognition content to obtain a semantic analysis result; and responding based on the semantic analysis result. Because multi-dimensional semantic analysis is performed on the recognized text after the user's speech is converted to text, the multiple dimensions along which a user thinks about a problem are fully considered, the user's real intention is understood from the multi-dimensional analysis, the accuracy of semantic analysis and recognition is improved, and better voice interaction is provided for the user.

Description

Voice interaction method and device, intelligent terminal and storage medium
Technical Field
The invention relates to the technical field of voice processing and recognition, in particular to a voice interaction method and device, an intelligent terminal and a storage medium.
Background
At present, with the continuous development of science and technology and the increasing maturity of artificial intelligence, intelligent devices are widely applied and can be seen everywhere in people's daily lives. As users pursue more efficient interaction, voice interaction, as a novel and efficient interaction mode, has been widely adopted by various intelligent devices such as smart speakers and smart televisions, and each of these devices can meet user requirements through voice interaction.
When voice interaction is performed, the user's intention needs to be recognized from the acquired user speech, and a response is then made to complete the interaction. In the prior art, the acquired user speech is merely converted into text, simple semantic analysis is performed on the converted text, and the semantic analysis result is taken as the user's intention for interacting with the user. The problem with the prior art is that the user's real intention cannot be accurately analyzed and understood from simple speech-to-text conversion and semantic analysis alone; the accuracy of semantic analysis and recognition is low, and the operation executed during the interaction is often not what the user wanted, which hinders voice interaction and affects the user experience.
Thus, there is still a need for improvement and development of the prior art.
Disclosure of Invention
Aiming at the technical problems in the prior art, where the acquired user speech is merely converted into text, simple semantic analysis is performed on the converted text, and the semantic analysis result is taken as the user's intention for interacting with the user, so that the accuracy of semantic analysis and recognition is low and voice interaction suffers, the invention provides a voice interaction method and device, an intelligent terminal and a storage medium. The method can obtain a voice instruction of a target object; perform speech-to-text recognition on the voice instruction to obtain recognition content; perform multi-dimensional semantic analysis based on the recognition content to obtain a semantic analysis result; and respond based on the semantic analysis result. Because multi-dimensional semantic analysis is performed on the recognized text after the user's speech is converted to text, the multiple dimensions along which a user thinks about a problem are fully considered, the user's real intention is understood from the multi-dimensional analysis, the accuracy of semantic analysis and recognition is improved, and better voice interaction is provided for the user.
In order to achieve the above technical effects, a first aspect of the present invention provides a voice interaction method, where the method includes:
acquiring a voice instruction of a target object;
performing speech-to-text recognition on the voice instruction to obtain recognition content;
performing multi-dimensional semantic analysis based on the recognition content to obtain a semantic analysis result;
and responding based on the semantic analysis result.
Optionally, the performing of multi-dimensional semantic analysis based on the recognition content to obtain a semantic analysis result includes:
performing semantic understanding on the recognition content to obtain a target field corresponding to the recognition content;
obtaining a multi-dimensional analysis strategy corresponding to the target field;
and performing multi-dimensional semantic analysis on the recognition content based on the multi-dimensional analysis strategy to obtain a semantic analysis result.
Optionally, the obtaining of the multidimensional analysis strategy corresponding to the target field includes:
identifying and acquiring the identity information of the target object;
and acquiring a multi-dimensional analysis strategy corresponding to the target field based on the identity information.
Optionally, the method further includes:
recording behavior habit data of the target object;
and generating a multi-dimensional analysis strategy for the target object based on the behavior habit data of the target object.
Optionally, the responding based on the semantic analysis result includes:
generating an operation instruction based on the semantic analysis result;
and interacting with the target object based on the operation instruction.
A second aspect of the present invention provides a voice interaction apparatus, wherein the apparatus includes:
the instruction acquisition module is used for acquiring a voice instruction of the target object;
the instruction recognition module is used for performing speech-to-text recognition on the voice instruction to obtain recognition content;
the semantic analysis module is used for performing multi-dimensional semantic analysis based on the recognition content to obtain a semantic analysis result;
and the response control module is used for responding based on the semantic analysis result.
Optionally, the semantic analysis module includes:
a target field acquisition unit, configured to perform semantic understanding on the recognition content and obtain the target field corresponding to the recognition content;
an analysis strategy obtaining unit, configured to obtain the multidimensional analysis strategy corresponding to the target field;
and a multidimensional semantic analysis unit, configured to perform multidimensional semantic analysis on the recognition content based on the multidimensional analysis strategy to obtain a semantic analysis result.
Optionally, the analysis policy obtaining unit includes:
the identity information acquisition subunit is used for identifying and acquiring the identity information of the target object;
and the strategy obtaining subunit is used for obtaining the multidimensional analysis strategy corresponding to the target field based on the identity information.
A third aspect of the present invention provides an intelligent terminal, including a memory, a processor, and a program stored in the memory and executable on the processor, where the program, when executed by the processor, implements the steps of any of the above voice interaction methods.
A fourth aspect of the present invention provides a storage medium storing a computer program which, when executed by a processor, implements the steps of any of the above voice interaction methods.
In summary, the scheme of the invention obtains a voice instruction of a target object; performs speech-to-text recognition on the voice instruction to obtain recognition content; performs multi-dimensional semantic analysis based on the recognition content to obtain a semantic analysis result; and responds based on the semantic analysis result. Because the scheme performs multi-dimensional semantic analysis on the recognized text after the user's speech is converted to text, the multiple dimensions along which a user thinks about a problem are fully considered and the user's real intention is understood from the multi-dimensional analysis. Therefore, compared with the prior-art scheme that merely converts the acquired user speech into text, performs simple semantic analysis on the converted text, and takes the semantic analysis result as the user's intention, the scheme of the invention improves the accuracy of semantic analysis and recognition and helps provide better voice interaction for the user.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of a voice interaction method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating the step S300 in FIG. 1 according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating the step S302 in FIG. 2 according to an embodiment of the present invention;
FIG. 4 is a flow chart of another voice interaction method provided by the embodiment of the invention;
FIG. 5 is a flowchart illustrating the step S400 in FIG. 1 according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a voice interaction apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of the specific structure of the semantic analysis module 630 in FIG. 6 according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of the analysis policy obtaining unit 632 in fig. 7 according to an embodiment of the present invention;
fig. 9 is a schematic block diagram of an internal structure of an intelligent terminal according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings of the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, the present invention may also be implemented in ways other than those specifically described here, and those of ordinary skill in the art can make similar extensions without departing from the spirit of the invention; the invention is therefore not limited to the specific embodiments disclosed below.
With the improvement of living standards and the development of science and technology, intelligent devices can be seen everywhere in people's lives. At present, most intelligent devices (such as smart speakers, smart televisions, smart air conditioners and smart refrigerators) support voice interaction to make user operation convenient. With the development of artificial intelligence and Internet of Things technology, voice interaction in the field of the artificial intelligence Internet of Things (AIoT) has also gained wide attention. When voice interaction is performed, the user's intention needs to be recognized from the acquired user speech, and a response is then made to complete the interaction. The conventional voice interaction method in the prior art merely converts the acquired user speech into text, performs simple semantic analysis on the converted text, and takes the semantic analysis result as the user's intention for interacting with the user. The problem with the prior art is that the user's real intention cannot be accurately analyzed and understood from simple speech-to-text conversion and semantic analysis alone, and responses irrelevant to what the user actually asked often appear, which hinders voice interaction and greatly affects the user's operating experience. Therefore, a method that can improve the accuracy of recognizing the user's semantics and intention during voice interaction is needed.
To solve the problems in the prior art, the invention provides a voice interaction method. In the embodiment of the invention, a voice instruction of a target object is obtained; speech-to-text recognition is performed on the voice instruction to obtain recognition content; multi-dimensional semantic analysis is performed based on the recognition content to obtain a semantic analysis result; and a response is made based on the semantic analysis result. Because multi-dimensional semantic analysis is performed on the recognized text after the user's speech is converted to text, the multiple dimensions along which a user thinks about a problem are fully considered, the user's real intention is understood from the multi-dimensional analysis, the accuracy of semantic analysis and recognition is improved, and better voice interaction is provided for the user.
Exemplary method
As shown in fig. 1, an embodiment of the present invention provides a voice interaction method, where the method includes the following steps:
step S100, acquiring a voice command of a target object.
The target object is the user who needs voice interaction. Optionally, the target object may be a specific user, or all users who may issue voice instructions. For example, for a given smart device, the voiceprints of certain specific users may be enrolled so that only those users can interact with the device by voice; alternatively, no restriction is imposed, so that all users can interact with it by voice. Furthermore, restrictions may be set on some functions of the smart device so that only certain users may use them. For example, a restriction may be set on the download function of a smart speaker: any user who issues voice instructions can switch songs, but only a specific user (such as an administrator) can download songs. In this way, voice interaction by different users is managed and the user experience is improved.
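For illustration only, the following minimal Python sketch shows one way such per-user restriction might be realized; the voiceprint table, admin set and function names are hypothetical assumptions, not part of the claimed method.

```python
VOICEPRINT_DB = {"vp_001": "alice", "vp_002": "bob"}   # enrolled voiceprint -> user
ADMIN_USERS = {"alice"}                                # may use restricted functions
RESTRICTED_FUNCTIONS = {"download_song"}

def is_allowed(voiceprint_id: str, function: str) -> bool:
    """Return True if the speaker may invoke the given function."""
    user = VOICEPRINT_DB.get(voiceprint_id)            # unknown speakers map to None
    if function in RESTRICTED_FUNCTIONS:
        return user in ADMIN_USERS                     # e.g. only an admin downloads songs
    return True                                        # e.g. song switching is open to all

print(is_allowed("vp_002", "download_song"))   # False: bob is not an administrator
print(is_allowed("vp_999", "switch_song"))     # True: unrestricted function
```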
Step S200: performing speech-to-text recognition on the voice instruction to obtain recognition content.
The recognition content comprises the text corresponding to the user's voice instruction. Converting the voice instruction into text facilitates further processing and recognition.
Step S300: performing multi-dimensional semantic analysis based on the recognition content to obtain a semantic analysis result.
The analysis dimensions in the multi-dimensional semantic analysis comprise one or more of time, environment, festivals, interpersonal relationships, smart home data, weather, target object habits and target object behaviors.
Optionally, the dimensions in the multi-dimensional semantic analysis may also include others, such as the user's schedule, which are not specifically limited here. By considering as many of these dimensions as possible during semantic analysis, the way and the dimensions in which a user thinks about a problem can be simulated, so that the user's intention is understood more accurately and a better voice interaction experience is provided.
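For illustration only, the following minimal Python sketch gathers the dimensions listed above into a single context object that a multi-dimensional semantic analysis could consume; the field names and the gather_context() helper are hypothetical assumptions, not details from the patent.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class DimensionContext:
    time: datetime
    weather: str = "unknown"
    festival: Optional[str] = None
    smart_home: dict = field(default_factory=dict)   # e.g. refrigerator contents
    user_habits: dict = field(default_factory=dict)  # recorded behavior habits
    schedule: list = field(default_factory=list)     # optional extra dimension

def gather_context(user_id: str) -> DimensionContext:
    """Collect whatever dimension data is available for this user."""
    # Real data would come from sensors, cloud services and habit records.
    return DimensionContext(time=datetime.now(),
                            weather="sunny",
                            smart_home={"refrigerator": ["eggs", "milk"]})

ctx = gather_context("alice")
print(ctx.time.hour, ctx.weather)
```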
Step S400: responding based on the semantic analysis result.
Optionally, the response may include replying to the user, controlling the state of other smart devices, scheduling a journey for the user, and so on, which is not specifically limited here.
As can be seen from the above, the voice interaction method provided by the embodiment of the present invention obtains a voice instruction of a target object; performs speech-to-text recognition on the voice instruction to obtain recognition content; performs multi-dimensional semantic analysis based on the recognition content to obtain a semantic analysis result; and responds based on the semantic analysis result. Because multi-dimensional semantic analysis is performed on the recognized text after the user's speech is converted to text, the multiple dimensions along which a user thinks about a problem are fully considered and the user's real intention is understood from the multi-dimensional analysis. Therefore, compared with the prior-art scheme that merely converts the acquired user speech into text, performs simple semantic analysis on the converted text, and takes the semantic analysis result as the user's intention, the scheme of the invention improves the accuracy of semantic analysis and recognition and helps provide better voice interaction for the user.
Optionally, as shown in fig. 2, in this embodiment, the step S300 includes:
step S301, performing semantic understanding on the identified content, and acquiring a target domain corresponding to the identified content.
The target field is a field corresponding to the recognized text recognition content, and the target field and the corresponding relationship between the target field and the text recognition content may be obtained by presetting based on user habits, analyzing big data based on artificial intelligence, or by corresponding user-defined adjustment settings, which is not limited herein. Optionally, the target fields may include eating, living, traveling, dressing, and the like, and there may be other fields, which are not limited herein.
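For illustration only, the following minimal Python sketch maps recognition content to a target field with a keyword table; a real system might instead use the preset, big-data or user-defined correspondences described above, and the keyword table here is a hypothetical assumption.

```python
DOMAIN_KEYWORDS = {
    "eating":   ["hungry", "eat", "breakfast", "lunch", "dinner"],
    "dressing": ["wear", "clothes", "dress"],
    "travel":   ["go to", "trip", "navigate"],
}

def classify_domain(recognition_content: str) -> str:
    """Map recognized text to a target field via a simple keyword lookup."""
    text = recognition_content.lower()
    for domain, keywords in DOMAIN_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return domain
    return "general"                 # fall back when no field matches

print(classify_domain("I am hungry"))   # -> eating
```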
Step S302: obtaining the multi-dimensional analysis strategy corresponding to the target field.
The multidimensional analysis strategy can be stored locally or in the cloud in the form of a policy table. Specifically, the dimensions considered, and therefore the corresponding strategy, differ across target fields. For example, when the target field is "eating", the dimensions considered may include the time, the user's eating habits, the food remaining in the refrigerator, and so on; when the target field is "dressing", the dimensions considered may include the user's gender and age, the weather, the temperature, the user's preferences, and so on. In this way, different problems are considered across multiple dimensions in the user's own way of thinking, the user's real intention is understood comprehensively, the accuracy of semantic analysis and recognition is improved, and a better voice interaction experience is provided.
Optionally, the same target field may also correspond to different multidimensional analysis strategies. For example, when a user says "I am hungry" at noon and again in the evening, both utterances correspond to the "eating" field, but different strategies can be applied at the two times, given the user's different requirements for different meals.
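For illustration only, the following minimal Python sketch shows a policy table keyed by target field and time slot, so that the same field ("eating") can select different strategies at different times, as described above; the table contents and slot boundaries are hypothetical assumptions.

```python
from datetime import datetime

POLICY_TABLE = {
    ("eating", "morning"):  ["get_time", "get_breakfast_habit", "infer_intent"],
    ("eating", "noon"):     ["get_time", "get_lunch_habit", "infer_intent"],
    ("eating", "evening"):  ["get_time", "get_dinner_habit", "infer_intent"],
    ("dressing", "any"):    ["get_gender_age", "get_weather", "get_preferences"],
}

def get_strategy(domain: str, now: datetime) -> list:
    slot = "morning" if now.hour < 10 else "noon" if now.hour < 14 else "evening"
    # Fall back to a time-independent entry when no slot-specific one exists.
    return POLICY_TABLE.get((domain, slot)) or POLICY_TABLE.get((domain, "any"), [])

print(get_strategy("eating", datetime(2020, 11, 17, 12, 30)))   # noon -> lunch strategy
print(get_strategy("dressing", datetime(2020, 11, 17, 19, 0)))  # time-independent entry
```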
Step S303: performing multi-dimensional semantic analysis on the recognition content based on the multi-dimensional analysis strategy to obtain a semantic analysis result.
In this way, the user's requirements are fully considered, and the accuracy of semantic analysis and recognition is improved.
Optionally, as shown in fig. 3, in this embodiment, the step S302 includes:
step S3021, identifying and acquiring the identity information of the target object.
Step S3022, obtaining a multidimensional analysis policy corresponding to the target field based on the identity information.
Optionally, the identity information of the user may be obtained through voiceprint recognition of the user, or may also be obtained through face recognition of the user, or may have other recognition manners, which is not specifically limited herein.
In this embodiment, different users may correspond to different multidimensional analysis strategies, so that customized voice interaction is provided for different users, and user experience is improved. Specifically, the multidimensional analysis strategies corresponding to different users can be self-defined by the users, and can also be intelligently generated through recorded user habit data.
Optionally, when the corresponding user identity information cannot be obtained, or the user does not set the corresponding multidimensional analysis strategy and the number of the recorded user habit data is lower than a preset strategy data threshold, and the corresponding multidimensional analysis strategy cannot be intelligently generated for the user, the corresponding default strategy may be obtained for the user. The policy data threshold is a preset critical data threshold capable of intelligently generating a corresponding multidimensional analysis policy for a user. Specifically, the basic attributes of the user can be judged through the voiceprint of the user and the environment where the user is located, if the user is judged to be a white-collar woman in an office of 25 years old, the living habits of people with the same attributes in the cloud area are further obtained according to the basic attributes of the user in a matching mode, multi-dimensional semantic analysis is conducted according to the living habits of the people with the same attributes and partial setting data of the user, and interaction with the user is further completed. For example, when the obtained voice instruction of the user is "i hungry", it may be analyzed and output based on the living habits of people with the same attribute and the schedule setting of the user: "you have good noon and a certain person, have a meal at a restaurant, and need to book in advance", wherein a certain person and a certain restaurant should correspond to a specific restaurant and a name of the person, which is only described as an example.
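For illustration only, the following minimal Python sketch selects a strategy by user identity and falls back to a default strategy when the identity is unavailable or too little habit data has been recorded; the threshold value and data structures are hypothetical assumptions.

```python
from typing import Optional

POLICY_DATA_THRESHOLD = 50          # minimum habit records to auto-generate a strategy

user_strategies: dict = {}                      # user-defined or previously generated
habit_records: dict = {"bob": []}               # recorded behavior habit data per user
DEFAULT_STRATEGY = ["match_cohort_habits", "use_user_settings", "infer_intent"]

def generate_strategy(records: list) -> list:
    """Placeholder for intelligent strategy generation from habit data."""
    return ["use_recorded_habits", "infer_intent"]

def select_strategy(user_id: Optional[str]) -> list:
    if user_id in user_strategies:
        return user_strategies[user_id]                       # personalized strategy
    if user_id and len(habit_records.get(user_id, [])) >= POLICY_DATA_THRESHOLD:
        return generate_strategy(habit_records[user_id])      # enough data recorded
    return DEFAULT_STRATEGY              # unknown user, or too little data

print(select_strategy(None))             # identity unavailable -> default strategy
print(select_strategy("bob"))            # below threshold      -> default strategy
```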
Optionally, as shown in fig. 4, in this embodiment, the voice interaction method further includes:
step A100, recording behavior habit data of the target object.
Specifically, the behavior habit data of the user can be recorded in real time through all the associated intelligent devices. Optionally, when there are multiple users, behavior habit data of different users may be recorded in the database corresponding to the user, so as to avoid confusion of behavior habit data of different users.
Step A200, generating a multi-dimensional analysis strategy for the target object based on the behavior habit data of the target object.
Specifically, when the recorded behavior habit data of the user reaches the preset policy data threshold, a multidimensional analysis policy may be generated for the user. Optionally, when the user already has a corresponding multidimensional analysis strategy and the behavior habit of the user changes, the multidimensional analysis strategy may also be updated correspondingly, so as to provide better interaction experience for the user.
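For illustration only, the following minimal Python sketch records behavior habit data in a per-user store and generates or refreshes the user's strategy once the record count reaches the threshold; all names and the threshold value are hypothetical assumptions.

```python
from collections import defaultdict

POLICY_DATA_THRESHOLD = 50                      # records needed to build a strategy
habit_db = defaultdict(list)                    # one record list per user, kept separate
strategies: dict = {}

def build_strategy(records: list) -> list:
    """Placeholder: derive analysis dimensions and steps from the recorded habits."""
    return ["get_time", "use_recorded_habits", "infer_intent"]

def record_habit(user_id: str, event: dict) -> None:
    habit_db[user_id].append(event)             # e.g. {"time": "07:30", "action": "fridge_breakfast"}
    if len(habit_db[user_id]) >= POLICY_DATA_THRESHOLD:
        strategies[user_id] = build_strategy(habit_db[user_id])   # create or refresh

record_habit("alice", {"time": "07:30", "action": "fridge_breakfast"})
print(len(habit_db["alice"]), "record(s); strategy built:", "alice" in strategies)
```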
Optionally, as shown in fig. 5, in this embodiment, the step S400 includes:
step S401, generating an operation instruction based on the semantic analysis result.
And step S402, interacting with the target object based on the operation instruction.
Specifically, after the semantic analysis result is obtained and the real intention of the user is clarified, an operation instruction can be generated, and interaction with the target object is performed based on the operation instruction. The operation instruction may include a voice reply instruction and a control instruction, and the operation instruction may be used to perform operation control on all associated smart devices, so as to improve user interaction experience.
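For illustration only, the following minimal Python sketch represents an operation instruction as a voice reply paired with device control commands, as described above; the instruction shape and intent names are hypothetical assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class OperationInstruction:
    voice_reply: str                                       # the voice reply instruction
    device_commands: list = field(default_factory=list)   # the control instructions

def respond(semantic_result: dict) -> OperationInstruction:
    if semantic_result.get("intent") == "recommend_fridge_breakfast":
        return OperationInstruction(
            voice_reply="The refrigerator has food you like; please eat breakfast on time.",
            device_commands=[("smart_refrigerator", "list_breakfast_foods")])
    return OperationInstruction(voice_reply="Sorry, I did not understand that.")

op = respond({"intent": "recommend_fridge_breakfast"})
print(op.voice_reply)
print(op.device_commands)       # commands dispatched to associated smart devices
```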
In this embodiment, the voice interaction process is illustrated with specific examples. In one application scenario, a voice instruction "I am hungry" sent by a user is received, and speech-to-text recognition is performed on it to obtain the recognition content "I am hungry". Simple semantic understanding is performed on the recognition content, it is determined that the content relates to the "eating" field, the multi-dimensional analysis strategy for "eating" is obtained, and multi-dimensional semantic analysis is performed based on that strategy. Specifically, the following policy steps are performed:
Policy step a: obtain the current time. Specifically, judge whether the period in which the user is hungry is morning, noon, evening, forenoon, afternoon, late night or another period, so as to further judge whether it is a meal time.
Policy step b: obtain the user's habits based on the current time. Specifically, if the time at which the user is hungry is judged to be breakfast time, obtain the user's habits for that time and judge whether the user prefers to cook breakfast at home, take breakfast from the refrigerator, or eat breakfast nearby.
Policy step c: judge the user's intention based on those habits and take it as the semantic analysis result. Specifically, if the user habitually takes breakfast from the refrigerator, the intention may be to obtain the list of breakfast foods in the refrigerator and recommend it to the user; if the user habitually eats breakfast nearby, the intention may be to obtain today's weather and recommend a suitable breakfast shop.
Optionally, if the current time is noon, that is, the period during which the user is hungry is lunch time, the user's lunch habits are obtained in the corresponding policy step; the process is similar to the above and is not repeated here.
Further, after the multi-dimensional semantic analysis is performed, a response can be made based on the user's intention. For example, when the intention is to obtain the list of breakfast foods in the refrigerator and make a recommendation, the smart refrigerator may be accessed, the list obtained, and the user answered by voice: "The refrigerator has some of the food you like most; please eat breakfast on time." If the refrigerator contains none of the breakfast foods the user usually eats and the list is empty, the user can be reminded to shop and given a recommended shopping list. When the intention is to obtain today's weather and recommend a breakfast shop, the weather information can be accessed; if the weather is sunny, the user can be answered by voice: "The weather is clear; you can eat breakfast outside", after which a breakfast shop is recommended and navigation provided according to data on nearby breakfast shops. In this way, the user's real intention is understood through multi-dimensional analysis, the accuracy of semantic analysis and recognition is improved, and a better voice interaction experience is provided.
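For illustration only, the following minimal Python sketch strings policy steps a, b and c together for the "I am hungry" example; the habit table, time slots and intent names are assumptions made for the sketch, not details from the patent.

```python
from datetime import datetime

HABITS = {"alice": {"breakfast": "fridge", "lunch": "nearby"}}   # recorded habits

def meal_slot(now: datetime) -> str:
    if 6 <= now.hour < 10:
        return "breakfast"
    if 11 <= now.hour < 14:
        return "lunch"
    return "non_meal"

def hungry_intent(user: str, now: datetime) -> str:
    slot = meal_slot(now)                          # policy step a: current time
    habit = HABITS.get(user, {}).get(slot)         # policy step b: habit for that meal
    if habit == "fridge":                          # policy step c: infer the intention
        return "recommend_fridge_food_list"
    if habit == "nearby":
        return "recommend_shop_by_weather"
    return "general_food_suggestion"

print(hungry_intent("alice", datetime(2020, 11, 17, 7, 45)))    # breakfast from fridge
```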
In another application scenario, a voice instruction "I feel very stuffy" sent by a user is received, speech-to-text recognition is performed on it, and the recognition content "I feel very stuffy" is obtained. Simple semantic understanding is performed on the recognition content, it is determined that the content relates to the "perception" field, the multidimensional analysis strategy for "perception" is obtained, multidimensional semantic analysis is performed based on that strategy, and responses are made. Specifically, the following policy steps are executed, with responses and interaction along the way:
Policy step a: analyze at the physical level. Specifically, obtain weather and temperature information, and check whether the doors and windows are open for ventilation, whether the air-conditioner temperature is appropriate, and so on. If the settings are inappropriate, generate an adjustment control instruction, feed the findings and the intended adjustment back to the user, execute after the user confirms, and check whether the comfort level is appropriate some time after execution.
Policy step b: analyze at the psychological level. Specifically, adjust the home atmosphere, for example by adjusting the lighting and music; other adjustments are possible and not specifically limited here.
Policy step c: analyze at the social level. Specifically, obtain the user's social information, analyze it and give suggestions; for example, the user may be answered by voice: "Today is a festival; you could go out with someone for a meal" or "Someone is found to be somewhere; I can help you make an appointment". The user can also be dynamically cheered up through real-time social contact, and so on.
Policy step d: analyze at the physiological level. Specifically, detect whether the user shows pathological features through the camera's temperature measurement and the physiological signals collected by a smart bracelet. If so, remind the user to see a doctor; query the case records and medication records, and if the user is under medication, remind the user to take the medicine; in a serious situation, start the emergency medical system.
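For illustration only, a minimal Python sketch of policy step a above (check the physical environment, propose adjustments, execute after user confirmation) follows; the sensor readings, thresholds and confirm() prompt are hypothetical assumptions.

```python
def check_physical_environment(readings: dict) -> list:
    """Return proposed adjustments based on current home readings."""
    proposals = []
    if not readings.get("windows_open", False):
        proposals.append("open_windows_for_ventilation")
    if readings.get("ac_temperature", 26) > 26:
        proposals.append("lower_ac_temperature")
    return proposals

def confirm(prompt: str) -> bool:
    """Placeholder for asking the user to confirm by voice."""
    print(prompt)
    return True                      # assume the user agrees in this sketch

readings = {"windows_open": False, "ac_temperature": 28}    # hypothetical sensor data
for action in check_physical_environment(readings):
    if confirm(f"I suggest: {action}. Shall I proceed?"):
        print("executing", action)   # a real system would drive the device here
```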
Exemplary device
As shown in fig. 6, corresponding to the voice interaction method, an embodiment of the present invention further provides a voice interaction apparatus, where the voice interaction apparatus includes:
The instruction acquisition module 610 is configured to obtain the voice instruction of the target object.
The target object is the user who needs voice interaction. Optionally, the target object may be a specific user, or all users who may issue voice instructions. For example, for a given smart device, the voiceprints of certain specific users may be enrolled so that only those users can interact with the device by voice; alternatively, no restriction is imposed, so that all users can interact with it by voice. Furthermore, restrictions may be set on some functions of the smart device so that only certain users may use them. For example, a restriction may be set on the download function of a smart speaker: any user who issues voice instructions can switch songs, but only a specific user (such as an administrator) can download songs. In this way, voice interaction by different users is managed and the user experience is improved.
The instruction recognition module 620 is configured to perform speech-to-text recognition on the voice instruction to obtain recognition content.
The recognition content comprises the text corresponding to the user's voice instruction. Converting the voice instruction into text facilitates further processing and recognition.
The semantic analysis module 630 is configured to perform multi-dimensional semantic analysis based on the recognition content to obtain a semantic analysis result.
The analysis dimensions in the multi-dimensional semantic analysis comprise one or more of time, environment, festivals, interpersonal relationships, smart home data, weather, target object habits and target object behaviors.
Optionally, the dimensions in the multi-dimensional semantic analysis may also include others, such as the user's schedule, which are not specifically limited here. By considering as many of these dimensions as possible during semantic analysis, the way and the dimensions in which a user thinks about a problem can be simulated, so that the user's intention is understood more accurately and a better voice interaction experience is provided.
The response control module 640 is configured to respond based on the semantic analysis result.
Optionally, the response may include replying to the user, controlling the state of other smart devices, scheduling a journey for the user, and so on, which is not specifically limited here.
As can be seen from the above, the voice interaction apparatus provided by the embodiment of the present invention obtains a voice instruction of a target object through the instruction acquisition module 610; performs speech-to-text recognition on the voice instruction through the instruction recognition module 620 to obtain recognition content; performs multi-dimensional semantic analysis based on the recognition content through the semantic analysis module 630 to obtain a semantic analysis result; and responds based on the semantic analysis result through the response control module 640. Because multi-dimensional semantic analysis is performed on the recognized text after the user's speech is converted to text, the multiple dimensions along which a user thinks about a problem are fully considered and the user's real intention is understood from the multi-dimensional analysis. Therefore, compared with the prior-art scheme that merely converts the acquired user speech into text, performs simple semantic analysis on the converted text, and takes the semantic analysis result as the user's intention, the scheme of the invention improves the accuracy of semantic analysis and recognition and helps provide better voice interaction for the user.
Optionally, as shown in fig. 7, in this embodiment, the semantic analysis module 630 includes:
the target region acquiring unit 631 is configured to perform semantic understanding on the recognition content and acquire a target region corresponding to the recognition content.
The target field is a field corresponding to the recognized text recognition content, and the target field and the corresponding relationship between the target field and the text recognition content may be obtained by presetting based on user habits, analyzing big data based on artificial intelligence, or by corresponding user-defined adjustment settings, which is not limited herein. Optionally, the target fields may include eating, living, traveling, dressing, and the like, and there may be other fields, which are not limited herein.
The analysis strategy obtaining unit 632 is configured to obtain the multidimensional analysis strategy corresponding to the target field.
The multidimensional analysis strategy can be stored locally or in the cloud in the form of a policy table. Specifically, the dimensions considered, and therefore the corresponding strategy, differ across target fields. For example, when the target field is "eating", the dimensions considered may include the time, the user's eating habits, the food remaining in the refrigerator, and so on; when the target field is "dressing", the dimensions considered may include the user's gender and age, the weather, the temperature, the user's preferences, and so on. In this way, different problems are considered across multiple dimensions in the user's own way of thinking, the user's real intention is understood comprehensively, the accuracy of semantic analysis and recognition is improved, and a better voice interaction experience is provided.
Optionally, the same target field may also correspond to different multidimensional analysis strategies. For example, when a user says "I am hungry" at noon and again in the evening, both utterances correspond to the "eating" field, but different strategies can be applied at the two times, given the user's different requirements for different meals.
The multidimensional semantic analysis unit 633 is configured to perform multidimensional semantic analysis on the recognition content based on the multidimensional analysis strategy to obtain a semantic analysis result.
In this way, the user's requirements are fully considered, and the accuracy of semantic analysis and recognition is improved.
Optionally, as shown in fig. 8, in this embodiment, the analysis policy obtaining unit 632 includes:
an identity information acquiring subunit 6321, configured to identify and acquire the identity information of the target object.
A policy obtaining subunit 6322, configured to obtain, based on the identity information, a multidimensional analysis policy corresponding to the target field.
Optionally, the identity information of the user may be obtained through voiceprint recognition of the user, or may also be obtained through face recognition of the user, or may have other recognition manners, which is not specifically limited herein.
In this embodiment, different users may correspond to different multidimensional analysis strategies, so that customized voice interaction is provided for different users, and user experience is improved. Specifically, the multidimensional analysis strategies corresponding to different users can be self-defined by the users, and can also be intelligently generated through recorded user habit data.
Optionally, when the corresponding user identity information cannot be obtained, or the user does not set the corresponding multidimensional analysis strategy and the number of the recorded user habit data is lower than a preset strategy data threshold, and the corresponding multidimensional analysis strategy cannot be intelligently generated for the user, the corresponding default strategy may be obtained for the user. The policy data threshold is a preset critical data threshold capable of intelligently generating a corresponding multidimensional analysis policy for a user. Specifically, the basic attributes of the user can be judged through the voiceprint of the user and the environment where the user is located, if the user is judged to be a white-collar woman in an office of 25 years old, the living habits of people with the same attributes in the cloud area are further obtained according to the basic attributes of the user in a matching mode, multi-dimensional semantic analysis is conducted according to the living habits of the people with the same attributes and partial setting data of the user, and interaction with the user is further completed. For example, when the obtained voice instruction of the user is "i hungry", it may be analyzed and output based on the living habits of people with the same attribute and the schedule setting of the user: "you have good noon and a certain person, have a meal at a restaurant, and need to book in advance", wherein a certain person and a certain restaurant should correspond to a specific restaurant and a name of the person, which is only described as an example.
Optionally, the voice interaction apparatus further includes: a habit data recording module, configured to record behavior habit data of the target object; and a strategy generation module, configured to generate a multi-dimensional analysis strategy for the target object based on its behavior habit data.
Specifically, the user's behavior habit data can be recorded in real time through all associated intelligent devices. Optionally, when there are multiple users, each user's behavior habit data may be recorded in that user's own database, so that data from different users is not mixed up.
Specifically, when the recorded behavior habit data reaches the preset strategy data threshold, a multidimensional analysis strategy can be generated for the user. Optionally, when a user already has a corresponding strategy and the user's behavior habits change, the strategy may also be updated accordingly, providing a better interaction experience.
Optionally, the response control module is specifically configured to: generate an operation instruction based on the semantic analysis result; and interact with the target object based on the operation instruction.
Specifically, after the semantic analysis result is obtained and the user's real intention is clear, an operation instruction can be generated and the interaction with the target object carried out based on it. The operation instruction may include a voice reply instruction and a control instruction, and it may be used to control all associated smart devices, improving the user's interaction experience.
In this embodiment, the specific voice interaction process of the apparatus is the same as that exemplified above for the method embodiment (the "I am hungry" and "I feel very stuffy" application scenarios) and is not described herein again.
Based on the above embodiments, the present invention further provides an intelligent terminal, a schematic block diagram of which may be as shown in fig. 9. The intelligent terminal comprises a processor, a memory, a network interface and a display screen connected through a system bus. The processor of the intelligent terminal provides computing and control capability. The memory of the intelligent terminal comprises a nonvolatile storage medium and an internal memory; the nonvolatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for their operation. The network interface of the intelligent terminal is used to connect to and communicate with external terminals through a network. The computer program, when executed by the processor, implements the steps of any of the voice interaction methods described above. The display screen of the intelligent terminal may be a liquid crystal display or an electronic ink display.
It will be understood by those skilled in the art that the block diagram of fig. 9 is only a block diagram of a part of the structure related to the solution of the present invention, and does not constitute a limitation to the intelligent terminal to which the solution of the present invention is applied, and a specific intelligent terminal may include more or less components than those shown in the figure, or combine some components, or have different arrangements of components.
In one embodiment, an intelligent terminal is provided, which includes a memory, a processor, and a program stored in the memory and executable on the processor, and when executed by the processor, the program performs the following operations:
acquiring a voice instruction of a target object;
performing speech-to-text recognition on the voice instruction to obtain recognition content;
performing multi-dimensional semantic analysis based on the recognition content to obtain a semantic analysis result;
and responding based on the semantic analysis result.
The embodiment of the present invention further provides a storage medium, where the storage medium stores a computer program, and the computer program, when executed by a processor, implements the steps of any one of the voice interaction methods provided in the embodiments of the present invention.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated; in practical applications, the above functions may be allocated to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from one another and are not intended to limit the protection scope of the present invention. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described or illustrated in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art would appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative; the division of the modules or units described above is only one kind of logical division, and other divisions are possible in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed.
The integrated modules/units described above, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that what the computer-readable storage medium may contain can be expanded or restricted as required by legislation and patent practice in the jurisdiction.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present invention and should be construed as being included therein.

Claims (10)

1. A method of voice interaction, the method comprising:
acquiring a voice instruction of a target object;
performing speech-to-text recognition on the voice instruction to acquire recognition content;
performing multi-dimensional semantic analysis based on the recognition content to obtain a semantic analysis result;
responding based on the semantic analysis result.
2. The voice interaction method according to claim 1, wherein performing multidimensional semantic analysis based on the recognition content to obtain a semantic analysis result comprises:
performing semantic understanding on the recognition content to acquire a target field corresponding to the recognition content;
acquiring a multi-dimensional analysis strategy corresponding to the target field;
and performing multi-dimensional semantic analysis on the recognition content based on the multi-dimensional analysis strategy to obtain a semantic analysis result.
3. The voice interaction method according to claim 2, wherein the obtaining of the multidimensional analysis strategy corresponding to the target field comprises:
identifying and acquiring identity information of the target object;
and acquiring a multi-dimensional analysis strategy corresponding to the target field based on the identity information.
4. The voice interaction method of claim 1, further comprising:
recording behavior habit data of the target object;
and generating a multi-dimensional analysis strategy for the target object based on the behavior habit data of the target object.
5. The voice interaction method according to any one of claims 1 to 4, wherein the responding based on the semantic analysis result comprises:
generating an operation instruction based on the semantic analysis result;
and interacting with the target object based on the operation instruction.
6. A voice interaction apparatus, comprising:
the instruction acquisition module is used for acquiring a voice instruction of the target object;
the instruction recognition module is used for performing speech-to-text recognition on the voice instruction to acquire recognition content;
the semantic analysis module is used for performing multi-dimensional semantic analysis based on the recognition content to obtain a semantic analysis result;
and the response control module is used for responding based on the semantic analysis result.
7. The apparatus according to claim 6, wherein the semantic analysis module comprises:
a target field acquisition unit, configured to perform semantic understanding on the recognition content, and acquire a target field corresponding to the recognition content;
the analysis strategy acquisition unit is used for acquiring a multi-dimensional analysis strategy corresponding to the target field;
and the multi-dimensional semantic analysis unit is used for performing multi-dimensional semantic analysis on the recognition content based on the multi-dimensional analysis strategy to obtain a semantic analysis result.
8. The apparatus according to claim 7, wherein the analysis policy obtaining unit comprises:
the identity information acquisition subunit is used for identifying and acquiring the identity information of the target object;
and the strategy obtaining subunit is used for obtaining the multidimensional analysis strategy corresponding to the target field based on the identity information.
9. An intelligent terminal, comprising a memory, a processor, and a program stored on the memory and executable on the processor, the program, when executed by the processor, implementing the steps of the method according to any one of claims 1 to 5.
10. A storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the method according to any one of claims 1-5.
CN202011287390.7A 2020-11-17 2020-11-17 Voice interaction method and device, intelligent terminal and storage medium Pending CN112489654A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011287390.7A CN112489654A (en) 2020-11-17 2020-11-17 Voice interaction method and device, intelligent terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011287390.7A CN112489654A (en) 2020-11-17 2020-11-17 Voice interaction method and device, intelligent terminal and storage medium

Publications (1)

Publication Number Publication Date
CN112489654A 2021-03-12

Family

ID=74930999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011287390.7A Pending CN112489654A (en) 2020-11-17 2020-11-17 Voice interaction method and device, intelligent terminal and storage medium

Country Status (1)

Country Link
CN (1) CN112489654A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104050966A (en) * 2013-03-12 2014-09-17 百度国际科技(深圳)有限公司 Voice interaction method of terminal equipment and terminal equipment employing voice interaction method
CN106886162A (en) * 2017-01-13 2017-06-23 深圳前海勇艺达机器人有限公司 The method of smart home management and its robot device
CN107832286A (en) * 2017-09-11 2018-03-23 远光软件股份有限公司 Intelligent interactive method, equipment and storage medium
CN109543016A (en) * 2018-11-15 2019-03-29 北京搜狗科技发展有限公司 A kind of data processing method, device and the device for data processing

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494267A (en) * 2021-11-30 2022-05-13 北京国网富达科技发展有限责任公司 Substation and cable tunnel scene semantic construction system and method
CN114494267B (en) * 2021-11-30 2022-11-04 北京国网富达科技发展有限责任公司 Substation and cable tunnel scene semantic construction system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination