CN109492221B - Information reply method based on semantic analysis and wearable equipment - Google Patents


Info

Publication number
CN109492221B
CN109492221B (application CN201811281974.6A)
Authority
CN
China
Prior art keywords
information
replied
voice
candidate
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811281974.6A
Other languages
Chinese (zh)
Other versions
CN109492221A (en)
Inventor
Wu Lei (吴磊)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN201811281974.6A priority Critical patent/CN109492221B/en
Publication of CN109492221A publication Critical patent/CN109492221A/en
Application granted granted Critical
Publication of CN109492221B publication Critical patent/CN109492221B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/30: Semantic analysis
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/225: Feedback of the input speech
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Machine Translation (AREA)

Abstract

The embodiment of the invention relates to the technical field of wearable devices, and discloses an information reply method based on semantic analysis and a wearable device. The method comprises the following steps: obtaining information to be replied, extracting target information from the information to be replied, and judging the sentence type corresponding to the information to be replied according to the target information; when the sentence type is a preset sentence type, performing semantic analysis on the information to be replied to generate at least one piece of candidate information for the user to select; and detecting the target candidate information selected by the user from the at least one piece of candidate information, and taking the target candidate information as the target reply information for the information to be replied. By implementing the embodiment of the invention, the time the user spends entering a reply can be saved, the user's exchanges with others are shortened, and communication efficiency and accuracy of expression are improved.

Description

Information reply method based on semantic analysis and wearable equipment
Technical Field
The invention relates to the technical field of wearable equipment, in particular to an information reply method based on semantic analysis and wearable equipment.
Background
With the development of science and technology, wearable devices are used ever more widely in daily life and offer an increasing number of functions, such as voice calls, video calls, and sending and receiving short messages. Currently, when a user replies to a message on a wearable device, two input modes are mainly used: manual input and voice input. However, because of the limited screen size of a wearable device, manual input suffers from low efficiency and low accuracy; voice input, meanwhile, is prone to recognition errors, so the entered reply is often incorrect. Both problems greatly increase the time the user spends entering a reply, lengthen the user's exchanges with others, and significantly reduce communication efficiency and accuracy of expression. Therefore, both reply modes suffer from low efficiency and accuracy, which degrades the user's experience with the wearable device.
Disclosure of Invention
The embodiment of the invention discloses an information reply method based on semantic analysis and a wearable device, which can shorten the user's exchanges with others and improve communication efficiency and accuracy of expression.
The first aspect of the embodiment of the invention discloses an information reply method based on semantic analysis, which comprises the following steps:
obtaining information to be replied;
extracting target information from the information to be replied, wherein the target information comprises at least one of keywords and symbols;
judging the sentence type corresponding to the information to be replied according to the target information;
when the sentence type is a preset sentence type, performing semantic analysis on the information to be replied to generate at least one piece of candidate information for the user to select; the preset sentence types comprise interrogative sentences and rhetorical questions;
and detecting target candidate information selected by the user from the at least one piece of candidate information, and taking the target candidate information as target reply information aiming at the information to be replied.
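The claimed steps can be sketched end to end as follows; the symbol set, question-word list, and stub candidate generator are illustrative assumptions, not the patent's actual implementation.

```python
# Minimal sketch of the claimed reply flow (illustrative assumptions
# throughout; not the patent's actual implementation).

QUESTION_MARKS = {"?", "？"}                  # symbols that signal a question
QUESTION_WORDS = {"what", "when", "where", "who", "why", "how"}

def extract_target_info(message):
    """Extract target information: keywords and/or symbols."""
    symbols = [ch for ch in message if ch in QUESTION_MARKS]
    words = message.lower().strip("?？!.").split()
    keywords = [w for w in words if w in QUESTION_WORDS]
    return {"symbols": symbols, "keywords": keywords}

def judge_sentence_type(target):
    """Judge the sentence type from the extracted target information."""
    if target["symbols"] or target["keywords"]:
        return "interrogative"        # the preset sentence type
    return "other"

def generate_candidates(message):
    """Stand-in for semantic analysis; returns selectable candidates."""
    return ["Yes.", "No.", "I'll get back to you later."]

def reply_flow(message, select):
    target = extract_target_info(message)
    if judge_sentence_type(target) != "interrogative":
        return None                   # not a preset type: no candidates
    candidates = generate_candidates(message)
    return select(candidates)         # the user's choice becomes the reply

reply = reply_flow("Where are you now?", select=lambda cs: cs[-1])
```

The `select` callback stands in for the user picking one candidate on the device's screen.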
In an optional implementation manner of the first aspect of the embodiment of the present invention, the performing semantic analysis on the information to be replied when the sentence type is a preset sentence type, to generate at least one piece of candidate information for the user to select, includes:
outputting prompt information instructing the user to input an arbitrary piece of voice detection information when the sentence type is the preset sentence type;
after voice detection information input by the user is detected, extracting voiceprint features from the voice detection information, and identifying the age of the user according to the voiceprint features;
when the age of the user falls within an age range representing children or the elderly, detecting whether the language type of the voice detection information matches any one of a plurality of preset language types;
if so, converting the information to be replied into to-be-played voice information whose language type matches that of the voice detection information, and playing the to-be-played voice information so that the user can learn the content of the information to be replied;
generating at least one piece of candidate information according to the information to be replied, converting the at least one piece of candidate information into at least one piece of candidate voice information matched with the language type of the voice detection information, wherein one piece of candidate voice information corresponds to one piece of candidate information;
sequentially playing the at least one piece of candidate voice information;
the detecting the target candidate information selected by the user from the at least one piece of candidate information, taking the target candidate information as target reply information aiming at the information to be replied, includes:
Detecting target reply voice information input by the user, and taking the target reply voice information as target reply information aiming at the information to be replied, wherein the target reply voice information is any one candidate voice information in the at least one candidate voice information.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after the obtaining the information to be replied and before the extracting the target information from the information to be replied, the method further includes:
identifying the information to be replied, and judging whether the information to be replied is voice information or not;
when the information to be replied is judged to be the voice information, recognizing the language type of the voice information;
judging whether the language type of the voice information is matched with any one of the preset language types;
if so, converting the voice information into first text information matched with the language type of the voice information, and taking the first text information as information to be processed;
if not, sending the voice information to a network server, so that the network server translates it into second text information according to a language model of a preset language, and taking the second text information as the information to be processed;
The extracting the target information from the information to be replied comprises the following steps:
and extracting target information from the information to be processed.
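The branch above, transcribing the speech locally when its language matches a preset language type and otherwise handing the clip to a network server for translation, can be sketched with toy stand-ins; the clip format, language detector, and server call are all assumptions.

```python
PRESET_LANGUAGES = {"zh", "en"}           # assumed preset language types

def detect_language(clip):
    """Toy stand-in for language identification on a voice clip."""
    return clip["lang"]

def transcribe(clip):
    """Toy stand-in for on-device speech-to-text (first text information)."""
    return clip["text"]

def server_translate(clip):
    """Toy stand-in for the network server translating the speech into
    second text information in the preset language."""
    return "[translated] " + clip["text"]

def to_pending_info(clip):
    """Route a voice message to the text that later steps will process."""
    if detect_language(clip) in PRESET_LANGUAGES:
        return transcribe(clip)           # language matches: convert locally
    return server_translate(clip)         # otherwise: ask the server

print(to_pending_info({"lang": "en", "text": "Are you free tonight?"}))
print(to_pending_info({"lang": "fr", "text": "Tu es libre ce soir ?"}))
```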
In an optional implementation manner of the first aspect of the embodiment of the present invention, the performing semantic analysis on the information to be replied when the sentence type is a preset sentence type, to generate at least one piece of candidate information for the user to select, includes:
when the sentence type is a preset sentence type, carrying out semantic analysis on the information to be replied to obtain a semantic analysis result of the information to be replied;
selecting related reference information according to the semantic analysis result and the target information, wherein the related reference information comprises schedule information and/or current position information;
and generating at least one piece of candidate information for selection by a user according to the semantic analysis result and the related reference information.
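A minimal sketch of combining the semantic analysis result with schedule and current-position reference information to produce candidates; the keyword rules and data shapes are invented for illustration.

```python
def related_reference(semantic_result, schedule, position):
    """Select reference information relevant to the analysed question
    (toy rules keyed on the extracted keywords)."""
    refs = {}
    keywords = semantic_result["keywords"]
    if "where" in keywords:
        refs["position"] = position          # current position information
    if "when" in keywords or "time" in keywords:
        refs["schedule"] = schedule          # schedule information
    return refs

def candidates_from(semantic_result, refs):
    """Generate selectable candidate replies from the analysis result
    and the selected reference information."""
    out = []
    if "position" in refs:
        out.append(f"I am at the {refs['position']}.")
    if "schedule" in refs:
        out.append(f"I am free at {refs['schedule']['free_at']}.")
    out.append("I'll get back to you later.")
    return out

refs = related_reference({"keywords": ["where"]}, {"free_at": "6 pm"}, "library")
print(candidates_from({"keywords": ["where"]}, refs))
```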
In an optional implementation manner of the first aspect of the embodiment of the present invention, the performing semantic analysis on the information to be replied when the sentence type is a preset sentence type, to obtain a semantic analysis result of the information to be replied, includes:
when the sentence type is a preset sentence type, segmenting the information to be replied into words, and tagging the part of speech of each segmented word to obtain a plurality of part-of-speech-tagged words;
calculating a weight value for each tagged word;
and determining the tagged words whose weight values are greater than or equal to a preset threshold as keywords, and generating the semantic analysis result of the information to be replied according to the keywords.
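The segmentation, part-of-speech tagging, and weight-threshold steps might look like the following toy sketch; the tagger dictionary, the per-part-of-speech weights, and the threshold are invented stand-ins for the patent's scheme.

```python
# Toy sketch of the keyword step: segment, tag parts of speech, weight,
# and keep words whose weight clears a preset threshold. The tagger and
# the weighting scheme are illustrative assumptions.

POS_WEIGHT = {"noun": 1.0, "verb": 0.8, "adv": 0.3, "other": 0.1}
TOY_POS = {"meet": "verb", "tomorrow": "noun", "library": "noun",
           "shall": "other", "we": "other", "at": "other", "the": "other"}
THRESHOLD = 0.5

def keywords(message):
    tagged = [(w, TOY_POS.get(w, "other"))
              for w in message.lower().strip("?!.").split()]
    weighted = [(w, POS_WEIGHT[pos]) for w, pos in tagged]
    return [w for w, weight in weighted if weight >= THRESHOLD]

print(keywords("Shall we meet at the library tomorrow?"))
# keeps the content words that drive the semantic analysis result
```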
A second aspect of an embodiment of the present invention discloses a wearable device, including:
the acquisition unit is used for acquiring the information to be replied;
the extraction unit is used for extracting target information from the information to be replied, wherein the target information comprises at least one of keywords and symbols;
the first judging unit is used for judging the sentence type corresponding to the information to be replied according to the target information;
the analysis generating unit is used for performing semantic analysis on the information to be replied when the first judging unit judges that the sentence type is a preset sentence type, and generating at least one piece of candidate information for the user to select; the preset sentence types comprise interrogative sentences and rhetorical questions;
the detection unit is used for detecting target candidate information selected by the user from the at least one piece of candidate information, and taking the target candidate information as target reply information aiming at the information to be replied.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the manner in which the analysis generating unit performs semantic analysis on the information to be replied and generates at least one piece of candidate information for the user to select, when the first judging unit judges that the sentence type is a preset sentence type, is specifically:
the analysis generating unit is used for outputting prompt information instructing the user to input an arbitrary piece of voice detection information when the first judging unit judges that the sentence type is the preset sentence type; after the voice detection information input by the user is detected, extracting voiceprint features from the voice detection information and identifying the age of the user according to the voiceprint features; detecting, when the age of the user falls within an age range representing children or the elderly, whether the language type of the voice detection information matches any one of a plurality of preset language types; if so, converting the information to be replied into to-be-played voice information whose language type matches that of the voice detection information, and playing the to-be-played voice information so that the user can learn the content of the information to be replied; generating at least one piece of candidate information according to the information to be replied, and converting the at least one piece of candidate information into at least one piece of candidate voice information matching the language type of the voice detection information, wherein one piece of candidate voice information corresponds to one piece of candidate information; and sequentially playing the at least one piece of candidate voice information;
The detection unit is specifically configured to detect target reply voice information input by the user, and take the target reply voice information as target reply information for the information to be replied, where the target reply voice information is any one candidate voice information in the at least one candidate voice information.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the wearable device further includes:
the second judging unit is used for identifying the information to be replied after the obtaining unit obtains the information to be replied and before the extracting unit extracts the target information from the information to be replied, and judging whether the information to be replied is voice information or not;
the identification unit is used for identifying the language type of the voice information when the second judgment unit judges that the information to be replied is the voice information;
the third judging unit is used for judging whether the language type of the voice information is matched with any one of a plurality of preset language types;
the conversion unit is used for converting the voice information into first text information matched with the language type of the voice information when the third judgment unit judges that the language type of the voice information is matched with any one of a plurality of preset language types, and taking the first text information as information to be processed;
The translation unit is used for sending the voice information to a network server when the third judgment unit judges that the language type of the voice information is not matched with any one of a plurality of preset language types, so that the network server translates the first text information into second text information according to a language model of the preset language, and the second text information is used as the information to be processed;
the extraction unit is specifically configured to extract target information from the information to be processed.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the manner in which the analysis generating unit performs semantic analysis on the information to be replied and generates at least one piece of candidate information for the user to select, when the first judging unit judges that the sentence type is a preset sentence type, is specifically:
the analysis generating unit is used for carrying out semantic analysis on the information to be replied when the first judging unit judges that the sentence type is the preset sentence type, and obtaining a semantic analysis result of the information to be replied; selecting related reference information according to the semantic analysis result and the target information, wherein the related reference information comprises schedule information and/or current position information; and generating at least one piece of candidate information for selection by a user according to the semantic analysis result and the related reference information.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the manner in which the analysis generating unit performs semantic analysis on the information to be replied and obtains the semantic analysis result of the information to be replied, when the first judging unit judges that the sentence type is the preset sentence type, is specifically:
the analysis generating unit is used for segmenting the information to be replied into words and tagging the part of speech of each segmented word when the first judging unit judges that the sentence type is the preset sentence type, so as to obtain a plurality of part-of-speech-tagged words; calculating a weight value for each tagged word; and determining the tagged words whose weight values are greater than or equal to a preset threshold as keywords, and generating the semantic analysis result of the information to be replied according to the keywords.
A third aspect of an embodiment of the present invention discloses a wearable device, including:
a memory storing executable program code;
a processor coupled to the memory;
the processor calls the executable program code stored in the memory to execute the information reply method based on semantic analysis disclosed in the first aspect of the embodiment of the invention.
A fourth aspect of the embodiment of the present invention discloses a computer-readable storage medium storing a computer program, where the computer program causes a computer to execute an information reply method based on semantic analysis disclosed in the first aspect of the embodiment of the present invention.
A fifth aspect of the embodiments of the present invention discloses a computer program product which, when run on a computer, causes the computer to perform part or all of the steps of any one of the methods of the first aspect.
A sixth aspect of the embodiments of the present invention discloses an application publishing platform for publishing a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform part or all of the steps of any one of the methods of the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, the information to be replied is obtained, target information is extracted from it, and the sentence type corresponding to the information to be replied is judged according to the target information; when the sentence type is a preset sentence type, semantic analysis is performed on the information to be replied to generate at least one piece of candidate information for the user to select; and the target candidate information selected by the user from the at least one piece of candidate information is detected and taken as the target reply information for the information to be replied. By implementing the embodiment of the invention, the sentence type is judged from the target information extracted from the information to be replied, and when it is a preset sentence type, semantic analysis generates at least one piece of candidate information for the user to select. This reduces manual-input and voice-input operations: the user merely selects the target candidate information from the at least one piece of candidate information as the target reply information for the information to be replied. The time the user spends entering a reply is thus saved, the user's exchanges with others are shortened, and communication efficiency and accuracy of expression are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. The drawings in the following description show only some embodiments of the present invention; a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of an information reply method based on semantic analysis according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another information reply method based on semantic analysis according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of another information reply method based on semantic analysis according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a wearable device according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of another wearable device according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of another wearable device disclosed in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are apparently only some of the embodiments of the present invention, rather than all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
It should be noted that the terms "first," "second," and "third," etc. in the description and claims of the present invention are used for distinguishing between different objects and not for describing a particular sequential order. The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses an information reply method and wearable equipment based on semantic analysis, which can save the time for a user to input reply information, shorten the period for the user to communicate with other people and improve the communication efficiency and the expression accuracy. The following detailed description is made from the perspective of the wearable device with reference to the accompanying drawings.
Example 1
Referring to fig. 1, fig. 1 is a flow chart of an information reply method based on semantic analysis according to an embodiment of the present invention. As shown in fig. 1, the semantic analysis-based information reply method may include the following steps.
101. The wearable device acquires information to be replied.
In the embodiment of the invention, the wearable device can comprise a smart watch, a smart bracelet, smart glasses, and the like; the embodiment of the invention is not limited thereto.
In the embodiment of the invention, the wearable device can comprise a smart host with a display screen; the wearable device displays the acquired information to be replied on the display screen of the smart host, where the wearer can read it.
102. The wearable device extracts target information from the information to be replied.
Wherein the target information includes at least one of a keyword and a symbol.
In the embodiment of the invention, the wearable device can detect symbols in the information to be replied: if a symbol exists in the information to be replied, the symbol is extracted, and keywords are then extracted from the information to be replied; if no symbol exists, keywords usable for determining the sentence type are detected in the information to be replied and extracted.
103. The wearable device judges whether the sentence type corresponding to the information to be replied is the preset sentence type according to the target information, and if so, step 104 is executed; if not, the process is ended.
The preset sentence types comprise interrogative sentences and rhetorical questions.
In the embodiment of the invention, the wearable device can judge the sentence type corresponding to the information to be replied according to the keywords and/or symbols in the target information; the sentence type can be an interrogative sentence, a rhetorical question, a declarative sentence, an exclamatory sentence, and the like. If the sentence type corresponding to the information to be replied is an interrogative sentence or a rhetorical question, it is the preset sentence type, and step 104 is executed; if it is a declarative or exclamatory sentence, it is not the preset sentence type, and the process ends.
As an optional implementation manner, the wearable device may segment the information to be replied into words, generate a word vector for each word, and build a word vector matrix of the information to be replied from these word vectors; based on this word vector matrix, the wearable device can determine the sentence type of the information to be replied using a preset neural network model. The preset neural network model is obtained by analyzing each training sentence in a training corpus to derive its word vector matrix, and training the network on the word vector matrices and corresponding sentence types. With this embodiment, the sentence type of the information to be replied can be determined accurately from its word vector matrix by the preset neural network model.
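The word-vector-matrix classification above might be sketched as below; the hash-derived vectors and nearest-centroid decision are toy placeholders for real trained embeddings and the patent's preset neural network model.

```python
import hashlib

DIM = 8  # toy embedding size

def word_vector(word):
    """Deterministic toy word vector derived from a hash (a placeholder
    for real trained embeddings)."""
    digest = hashlib.md5(word.encode("utf-8")).digest()
    return [b / 255 for b in digest[:DIM]]

def sentence_matrix(sentence):
    """Word vector matrix: one row per segmented word."""
    return [word_vector(w) for w in sentence.lower().strip("?!.").split()]

def train(corpus):
    """Toy 'training': average the word vectors of each sentence type's
    training sentences into a per-type centroid."""
    centroids = {}
    for label, sentences in corpus.items():
        rows = [row for s in sentences for row in sentence_matrix(s)]
        centroids[label] = [sum(r[i] for r in rows) / len(rows)
                            for i in range(DIM)]
    return centroids

def classify(sentence, centroids):
    """Toy stand-in for the preset neural network model: assign the
    sentence to the nearest per-type centroid."""
    matrix = sentence_matrix(sentence)
    avg = [sum(row[i] for row in matrix) / len(matrix) for i in range(DIM)]
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(avg, centroids[label]))
    return min(centroids, key=dist)

corpus = {"question": ["Where are you?", "When do we meet?"],
          "statement": ["I am home.", "We meet at six."]}
label = classify("Where do we meet?", train(corpus))
```

A real implementation would learn the vectors and the decision boundary jointly; the sketch only shows the matrix-in, sentence-type-out shape of the component.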
104. When the sentence type is the preset sentence type, the wearable device performs semantic analysis on the information to be replied to generate at least one piece of candidate information for selection by a user.
In the embodiment of the invention, when the sentence type is the preset sentence type, the wearable device can perform semantic analysis on the information to be replied to obtain a semantic analysis result, and then generate, according to that result combined with factors such as the current time and place, at least one piece of candidate information for the user to select; the candidate information can answer the question in the information to be replied.
105. The wearable device detects target candidate information selected by a user from at least one piece of candidate information, and takes the target candidate information as target reply information aiming at the information to be replied.
In the embodiment of the invention, the wearer reads the at least one piece of candidate information on the display screen of the wearable device and then, according to the wearer's actual situation, selects one piece as the target reply information, which reduces manual-input and voice-input operations and saves the time spent replying.
As an alternative embodiment, the wearable device may set monitoring points to monitor the user's behavior. After the wearable device performs semantic analysis on the information to be replied and generates at least one piece of candidate information for the user to select, if it detects that the user has not selected any candidate information to reply within a preset time, it acquires the user's current environment parameters and, combining the user's historical replies before the current moment, selects from the at least one piece of candidate information the candidate fitting the current environment parameters as the target reply information, or regenerates target reply information fitting the current environment parameters. With this embodiment, when the user is not free to reply, the most suitable candidate information can be selected intelligently and sent to the terminal device, avoiding the emotional estrangement that unanswered messages may cause. For example, after the user's current environment parameters are acquired and the user is identified as currently being in a shopping mall, if the user does not reply within the predetermined time, the candidate "I am in the mall; I will reply to you later" can be selected, or such information can be generated, to give the other party a first reply.
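The timeout fallback described above can be sketched as follows; the polling loop, environment-parameter shape, and place-matching rule are illustrative assumptions.

```python
import time

def auto_reply(candidates, user_choice, env, timeout_s=0.01, poll_s=0.005):
    """Wait for the user's pick; on timeout, fall back to the candidate
    that mentions the user's current place (toy matching rule)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        choice = user_choice()
        if choice is not None:
            return choice                 # the user replied in time
        time.sleep(poll_s)
    # no reply in time: pick the candidate fitting the environment,
    # or generate a holding reply mentioning the place
    for c in candidates:
        if env["place"] in c:
            return c
    return f"I am at the {env['place']}; I will reply to you later."

reply = auto_reply(["Yes.", "No."], lambda: None, {"place": "mall"})
```

On a real device the timeout would be minutes rather than milliseconds, and `user_choice` would be driven by the touch screen rather than a callback.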
In addition, if the wearable device determines from the current environmental parameters that the user is in a dangerous environment, it sends distress or alarm information to the terminal device.
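The timeout fallback and danger check described above can be sketched roughly as follows. This is an illustrative sketch only: the environment-matching rule, the history preference, and the danger set are assumptions, since the embodiment does not fix concrete APIs.

```python
def pick_fallback_reply(candidates, environment, history):
    """Pick the candidate mentioning the current environment, preferring
    phrasing the user has sent before; otherwise build a holding reply."""
    matching = [c for c in candidates if environment in c]
    for c in matching:
        if c in history:          # favor a reply the user has used before
            return c
    if matching:
        return matching[0]
    # no candidate fits: regenerate a generic reply for the environment
    return f"I'm at the {environment} right now, I'll reply later."

def danger_alert(environment, danger_set=frozenset({"fire", "flood"})):
    """Return an alarm message when the environment is dangerous, else None."""
    return "ALARM: user may be in danger" if environment in danger_set else None
```

In a real device the environment parameters would come from sensors or location services rather than a plain string, but the selection logic is the same shape.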
As an optional implementation manner, the wearable device receives the information to be replied; if the wearable device is currently in a lock-screen state, the information to be replied and its related information, including the contact name, number and the like, are displayed on the lock screen interface. The wearable device further extracts the target information from the information to be replied and judges the sentence type corresponding to the information to be replied according to the target information. When the sentence type is a preset sentence type, semantic analysis is performed on the information to be replied to generate at least one piece of candidate information for the user to select, and the generated candidate information is displayed on the lock screen interface, where the user can select the target candidate information as the target reply information for the information to be replied. Meanwhile, the wearable device can also preset a prompt background for indicating that information to be replied exists and for displaying its quantity. In this embodiment, whether information to be replied exists can be judged from the preset lock-screen prompt background, the quantity of information to be replied is displayed, and a reply can be made without entering an application program: candidate information can be selected directly on the lock screen interface as the target reply information, significantly improving the information reply speed.
As an optional implementation manner, the wearable device may preset the information types of incoming information, dividing it into a to-be-processed type and a no-processing-needed type. When performing semantic analysis on the information to be replied, the wearable device also judges which type it belongs to: if it belongs to the to-be-processed type, at least one piece of candidate information is generated from the semantic analysis result for the user to select; if it belongs to the no-processing-needed type, the information is automatically ignored. In this embodiment, information that needs no processing can be ignored automatically while important information receives candidate selection and a quick reply, saving the time spent checking unimportant information and improving reply efficiency.
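A minimal sketch of this process/ignore split might look like the following; the concrete type names are hypothetical, as the embodiment does not enumerate them.

```python
# Assumed type sets; the patent only says the device presets two categories.
PROCESS_TYPES = {"chat", "question"}                     # messages that need a reply
IGNORE_TYPES = {"advertisement", "system notification"}  # silently dropped

def handle_message(msg_type, generate_candidates):
    """Return candidate replies for to-be-processed messages,
    or None when the message type is ignored or unknown."""
    if msg_type in IGNORE_TYPES:
        return None                   # automatically ignored
    if msg_type in PROCESS_TYPES:
        return generate_candidates()  # run semantic analysis + candidates
    return None
```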
It will be appreciated that, for children or the elderly, text recognition and text input ability may be relatively weak, and their command of official languages such as Mandarin may also be poor. Accordingly, in some embodiments, step 104 above may be implemented as follows:
Outputting prompt information instructing the user to input a piece of voice detection information when the sentence type is a preset sentence type;
after voice detection information input by a user is detected, voiceprint features are extracted from the voice detection information, and the age of the user is identified according to the voiceprint features;
when the age of the user falls within an age range representing children or the elderly, detecting whether the language type of the voice detection information matches any one of a plurality of preset language types;
if so, converting the information to be replied into to-be-played voice information that matches the language type of the voice detection information, and playing it so that the user can learn the content of the information to be replied;
generating at least one piece of candidate information according to the information to be replied, and converting the at least one piece of candidate information into at least one piece of candidate voice information matched with the language type of the voice detection information, each piece of candidate voice information corresponding to one piece of candidate information;
and sequentially playing the at least one piece of candidate voice information.
Further, in step 105, detecting the target candidate information selected by the user from the at least one piece of candidate information, and using the target candidate information as the target reply information for the information to be replied includes:
Detecting target reply voice information input by a user, and taking the target reply voice information as target reply information aiming at the information to be replied, wherein the target reply voice information is any one of at least one piece of candidate voice information.
In this way, users with a weak command of the official language (fluent only in a local dialect) or weak text input ability, such as the elderly or children, can still communicate comfortably using the wearable device, improving the user experience.
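The voice-assisted flow above can be sketched as follows. The age thresholds, supported languages, and the stand-in `tts` function are all assumptions: real voiceprint age estimation and text-to-speech engines are not named in the embodiment.

```python
SUPPORTED_LANGS = {"mandarin", "cantonese"}  # assumed preset language types

def is_child_or_elder(age):
    """Assumed illustrative age ranges for children and the elderly."""
    return age <= 12 or age >= 65

def voice_reply_flow(message, age, lang, candidates,
                     tts=lambda text, lang: f"[{lang}] {text}"):
    """If the user's voiceprint-estimated age marks a child/elder and the
    spoken language is supported, play the message and then each candidate
    reply as speech in that language. Returns the played utterances."""
    if not is_child_or_elder(age) or lang not in SUPPORTED_LANGS:
        return None
    played = [tts(message, lang)]                  # read the message aloud
    played += [tts(c, lang) for c in candidates]   # then play each candidate
    return played
```

The user then answers with one of the candidate voice items, which becomes the target reply information.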
As can be seen, implementing the semantic-analysis-based information reply method described in fig. 1 can obtain the information to be replied and extract the target information from it; judge, according to the target information, whether the sentence type corresponding to the information to be replied is a preset sentence type; when it is, perform semantic analysis on the information to be replied and generate at least one piece of candidate information for the user to select; and then detect the target candidate information the user selects from the at least one piece of candidate information and use it as the target reply information for the information to be replied. This reduces the user's manual-input and voice-input operations, saves the time spent entering reply information, shortens the cycle of communication between the user and others, and improves communication efficiency and expression accuracy.
Example two
Referring to fig. 2, fig. 2 is a flow chart of another information reply method based on semantic analysis according to an embodiment of the present invention. As shown in fig. 2, the semantic analysis based information reply method may include the following steps.
201. The wearable device acquires information to be replied.
202. The wearable device identifies the information to be replied, judges whether the information to be replied is voice information, and if so, executes step 203; if not, step 207 is performed.
203. The wearable device recognizes the language category of the voice information.
204. The wearable device judges whether the language category is matched with any one of a plurality of preset language categories. If so, step 205 is performed; if there is no match, step 206 is performed.
205. The wearable device converts the voice information into first text information matched with the language type, and takes the first text information as information to be processed.
206. The wearable device sends the voice information to the network server, so that the network server converts the voice information into text and translates it into second text information according to a language model of a preset language, and the second text information is used as the information to be processed.
207. The wearable device extracts target information from information to be replied, wherein the information to be replied comprises information to be processed.
The target information comprises at least one of keywords and symbols, and extracting the target information from the information to be replied includes extracting the target information from the information to be processed.
208. When the wearable device determines whether the sentence type corresponding to the information to be replied is the preset sentence type according to the target information, if so, step 209 is executed; if not, the process is ended.
The preset sentence types include interrogative sentences and rhetorical questions.
In the embodiment of the invention, the wearable device can judge the sentence type corresponding to the information to be replied according to the keywords and/or symbols in the target information, where the sentence type may be an interrogative sentence, a rhetorical question, a declarative sentence or an exclamatory sentence. If the sentence type corresponding to the information to be replied is an interrogative sentence or a rhetorical question, it is a preset sentence type and step 209 is executed; if it is a declarative or exclamatory sentence, it is not a preset sentence type and the process ends.
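A simple rule-based version of this keyword/symbol judgment could look like the following. The keyword list is an illustrative assumption; a production classifier would be far richer.

```python
QUESTION_WORDS = {"what", "where", "when", "who", "why", "how"}  # assumed list

def sentence_type(text):
    """Classify a message from its ending symbol and leading keyword."""
    stripped = text.strip()
    if stripped.endswith("?"):
        return "question"
    if stripped.endswith("!"):
        return "exclamation"
    words = stripped.split()
    if words and words[0].lower() in QUESTION_WORDS:
        return "question"        # question word without punctuation
    return "declarative"

def needs_reply(text):
    """Only the preset type (questions) triggers candidate generation."""
    return sentence_type(text) == "question"
```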
209. The wearable device performs semantic analysis on the information to be replied to obtain a semantic analysis result of the information to be replied.
210. And the wearable device selects related reference information according to the semantic analysis result and the target information.
Wherein the relevant reference information comprises schedule information and/or current location information.
211. The wearable device generates at least one piece of candidate information for selection by a user according to the semantic analysis result and the related reference information.
212. The wearable device detects target candidate information selected by a user from at least one piece of candidate information, and takes the target candidate information as target reply information aiming at the information to be replied.
It can be seen that implementing the semantic-analysis-based information reply method described in fig. 2 can obtain the information to be replied and judge whether it is voice information. When it is voice information, the language type of the voice information is identified and the voice information is converted into text information matched with a preset language type, which serves as the information to be processed; extracting the target information from the information to be replied then includes extracting it from the information to be processed. The method then judges, according to the target information, whether the sentence type corresponding to the information to be replied is a preset sentence type; when it is, semantic analysis is performed to obtain a semantic analysis result, and at least one piece of candidate information is generated for the user to select according to the semantic analysis result and the related reference information. Finally, the wearable device detects the target candidate information the user selects from the at least one piece of candidate information and uses it as the target reply information for the information to be replied. This reduces the user's manual-input and voice-input operations, saves the time spent entering reply information, shortens the cycle of communication between the user and others, and improves communication efficiency and expression accuracy.
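Steps 210-211 combine the semantic analysis result with reference information such as schedule entries and the current location. A hedged sketch, with invented field names (`event`, `end`) since the embodiment does not specify a schedule format:

```python
def generate_candidates(topic, schedule=None, location=None):
    """Build candidate replies for a question about `topic` from the
    user's schedule entry and current location, ending with a generic
    fallback so at least one candidate always exists."""
    candidates = []
    if schedule:
        candidates.append(f"I have {schedule['event']} until {schedule['end']}.")
    if location:
        candidates.append(f"I'm at the {location} right now.")
    candidates.append(f"Let me get back to you about {topic}.")
    return candidates
```

The user (or the timeout fallback) then picks one of these as the target reply information.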
Example III
Referring to fig. 3, fig. 3 is a flow chart of another information reply method based on semantic analysis according to an embodiment of the present invention. As shown in fig. 3, the semantic analysis based information reply method may include the following steps.
301. The wearable device acquires information to be replied.
302. The wearable device identifies the information to be replied, judges whether the information to be replied is voice information, and if so, executes step 303; if not, step 307 is performed.
303. The wearable device recognizes the language category of the voice information.
304. The wearable device judges whether the language category is matched with any one of a plurality of preset language categories. If so, step 305 is performed; if there is no match, step 306 is performed.
305. The wearable device converts the voice information into first text information matched with the language type, and takes the first text information as information to be processed.
306. The wearable device sends the voice information to the network server, so that the network server converts the voice information into text and translates it into second text information according to a language model of a preset language, and the second text information is used as the information to be processed.
307. The wearable device extracts target information from information to be replied, wherein the information to be replied comprises information to be processed.
The target information comprises at least one of keywords and symbols, and extracting the target information from the information to be replied includes extracting the target information from the information to be processed.
308. When the wearable device determines whether the sentence type corresponding to the information to be replied is the preset sentence type according to the target information, if so, step 309 is executed; if not, the process is ended.
The preset sentence types include interrogative sentences and rhetorical questions.
In the embodiment of the invention, the wearable device can judge the sentence type corresponding to the information to be replied according to the keywords and/or symbols in the target information, where the sentence type may be an interrogative sentence, a rhetorical question, a declarative sentence or an exclamatory sentence. If the sentence type corresponding to the information to be replied is an interrogative sentence or a rhetorical question, it is a preset sentence type and step 309 is executed; if it is a declarative or exclamatory sentence, it is not a preset sentence type and the process ends.
309. The wearable device segments the information to be replied into words and tags each word with its part of speech, obtaining a plurality of tagged words.
310. The wearable device calculates a weight value for each tagged word.
311. The wearable device determines the labeled words with the weight values larger than or equal to a preset threshold value as keywords, and generates semantic analysis results of the information to be replied according to the keywords.
As an optional implementation manner, the wearable device can acquire semantic resources, analyze and process them to build a semantic analysis model, and then import the obtained information to be replied into the model. A semantic analysis result for each keyword in the information to be replied is obtained from how that keyword matches in the model, and the results for all keywords are integrated to generate the semantic analysis result of the information to be replied. In this embodiment, obtaining per-keyword results from the semantic analysis model and integrating them can improve the efficiency of the semantic analysis process.
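Steps 309-311 (segment, weight, keep the heavy words as keywords) can be sketched minimally as below. Plain whitespace splitting and frequency-based weights are assumptions; a real system would use a proper tokenizer and POS tagger (e.g. jieba for Chinese) and a learned weighting.

```python
from collections import Counter

def extract_keywords(text,
                     stopwords=frozenset({"the", "a", "you", "to", "will"}),
                     threshold=2):
    """Weight each non-stopword by its frequency and keep words whose
    weight meets the preset threshold as keywords."""
    words = [w.strip("?.!,").lower() for w in text.split()]
    weights = Counter(w for w in words if w and w not in stopwords)
    return {w for w, n in weights.items() if n >= threshold}
```

The resulting keyword set is what the semantic analysis result is generated from in step 311.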
312. And the wearable device selects related reference information according to the semantic analysis result and the target information.
Wherein the relevant reference information comprises schedule information and/or current location information.
313. The wearable device generates at least one piece of candidate information for selection by a user according to the semantic analysis result and the related reference information.
314. The wearable device detects target candidate information selected by a user from at least one piece of candidate information, and takes the target candidate information as target reply information aiming at the information to be replied.
As can be seen, implementing the semantic-analysis-based information reply method described in fig. 3 can obtain the information to be replied and judge whether it is voice information; when it is, the language type of the voice information is identified and the voice information is converted into text information matched with a preset language type, which serves as the information to be processed, so that extracting the target information from the information to be replied includes extracting it from the information to be processed. The method then judges, according to the target information, whether the sentence type corresponding to the information to be replied is a preset sentence type; when it is, the information to be replied is segmented into words, the words are tagged with parts of speech, keywords are determined from the weight values of the tagged words, and a semantic analysis result is generated from the keywords. At least one piece of candidate information is generated from the semantic analysis result and the related reference information for the user to select, and the wearable device then detects the target candidate information the user selects and uses it as the target reply information for the information to be replied. This reduces the user's manual-input and voice-input operations, saves the time spent entering reply information, shortens the cycle of communication between the user and others, and improves communication efficiency and expression accuracy.
Example IV
Referring to fig. 4, fig. 4 is a schematic structural diagram of a wearable device according to an embodiment of the present invention. As shown in fig. 4, the wearable device may include:
the obtaining unit 401 is configured to obtain information to be replied.
In the embodiment of the invention, the wearable device can include a smart watch, a smart bracelet, smart glasses and the like, which is not limited in the embodiment of the invention.
In the embodiment of the invention, the wearable device includes an intelligent host with a display screen. After obtaining the information to be replied, the obtaining unit 401 can display it on the display screen of the intelligent host, where the wearer can read it.
An extracting unit 402, configured to extract target information from the information to be replied.
Wherein the target information includes at least one of a keyword and a symbol.
In the embodiment of the present invention, the extracting unit 402 is configured to detect symbols in the information to be replied: if a symbol exists, the symbol is extracted and a keyword is then extracted from the information to be replied; if no symbol exists, keywords that can be used to determine the sentence type are detected in the information to be replied and extracted.
The first judging unit 403 is configured to judge a sentence type corresponding to the information to be replied according to the target information.
The preset sentence types include interrogative sentences and rhetorical questions.
In the embodiment of the present invention, the first judging unit 403 is configured to judge, according to the keywords and/or symbols in the target information, the sentence type corresponding to the information to be replied, where the sentence type may be an interrogative sentence, a rhetorical question, a declarative sentence, an exclamatory sentence, and so on. If the sentence type corresponding to the information to be replied is an interrogative sentence or a rhetorical question, it is a preset sentence type; if it is a declarative or exclamatory sentence, it is not a preset sentence type.
As an optional implementation manner, the first judging unit 403 is configured to perform word segmentation on the information to be replied to obtain its sentence words, generate a word vector for each sentence word, and build a word vector matrix of the information to be replied from these word vectors; based on the word vector matrix, the sentence type of the information to be replied is then determined with a preset neural network model. The preset neural network model is obtained by analyzing each training sentence in a training corpus to obtain its word vector matrix and training the network with each sentence's word vector matrix and sentence type. In this embodiment, the sentence type of the information to be replied can be determined accurately from its word vector matrix using the preset neural network model.
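The word-vector route can be illustrated with a deliberately tiny sketch: embed each word, average the vectors into a sentence vector, and score it. The two-dimensional toy embeddings and the comparison rule stand in for a trained neural network and are pure assumptions.

```python
# Toy 2-d word vectors: (question-ness, statement-ness). Invented values.
EMBED = {
    "where": (1.0, 0.0), "when": (1.0, 0.0),
    "home": (0.0, 1.0), "fine": (0.0, 1.0), "am": (0.0, 0.5),
}

def sentence_vector(words):
    """Average the word vectors into one sentence vector (the 'matrix'
    collapsed to a single row for simplicity)."""
    vecs = [EMBED.get(w, (0.0, 0.0)) for w in words]
    n = max(len(vecs), 1)
    return tuple(sum(v[i] for v in vecs) / n for i in range(2))

def classify(words):
    """'question' when the question-ness component dominates; a trained
    model would replace this hand-written comparison."""
    q, s = sentence_vector(words)
    return "question" if q > s else "declarative"
```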
An analysis generating unit 404, configured to perform semantic analysis on the information to be replied to generate at least one candidate information for selection by the user when the first determining unit 403 determines that the sentence type is a preset sentence type.
In the embodiment of the present invention, when the sentence type is a preset sentence type, the analysis generating unit 404 is configured to perform semantic analysis on the information to be replied to obtain a semantic analysis result, and the wearable device generates at least one piece of candidate information for the user to select according to the semantic analysis result and by combining factors of the current time and place, where the candidate information can answer the question in the information to be replied.
The detecting unit 405 is configured to detect target candidate information selected by a user from at least one piece of candidate information, and take the target candidate information as target reply information for the information to be replied.
In the embodiment of the present invention, after the wearer reads the at least one piece of candidate information on the display screen of the wearable device and, according to his or her actual situation, selects one piece of candidate information, the detection unit 405 detects the selected piece as the target candidate information and uses it as the target reply information for the information to be replied, which reduces manual-input and voice-input operations and saves the time spent replying to the information.
As an alternative embodiment, the detection unit 405 is configured to set monitoring points to monitor the user's behavior. After the analysis generating unit 404 performs semantic analysis on the information to be replied and generates at least one piece of candidate information for the user to select, if it is detected that the user has not selected any candidate information within a predetermined time, the detection unit 405 acquires the user's current environmental parameters and, combining these with the user's historical replies before the current moment, either selects from the at least one piece of candidate information the candidate that fits the current environmental parameters as the target reply information, or regenerates target reply information that fits them. In this embodiment, when the user has no spare time to reply, the most suitable candidate information can be selected intelligently and sent to the terminal device, avoiding the emotional estrangement that unanswered messages can cause. For example, if the current environmental parameters indicate that the user is currently in a shopping mall and the user has not replied within the predetermined time, the device can select or generate a holding reply such as "I'm at the mall; I'll reply to you later."
In addition, if the detection unit 405 determines from the current environmental parameters that the user is in a dangerous environment, it sends distress or alarm information to the terminal device.
As an alternative embodiment, the obtaining unit 401 receives the information to be replied; if the wearable device is currently in a lock-screen state, the information to be replied and its related information, including the contact name, number and the like, are displayed on the lock screen interface. The extracting unit 402 further extracts the target information from the information to be replied, and the sentence type corresponding to the information to be replied is judged according to the target information. When the sentence type is a preset sentence type, the analysis generating unit 404 performs semantic analysis on the information to be replied to generate at least one piece of candidate information for the user to select, and the generated candidate information is displayed on the lock screen interface, where the detection unit 405 detects the target candidate information the user selects as the target reply information for the information to be replied. Meanwhile, the obtaining unit 401 may also preset a prompt background for indicating that information to be replied exists and for displaying its quantity. In this embodiment, whether information to be replied exists can be judged from the preset lock-screen prompt background, the quantity of information to be replied is displayed, and a reply can be made without entering an application program: candidate information can be selected directly on the lock screen interface as the target reply information, significantly improving the information reply speed.
As an alternative embodiment, the obtaining unit 401 may preset the information types of incoming information, dividing it into a to-be-processed type and a no-processing-needed type. When performing semantic analysis on the information to be replied, the analysis generating unit 404 also judges which type it belongs to: if it belongs to the to-be-processed type, at least one piece of candidate information is generated from the semantic analysis result for the user to select; if it belongs to the no-processing-needed type, the information is automatically ignored. In this embodiment, information that needs no processing can be ignored automatically while important information receives candidate selection and a quick reply, saving the time spent checking unimportant information and improving reply efficiency.
It will be appreciated that, for groups such as children or the elderly, text recognition and text input ability may be relatively weak, and their command of official languages such as Mandarin may also be poor. Accordingly, in some embodiments, the analysis generation unit 404 is specifically configured to:
Outputting prompt information instructing the user to input a piece of voice detection information when the sentence type is a preset sentence type; after voice detection information input by the user is detected, extracting voiceprint features from the voice detection information, and identifying the age of the user according to the voiceprint features;
when the age of the user falls within an age range representing children or the elderly, detecting whether the language type of the voice detection information matches any one of a plurality of preset language types; if so, converting the information to be replied into to-be-played voice information that matches the language type of the voice detection information, and playing it so that the user can learn the content of the information to be replied; generating at least one piece of candidate information according to the information to be replied, and converting the at least one piece of candidate information into at least one piece of candidate voice information matched with the language type of the voice detection information, each piece of candidate voice information corresponding to one piece of candidate information; and sequentially playing the at least one piece of candidate voice information.
Further, the detecting unit 405 is configured to detect target candidate information selected by a user from at least one piece of candidate information, and take the target candidate information as target reply information for the information to be replied, including:
Detecting target reply voice information input by a user, and taking the target reply voice information as target reply information aiming at the information to be replied, wherein the target reply voice information is any one of at least one piece of candidate voice information.
In this way, users with a weak command of the official language (fluent only in a local dialect) or weak text input ability, such as the elderly or children, can still communicate comfortably using the wearable device, improving the user experience.
As can be seen, the wearable device shown in fig. 4 can obtain the information to be replied and extract the target information from it; judge, according to the target information, whether the sentence type corresponding to the information to be replied is a preset sentence type; when it is, perform semantic analysis on the information to be replied and generate at least one piece of candidate information for the user to select; and then detect the target candidate information the user selects from the at least one piece of candidate information and use it as the target reply information for the information to be replied. This reduces the user's manual-input and voice-input operations, saves the time spent entering reply information, shortens the cycle of communication between the user and others, and improves communication efficiency and expression accuracy.
Example five
Referring to fig. 5, fig. 5 is a schematic structural diagram of another wearable device according to an embodiment of the present invention. The wearable device shown in fig. 5 is optimized by the wearable device shown in fig. 4. In comparison to the wearable device shown in fig. 4, the wearable device shown in fig. 5 may further include:
a second judging unit 406, configured to identify the information to be replied after the obtaining unit 401 obtains the information to be replied and before the extracting unit 402 extracts the target information from the information to be replied, and judge whether the information to be replied is voice information;
a recognition unit 407, configured to recognize a language type of the voice information when the second determination unit 406 determines that the information to be replied is the voice information;
a third judging unit 408, configured to judge whether the language type of the voice information matches any one of a plurality of preset language types;
a conversion unit 409, configured to convert the voice information into first text information that matches the language type of the voice information and use the first text information as information to be processed when the third determination unit 408 determines that the language type of the voice information matches any one of a plurality of preset language types;
a translation unit 410, configured to send the voice information to the web server when the third determination unit 408 determines that the language type of the voice information does not match any one of the plurality of preset language types, so that the web server translates the voice information into second text information according to a language model of a preset language, and to use the second text information as the information to be processed;
the extracting unit 402 is specifically configured to extract target information from information to be processed.
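The preprocessing pipeline of units 406–410 can be summarized in a short sketch: decide whether the incoming message is voice, identify its language, then either transcribe it locally (language matches a preset type) or fall back to the server-side translation. All helper names and their bodies are placeholder assumptions; the patent does not name concrete recognizers.

```python
# Minimal sketch of the preprocessing pipeline (units 406-410):
# voice detection -> language identification -> transcribe-or-translate.

PRESET_LANGUAGES = {"mandarin", "english"}   # assumed preset language types

def is_voice(message) -> bool:
    return isinstance(message, bytes)        # assumption: text arrives as str

def identify_language(audio: bytes) -> str:
    return "mandarin"                        # placeholder language recognizer

def transcribe(audio: bytes, language: str) -> str:
    return f"[{language} transcript]"        # placeholder on-device ASR

def server_translate(audio: bytes) -> str:
    return "[server translation]"            # placeholder web-server call

def to_pending_text(message) -> str:
    """Return the 'information to be processed' handed to the extracting unit."""
    if not is_voice(message):
        return message                       # already text, pass through
    lang = identify_language(message)
    if lang in PRESET_LANGUAGES:
        return transcribe(message, lang)     # "first text information"
    return server_translate(message)         # "second text information"
```

Downstream, the extracting unit 402 would operate only on the string this function returns, regardless of which branch produced it.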
The analysis generating unit 404 is specifically configured to perform semantic analysis on the information to be replied when the first judging unit judges that the sentence type is a preset sentence type, and obtain a semantic analysis result of the information to be replied; selecting related reference information according to the semantic analysis result and the target information, wherein the related reference information comprises schedule information and/or current position information; and generating at least one piece of candidate information for selection by a user according to the semantic analysis result and the related reference information.
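Candidate generation from the semantic analysis result plus reference information (schedule and/or current position) might look like the following sketch. The intent labels, reply templates, and function signature are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch of unit 404's candidate generation: combine the
# semantic analysis result with schedule and location reference info
# to propose at least one reply for the user to pick from.

def generate_candidates(semantics: dict, schedule: list, location: str) -> list:
    """Build candidate replies from the analysis result and reference info."""
    candidates = []
    if semantics.get("intent") == "ask_availability":
        if schedule:                         # schedule info informs the reply
            candidates.append(f"I'm busy with {schedule[0]}, another time?")
        candidates.append("Sure, I'm free.")
    if semantics.get("intent") == "ask_location":
        candidates.append(f"I'm at {location} right now.")
    return candidates or ["Got it."]         # always offer at least one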
The analysis generating unit 404 is specifically configured to divide the words of the information to be replied when the first judging unit judges that the sentence type is a preset sentence type, and label the divided words with parts of speech to obtain a plurality of labeled words labeled with parts of speech; calculating the weight value of each labeling word; and determining the labeled words with the weight value larger than or equal to a preset threshold value as keywords, and generating semantic analysis results of the information to be replied according to the keywords.
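The keyword step above (segment, POS-tag, weight, threshold) can be sketched as follows. The toy tokenizer, tag set, and weight table are illustrative assumptions; the patent specifies neither a concrete segmenter nor a weighting scheme.

```python
# Hedged sketch of keyword extraction: segment the message, tag each
# token's part of speech, weight the tagged tokens, and keep those whose
# weight meets a preset threshold.

POS_WEIGHTS = {"noun": 1.0, "verb": 0.8, "adv": 0.3, "part": 0.1}  # assumed
THRESHOLD = 0.5                                                    # assumed

def segment_and_tag(text: str) -> list:
    """Placeholder word segmentation + part-of-speech tagging."""
    tags = {"dinner": "noun", "tonight": "adv", "join": "verb", "ok": "part"}
    return [(w, tags.get(w, "part")) for w in text.split()]

def extract_keywords(text: str) -> list:
    """Keep tagged words whose weight value clears the preset threshold."""
    tagged = segment_and_tag(text)
    return [w for w, pos in tagged if POS_WEIGHTS.get(pos, 0.0) >= THRESHOLD]
```

In a real implementation the segmenter and tagger would be a library such as a Chinese word-segmentation toolkit, and the weights could come from TF-IDF or similar statistics rather than a fixed table.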
As an optional implementation manner, when the first determining unit determines that the sentence type is the preset sentence type, the analysis generating unit 404 may obtain the semantic analysis result of the information to be replied as follows: obtain semantic resources, analyze and process the semantic resources to construct a semantic analysis model, import the obtained information to be replied into the semantic analysis model, obtain a semantic analysis result for each keyword in the information to be replied according to how that keyword matches in the semantic analysis model, and integrate the semantic analysis results of all the keywords to generate the semantic analysis result of the information to be replied. Because the per-keyword semantic analysis results are obtained from the semantic analysis model and then integrated, this embodiment can improve the efficiency of the semantic analysis process.
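A toy rendering of this optional matching scheme: the "semantic analysis model" is reduced to a keyword-to-meaning lookup table (standing in for a model built from semantic resources), and the per-keyword results are integrated into one result for the whole message. Purely illustrative; the table contents are assumptions.

```python
# Toy version of the semantic-analysis-model matching: per-keyword
# partial results are looked up, then merged into one overall result.

SEMANTIC_MODEL = {                    # assumed resource-derived lookup table
    "where": {"intent": "ask_location"},
    "when": {"intent": "ask_time"},
    "dinner": {"topic": "meal"},
}

def analyze(keywords: list) -> dict:
    """Match each keyword against the model and integrate partial results."""
    result = {}
    for kw in keywords:
        result.update(SEMANTIC_MODEL.get(kw, {}))   # per-keyword result
    return result                                   # integrated result
```

A production model would be learned from the semantic resources rather than hand-written, but the match-then-integrate shape is the same.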
As can be seen, implementing the wearable device shown in fig. 5 may obtain information to be replied, determine whether it is voice information, identify the language type of the voice information when it is determined to be voice information, convert the voice information into text information matched with a preset language type, and use the text information as information to be processed, where extracting target information from the information to be replied includes extracting target information from the information to be processed; judge according to the target information whether the sentence type corresponding to the information to be replied is a preset sentence type; when the sentence type is the preset sentence type, perform word segmentation on the information to be replied, perform part-of-speech tagging on the segmented words, determine keywords through the weight values of the tagged words, generate a semantic analysis result according to the keywords, and generate at least one piece of candidate information for the user to select according to the semantic analysis result and related reference information; and then detect the target candidate information selected by the user from the at least one piece of candidate information and use it as the target reply information for the information to be replied. This reduces the user's manual and voice input operations, saves the time needed to input reply information, shortens the cycle of communicating with other people, and improves communication efficiency and expression accuracy.
Example six
Referring to fig. 6, fig. 6 is a schematic structural diagram of another wearable device according to an embodiment of the present invention. As shown in fig. 6, the wearable device may include:
a memory 601 in which executable program codes are stored;
a processor 602 coupled to the memory 601;
the processor 602 invokes the executable program codes stored in the memory 601 to execute any one of the information reply methods based on semantic analysis shown in fig. 1 to 3.
The embodiment of the invention discloses a computer readable storage medium which stores a computer program, wherein the computer program enables a computer to execute any one of the information reply methods based on semantic analysis shown in fig. 1-3.
The embodiments of the present invention also disclose a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform some or all of the steps of the method as in the method embodiments above.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the above embodiments may be implemented by a program that instructs associated hardware. The program may be stored in a computer readable storage medium, including Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disk memory, magnetic disk memory, tape memory, or any other computer-readable medium that can be used to carry or store data.
The information reply method based on semantic analysis and the wearable device disclosed in the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method and core ideas of the present invention. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the ideas of the present invention; in view of the above, the content of this description should not be construed as limiting the present invention.

Claims (8)

1. An information reply method based on semantic analysis, the method comprising:
obtaining information to be replied;
identifying the information to be replied, and judging whether the information to be replied is voice information or not;
when the information to be replied is judged to be the voice information, recognizing the language type of the voice information;
judging whether the language type of the voice information is matched with any one of a plurality of preset language types;
if so, converting the voice information into first text information matched with the language type of the voice information, and taking the first text information as information to be processed;
if not, sending the voice information to a network server, so that the network server translates the voice information into second text information according to a language model of a preset language, and taking the second text information as the information to be processed;
extracting target information from the information to be processed, wherein the target information comprises at least one of keywords and symbols;
judging the sentence type corresponding to the information to be replied according to the target information;
when the sentence type is a preset sentence type, carrying out semantic analysis on the information to be replied to generate at least one piece of candidate information for selection by a user; the preset sentence types comprise interrogative sentences or rhetorical questions;
and detecting target candidate information selected by the user from the at least one piece of candidate information, and taking the target candidate information as target reply information aiming at the information to be replied.
2. The method of claim 1, wherein when the sentence type is a preset sentence type, performing semantic analysis on the information to be replied to generate at least one candidate information for selection by a user, including:
outputting consultation information prompting the user to input voice detection information when the sentence type is a preset sentence type;
after voice detection information input by the user is detected, extracting voiceprint features from the voice detection information, and identifying the age of the user according to the voiceprint features;
when the age of the user is in the age range for representing children or old people, detecting whether the language type of the voice detection information is matched with any one of the plurality of preset language types;
if so, converting the information to be replied into to-be-played voice information matched with the language type of the voice detection information, and playing the to-be-played voice information so that the user can learn the content of the information to be replied;
generating at least one piece of candidate information according to the information to be replied, converting the at least one piece of candidate information into at least one piece of candidate voice information matched with the language type of the voice detection information, wherein one piece of candidate voice information corresponds to one piece of candidate information;
sequentially playing the at least one piece of candidate voice information;
the detecting the target candidate information selected by the user from the at least one piece of candidate information, taking the target candidate information as target reply information aiming at the information to be replied, includes:
Detecting target reply voice information input by the user, and taking the target reply voice information as target reply information aiming at the information to be replied, wherein the target reply voice information is any one candidate voice information in the at least one candidate voice information.
3. The method of claim 1, wherein when the sentence type is a preset sentence type, performing semantic analysis on the information to be replied to generate at least one candidate information for selection by a user, including:
when the sentence type is a preset sentence type, carrying out semantic analysis on the information to be replied to obtain a semantic analysis result of the information to be replied;
selecting related reference information according to the semantic analysis result and the target information, wherein the related reference information comprises schedule information and/or current position information;
and generating at least one piece of candidate information for selection by a user according to the semantic analysis result and the related reference information.
4. The method of claim 3, wherein when the sentence type is a preset sentence type, performing semantic analysis on the information to be replied to obtain a semantic analysis result of the information to be replied to, including:
When the sentence type is a preset sentence type, dividing words of the information to be replied, and marking the parts of speech of the divided words to obtain a plurality of marked words marked with the parts of speech;
calculating the weight value of each labeling word;
and determining the labeled words with the weight values larger than or equal to a preset threshold value as keywords, and generating semantic analysis results of the information to be replied according to the keywords.
5. A wearable device, comprising:
the acquisition unit is used for acquiring the information to be replied;
the second judging unit is used for identifying the information to be replied and judging whether the information to be replied is voice information or not;
the identification unit is used for identifying the language type of the voice information when the second judgment unit judges that the information to be replied is the voice information;
the third judging unit is used for judging whether the language type of the voice information is matched with any one of a plurality of preset language types;
the conversion unit is used for converting the voice information into first text information matched with the language type of the voice information when the third judgment unit judges that the language type of the voice information is matched with any one of the preset language types, and taking the first text information as information to be processed;
the translation unit is used for sending the voice information to a network server when the third judgment unit judges that the language type of the voice information is not matched with any one of the preset language types, so that the network server translates the voice information into second text information according to a language model of the preset language, and takes the second text information as the information to be processed;
an extracting unit for extracting target information from the information to be processed, wherein the target information comprises at least one of keywords and symbols;
the first judging unit is used for judging the sentence type corresponding to the information to be replied according to the target information;
the analysis generating unit is used for carrying out semantic analysis on the information to be replied when the first judging unit judges that the sentence type is the preset sentence type, and generating at least one piece of candidate information for the user to select; the preset sentence types comprise interrogative sentences or rhetorical questions;
the detection unit is used for detecting target candidate information selected by the user from the at least one piece of candidate information, and taking the target candidate information as target reply information aiming at the information to be replied.
6. The wearable device of claim 5, wherein the analysis generating unit is configured to perform semantic analysis on the information to be replied when the first judging unit judges that the sentence type is a preset sentence type, and the manner of generating at least one piece of candidate information for selection by a user is specifically:
the analysis generating unit is used for outputting consultation information prompting the user to input voice detection information when the first judging unit judges that the sentence type is the preset sentence type; after the voice detection information input by the user is detected, extracting voiceprint features from the voice detection information, and identifying the age of the user according to the voiceprint features; detecting, when the age of the user is within an age range representing children or the elderly, whether the language type of the voice detection information matches any one of the plurality of preset language types; if so, converting the information to be replied into to-be-played voice information matched with the language type of the voice detection information, and playing the to-be-played voice information so that the user can learn the content of the information to be replied; generating at least one piece of candidate information according to the information to be replied, and converting the at least one piece of candidate information into at least one piece of candidate voice information matched with the language type of the voice detection information, wherein each piece of candidate voice information corresponds to one piece of candidate information; and sequentially playing the at least one piece of candidate voice information;
The detection unit is specifically configured to detect target reply voice information input by the user, and take the target reply voice information as target reply information for the information to be replied, where the target reply voice information is any one candidate voice information in the at least one candidate voice information.
7. The wearable device of claim 5, wherein the analysis generating unit is configured to perform semantic analysis on the information to be replied when the first judging unit judges that the sentence type is a preset sentence type, and the manner of generating at least one piece of candidate information for selection by a user is specifically:
the analysis generating unit is used for carrying out semantic analysis on the information to be replied when the first judging unit judges that the sentence type is the preset sentence type, and obtaining a semantic analysis result of the information to be replied; selecting related reference information according to the semantic analysis result and the target information, wherein the related reference information comprises schedule information and/or current position information; and generating at least one piece of candidate information for selection by a user according to the semantic analysis result and the related reference information.
8. The wearable device of claim 7, wherein the analysis generating unit is configured to perform semantic analysis on the information to be replied when the first judging unit judges that the sentence type is a preset sentence type, and the manner of obtaining the semantic analysis result of the information to be replied is specifically as follows:
the analysis generating unit is used for dividing the words of the information to be replied when the first judging unit judges that the sentence type is the preset sentence type, and marking the parts of speech of the divided words to obtain a plurality of marked words marked with the parts of speech; calculating the weight value of each labeling word; and determining the labeled words with the weight value larger than or equal to a preset threshold value as keywords, and generating semantic analysis results of the information to be replied according to the keywords.
CN201811281974.6A 2018-10-31 2018-10-31 Information reply method based on semantic analysis and wearable equipment Active CN109492221B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811281974.6A CN109492221B (en) 2018-10-31 2018-10-31 Information reply method based on semantic analysis and wearable equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811281974.6A CN109492221B (en) 2018-10-31 2018-10-31 Information reply method based on semantic analysis and wearable equipment

Publications (2)

Publication Number Publication Date
CN109492221A CN109492221A (en) 2019-03-19
CN109492221B true CN109492221B (en) 2023-06-30

Family

ID=65691787

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811281974.6A Active CN109492221B (en) 2018-10-31 2018-10-31 Information reply method based on semantic analysis and wearable equipment

Country Status (1)

Country Link
CN (1) CN109492221B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110120976A (en) * 2019-05-10 2019-08-13 南京硅基智能科技有限公司 A kind of the information intelligent receiving/transmission method and system of the virtual mobile phone based on cloud
CN110170081B (en) * 2019-05-14 2021-09-07 广州医软智能科技有限公司 ICU instrument alarm processing method and system
CN112003778B (en) * 2020-07-17 2023-02-28 北京百度网讯科技有限公司 Message processing method, device, equipment and computer storage medium
CN111916052B (en) * 2020-07-30 2021-04-27 北京声智科技有限公司 Voice synthesis method and device
CN112365892A (en) * 2020-11-10 2021-02-12 杭州大搜车汽车服务有限公司 Man-machine interaction method, device, electronic device and storage medium
CN114722300A (en) * 2022-06-07 2022-07-08 深圳追一科技有限公司 Message reminding method and device, electronic equipment and storage medium

Citations (1)

Publication number Priority date Publication date Assignee Title
CN107818781A (en) * 2017-09-11 2018-03-20 远光软件股份有限公司 Intelligent interactive method, equipment and storage medium

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US5519608A (en) * 1993-06-24 1996-05-21 Xerox Corporation Method for extracting from a text corpus answers to questions stated in natural language by using linguistic analysis and hypothesis generation
JP3715470B2 (en) * 1999-06-30 2005-11-09 株式会社東芝 Response generation apparatus, dialogue management apparatus, response generation method, and computer-readable recording medium storing response generation program
US7711570B2 (en) * 2001-10-21 2010-05-04 Microsoft Corporation Application abstraction with dialog purpose
US10262062B2 (en) * 2015-12-21 2019-04-16 Adobe Inc. Natural language system question classifier, semantic representations, and logical form templates
CN105447207B (en) * 2016-01-08 2018-07-31 北京光年无限科技有限公司 A kind of question and answer exchange method and system towards intelligent robot
CN107729468B (en) * 2017-10-12 2019-12-17 华中科技大学 answer extraction method and system based on deep learning
CN108170835A (en) * 2018-01-12 2018-06-15 深圳市富途网络科技有限公司 A kind of intelligent customer service system of combinatorial artificial and AI

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN107818781A (en) * 2017-09-11 2018-03-20 远光软件股份有限公司 Intelligent interactive method, equipment and storage medium

Also Published As

Publication number Publication date
CN109492221A (en) 2019-03-19

Similar Documents

Publication Publication Date Title
CN109492221B (en) Information reply method based on semantic analysis and wearable equipment
US10438586B2 (en) Voice dialog device and voice dialog method
CN109346059B (en) Dialect voice recognition method and electronic equipment
CN104598644B (en) Favorite label mining method and device
CN110597952A (en) Information processing method, server, and computer storage medium
CN107122807B (en) Home monitoring method, server and computer readable storage medium
CN108388553B (en) Method for eliminating ambiguity in conversation, electronic equipment and kitchen-oriented conversation system
CN111179935B (en) Voice quality inspection method and device
CN109299399B (en) Learning content recommendation method and terminal equipment
CN108304387B (en) Method, device, server group and storage medium for recognizing noise words in text
CN108766431B (en) Automatic awakening method based on voice recognition and electronic equipment
CN108009297B (en) Text emotion analysis method and system based on natural language processing
CN110287318B (en) Service operation detection method and device, storage medium and electronic device
CN115050077A (en) Emotion recognition method, device, equipment and storage medium
CN111881297A (en) Method and device for correcting voice recognition text
CN112053692A (en) Speech recognition processing method, device and storage medium
CN110955818A (en) Searching method, searching device, terminal equipment and storage medium
CN112256827A (en) Sign language translation method and device, computer equipment and storage medium
CN112232276A (en) Emotion detection method and device based on voice recognition and image recognition
CN110874534A (en) Data processing method and data processing device
CN110956958A (en) Searching method, searching device, terminal equipment and storage medium
CN109408175B (en) Real-time interaction method and system in general high-performance deep learning calculation engine
CN108305629B (en) Scene learning content acquisition method and device, learning equipment and storage medium
CN112581297B (en) Information pushing method and device based on artificial intelligence and computer equipment
CN113051384B (en) User portrait extraction method based on dialogue and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant