CN110275691A - Automatic reply method, device, terminal and storage medium for intelligent voice wake-up - Google Patents
Automatic reply method, device, terminal and storage medium for intelligent voice wake-up
- Publication number
- CN110275691A CN110275691A CN201810213878.1A CN201810213878A CN110275691A CN 110275691 A CN110275691 A CN 110275691A CN 201810213878 A CN201810213878 A CN 201810213878A CN 110275691 A CN110275691 A CN 110275691A
- Authority
- CN
- China
- Prior art keywords
- information
- intelligent sound
- keyword
- services
- address
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/52—Network services specially adapted for the location of the user terminal
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04J—MULTIPLEX COMMUNICATION
- H04J3/00—Time-division multiplex systems
- H04J3/02—Details
- H04J3/06—Synchronising arrangements
- H04J3/0635—Clock or time synchronisation in a network
- H04J3/0638—Clock or time synchronisation among nodes; Internode synchronisation
- H04J3/0658—Clock or time synchronisation among packet nodes
- H04J3/0661—Clock or time synchronisation among packet nodes using timestamps
- H04J3/0667—Bidirectional timestamps, e.g. NTP or PTP for compensation of clock drift and for compensation of propagation delays
Abstract
The present invention relates to the field of smart speakers and discloses a method, device, terminal and storage medium for intelligent voice wake-up, including: obtaining the IP address and device ID of an intelligent voice terminal, where the IP address is used to resolve the service information of the terminal, the service information is used to match associated keywords, and the device ID identifies the intelligent voice terminal; configuring at least one associated keyword according to the service information; generating a reply phrase from one or more of the keywords; and feeding the reply phrase back to the intelligent voice terminal. The wake-up method of the present invention can respond to the user according to time and place; the keywords in the reply vary richly, the degree of intelligence of the response is high, and the user experience is enhanced.
Description
Technical field
The invention belongs to the field of intelligent voice devices, and more particularly relates to an automatic reply method, device, terminal and storage medium for intelligent voice wake-up.
Background art
A smart speaker is a speaker with a touch screen. Through a built-in intelligent system, the user can run various Android apps and, simply by tapping the touch screen, shop, watch films, play games and listen to music. A smart speaker has a customizable wake-word function: after receiving speech containing the wake word, the speaker turns itself on automatically. After waking, a conventional smart speaker usually responds to the user with a factory-set "ding-dong" chime, some other fixed ringtone, or a fixed reply sentence to indicate that the speaker is on. Such replies are simple and monotonous and cannot adapt to the environment in which the smart speaker is used, so the user's ear easily tires of them and the user experience suffers.
Summary of the invention
The technical solution disclosed by the invention solves at least the following technical problem: in the prior art, the reply with which a smart speaker responds to the user after waking is single and fixed, cannot be adapted to the environment in which the smart speaker is used, and the degree of intelligence of the response is low.
One or more embodiments of the invention disclose an automatic reply method for intelligent voice wake-up, comprising:
obtaining the IP address and device ID of an intelligent voice terminal, and resolving the service information of the terminal according to the IP address, where the service information is used to match related keywords and the device ID identifies the intelligent voice terminal;
configuring at least one related keyword according to the service information; editing the keywords to generate a corresponding reply phrase;
and feeding the reply phrase back to the intelligent voice terminal.
In one or more embodiments of the invention, the service information includes time information, location information and weather information of the intelligent voice terminal; the location information is the address to which the terminal's latitude and longitude map, and the weather information is obtained according to that address.
In one or more embodiments of the invention, generating the corresponding reply phrase from the keywords includes:
configuring keywords for the service information and setting their word-order positions in the reply phrase;
analyzing the part of speech of each keyword and setting its sentence constituent in the reply phrase;
and selecting, from a sentence template library, a template sentence containing those constituents and matching the keywords into the corresponding word-order positions of the template sentence.
In one or more embodiments of the invention, the method further includes:
recognizing the user's identity by voiceprint and matching a keyword for addressing the user; or
recognizing the user's age and gender by voiceprint and obtaining a keyword for addressing the user.
One or more embodiments of the invention disclose an automatic reply device for intelligent voice wake-up, comprising:
a parsing module, for obtaining the IP address and device ID of an intelligent voice terminal and resolving the service information of the terminal according to the IP address, where the service information is used to match related keywords and the device ID identifies the intelligent voice terminal;
a configuration module, for configuring at least one related keyword according to the service information;
an editing module, for generating a corresponding reply phrase from the keywords;
and an output module, for feeding the reply phrase back to the intelligent voice terminal.
In one or more embodiments of the invention, the service information resolved by the parsing module includes time information, location information and weather information of the intelligent voice terminal; the location information is the address to which the latitude and longitude map, and the weather information is obtained according to that address.
In one or more embodiments of the invention, the editing module includes:
a word-order setting unit, for configuring keywords for the service information and setting the word-order position of each keyword in the reply phrase;
a constituent determination unit, for analyzing the part of speech of each keyword and determining its sentence constituent in the reply phrase;
and a generation unit, for selecting, from a sentence template library, a template sentence containing those constituents and matching the keywords into the template sentence.
In one or more embodiments of the invention, the device further includes:
a user matching module, for recognizing the user's identity by voiceprint and matching a keyword for addressing the user; or
for recognizing the user's age and gender by voiceprint and obtaining a keyword for addressing the user.
One or more embodiments of the invention disclose an intelligent voice terminal, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor, when executing the computer program, implements the steps of the automatic reply method for intelligent voice wake-up according to any one of claims 1 to 4.
One or more embodiments of the invention disclose a non-transient computer-readable storage medium storing computer-executable instructions for controlling execution of the steps of the automatic reply method for intelligent voice wake-up according to any one of claims 1 to 4.
Description of the drawings
Fig. 1 is a schematic diagram of an application scenario of an embodiment of the present invention;
Fig. 2 is a schematic diagram of a data flow of an embodiment of the present invention;
Fig. 3 is a flowchart of the method in one embodiment of the invention;
Fig. 4 is another flowchart of the method in one embodiment of the invention;
Fig. 5 is another flowchart of the method in one embodiment of the invention;
Fig. 6 is another flowchart of the method in one embodiment of the invention;
Fig. 7 is a specific flowchart of the method in one embodiment of the invention;
Fig. 8 is a structural diagram of the automatic reply device for intelligent voice wake-up provided in an embodiment of the present invention;
Fig. 9 is another structural diagram of the automatic reply device for intelligent voice wake-up provided in an embodiment of the present invention;
Fig. 10 is a schematic diagram of the intelligent voice terminal provided in an embodiment of the present invention.
Specific embodiment
To enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them; the drawings show presently preferred embodiments. The present invention may be implemented in many different forms and is not limited to the embodiments described herein; rather, these embodiments are provided so that the disclosure of the present invention is more thorough and complete.
Unless otherwise defined, all technical and scientific terms used in this specification have the same meaning as commonly understood by those skilled in the technical field to which the invention belongs. The terms used in this specification are for describing specific embodiments only and are not intended to limit the invention. The term "and/or" in the claims, specification and drawings includes any and all combinations of one or more of the associated listed items. "First", "second", "third" and the like are used to distinguish different objects, not to describe a particular order.
In addition, the technical features involved in the various embodiments of the present invention described below may be combined with each other as long as they do not conflict.
Fig. 1 shows a possible application scenario of the automatic reply method for intelligent voice wake-up provided by the first embodiment of the present invention. The scenario includes an intelligent voice terminal 11, a cloud server 12 and a database 13. The intelligent voice terminal 11 sends the server 12 a data flow 11a requesting recognition of the wake-up speech and the keywords for the response; after judging that the wake-up succeeded, the server 12 generates a reply phrase, includes it in data flow 11b, and feeds it back to the intelligent voice terminal 11 for output. The cloud server 12 connects to the database 13, provides various application services, and handles the various requests of the intelligent voice terminal 11. The cloud server 12 is a resource pool composed of multiple servers 121. Each server may be one physical server or a logical server virtualized from multiple physical servers. A server may also be a cluster of multiple intercommunicating servers, with the functional modules distributed across the servers of the cluster.
The user 10 speaks a wake word to the intelligent voice terminal 11 to wake the device; the cloud server 12 obtains the device's IP address, resolves the device's service information, matches corresponding keywords from the database 13, generates a reply phrase, and feeds it back to the intelligent voice terminal 11, which responds to the user 10 by voice output. The response thus suits the time and place, the reply varies richly, the degree of intelligence is high, and the user experience is enhanced.
Fig. 2 is a schematic diagram of one embodiment of the invention, showing the data flow of the automatic reply method for intelligent voice wake-up. The intelligent voice terminal sends a wake-up request to the cloud server; the cloud server handles the request and its data, resolves the relevant application information to configure keywords, generates a reply phrase, and feeds it back to the intelligent voice terminal, which broadcasts it by voice in response to the user.
Fig. 3 is a flowchart of one embodiment of the invention. The automatic reply method for intelligent voice wake-up of the present invention proceeds as follows:
S31: obtain the IP address and device ID of the intelligent voice terminal, and resolve the terminal's service information according to the IP address, where the service information is used to match related keywords and the device ID identifies the intelligent voice terminal;
S32: configure at least one related keyword according to the service information;
S33: edit the keywords to generate a corresponding reply phrase;
S34: feed the reply phrase back to the intelligent voice terminal.
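Steps S31 to S34 can be sketched as a minimal pipeline. The lookup table standing in for the cloud's IP resolution and the keyword dictionary are invented for illustration; they are not the patent's actual server implementation.

```python
# Minimal sketch of steps S31-S34; the stubbed lookups are hypothetical
# stand-ins for the cloud server's real IP resolution and keyword database.

# S31: resolve service information from the terminal's IP address (stubbed).
def resolve_service_info(ip_address):
    return {"time": "morning", "place": "Shenzhen", "weather": "cold"}

# S32: configure at least one keyword per item of service information.
KEYWORDS = {"morning": "good morning", "Shenzhen": "Shenzhen", "cold": "keep warm"}

def configure_keywords(info):
    return [KEYWORDS[v] for v in info.values() if v in KEYWORDS]

# S33: edit the keywords into a single reply phrase.
def edit_reply(keywords):
    phrase = ", ".join(keywords) + "."
    return phrase[0].upper() + phrase[1:]

# S34: the reply is returned for the terminal identified by its device ID.
def reply_for(ip_address, device_id):
    info = resolve_service_info(ip_address)
    return device_id, edit_reply(configure_keywords(info))

device, phrase = reply_for("203.0.113.5", "SPK-001")
```

The same four functions map one-to-one onto the parsing, configuration, editing and output modules of the device embodiment described later.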
In step S31, the service information of the intelligent voice device is resolved and extracted via the IP address. The service information includes the application information resolved by the servers 121 serving the device: the terminal's time information, location information and weather information. The time information is synchronized automatically from an Internet time server (a public NTP server) and includes the date with the corresponding lunar-calendar date, the time of day, the day of the week, and so on. The location information is the address to which the latitude and longitude map, and the weather information, including temperature and weather conditions, is obtained synchronously according to that address.
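The service information described above can be modeled as a simple record. The field names and the stubbed geocoding and weather lookups below are illustrative assumptions, not the servers' real interfaces.

```python
from dataclasses import dataclass

@dataclass
class ServiceInfo:
    date: str         # NTP-synchronized date, e.g. "2018-10-01"
    weekday: str      # day of the week
    moment: str       # time of day, e.g. "10:40:35"
    address: str      # address mapped from latitude/longitude
    weather: str      # conditions fetched for that address
    temp_range: tuple # (low, high) in degrees Celsius

# Hypothetical lookups standing in for geocoding and a weather service.
def address_from(lat, lon):
    return "Shenzhen" if (22.0 < lat < 23.0 and 113.0 < lon < 115.0) else "unknown"

def weather_for(address):
    return ("sunny", (19, 29)) if address == "Shenzhen" else ("unknown", (None, None))

conditions, temps = weather_for(address_from(22.54, 114.06))
info = ServiceInfo("2018-10-01", "Monday", "10:40:35", "Shenzhen", conditions, temps)
```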
In step S32, according to the extracted application information, keywords related to the application information are matched/configured from the database 13, or relevant keywords are configured via a network search based on the service information. Each item of service information configures at least one keyword; a reply may combine the keywords of several items of application information, or contain the keyword of only one of them.
In step S33, the keywords are combined and edited into a single reply phrase. The cloud server may define fixed sentence patterns and insert the keywords at the corresponding positions, or directly match a preset reply phrase in the database. The reply may include a weather forecast, a reminder derived from the forecast, and a blessing, greeting or prompt derived from the time.
In embodiments of the present invention, to avoid an overlong reply that wears on the user, the reply may be a sentence containing only one of the above items of application information. Specifically, the word count of the reply may be preset and priorities assigned to the items of application information; if the sentence generated from the keywords is too long, exceeding the preset threshold, only the keywords of the higher-priority application information are retained.
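The length control described above can be sketched as dropping low-priority keywords until the reply fits. The priority values and the character threshold are invented for illustration.

```python
# Keep only the highest-priority keywords until the reply fits the preset
# length; the priorities and the threshold are illustrative assumptions.
def trim_by_priority(keywords, priority, max_len=20):
    kept = sorted(keywords, key=lambda k: priority[k], reverse=True)
    while kept and len(", ".join(kept)) > max_len:
        kept.pop()  # drop the lowest-priority keyword still present
    return kept

prio = {"happy National Day": 3, "keep warm": 2, "bring an umbrella": 1}
result = trim_by_priority(list(prio), prio, max_len=30)
```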
In step S34, the reply phrase edited in step S33 is fed back to the intelligent voice terminal 11 with the corresponding device ID, and its output module outputs it by voice in response to the user.
In the automatic reply method for intelligent voice wake-up described in the embodiment of the present invention, the IP address of the intelligent voice device is obtained and the device's server is resolved to obtain application information such as the device's time, place and weather; corresponding keywords are configured according to this application information and compiled into a reply phrase. The user can thus be answered according to time and place, the keywords in the reply vary richly, the degree of intelligence is high, and the user experience is enhanced. For example, on National Day a reply with the blessing "Happy National Day" may be obtained; when the local weather is cold, a reminder such as "The weather is cold today, remember to keep warm" may be obtained.
In an embodiment of the present invention, referring to Fig. 4, the method of step S33 is as follows:
S41: configure keywords for the service information and set their word-order positions in the reply phrase.
A reply containing keywords of several kinds of service information lets the user receive the corresponding messages and improves the user experience. For the keywords of the different kinds of service information, their word-order positions in the reply are set. For example, a keyword configured for time information, such as the time itself, is matched to the position of a time adverbial, and a keyword matching the address information is matched to the place adverbial in the reply.
S42: analyze the part of speech of each keyword and set its sentence constituent in the reply phrase.
Each configured keyword is assigned a corresponding sentence constituent. The constituent types include subject, predicate, object, attribute, complement, adverbial, predicative, and so on. For example, the keyword corresponding to the terminal's time information may be determined to be a time adverbial, and a keyword configured for location information may be identified as a place adverbial.
S43: select from the sentence template library a template sentence containing the corresponding constituents, and match the keywords into the corresponding word-order positions of the template sentence.
The sentence template library stores a large number of sentence structures: template sentences containing one or more sentence constituents. A template sentence is an incomplete reply sentence lacking the service information; the corresponding keywords are matched and inserted into it to generate a complete reply. The reply is edited by matching the keywords against the template and inserting each keyword at its corresponding position, so that the service information enters the reply through the keywords.
Referring to Fig. 5, the method of step S32 is as follows:
S51: configure weather-related keywords according to the weather information.
The local weather information obtained from the address includes the minimum, maximum, average and real-time temperatures and the weather conditions. This weather information can be broadcast by voice while weather-related keywords are configured. For example, related keywords are configured according to the obtained maximum, minimum and average temperatures: the temperature is judged against a temperature criterion to conclude whether the weather is cold or hot. When it is cold, keywords such as "dress warmly" and "keep warm" may be configured; when it is hot, words such as "prevent sunstroke" and "take care" are configured. Keywords related to the weather conditions are configured likewise; for example, when it is raining, keywords such as "rainfall" and "bring an umbrella" may be configured.
S52: configure a corresponding greeting according to the time information.
In embodiments of the present invention, the current time of the server is obtained and it is judged whether the obtained date is a holiday or a special date set by the user, where a special date is typically a family anniversary or a date entered by the user. If the obtained date is a special date, a corresponding blessing is configured or a locally entered reply is obtained. Otherwise, a related greeting is configured according to the specific time of day, for example according to the following rule:
5:00-12:00 -- good morning; 12:00-18:00 -- good afternoon; 18:00-5:00 -- good night.
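The greeting rule above can be written directly as a function of the hour; note that the "good night" interval wraps past midnight.

```python
# Greeting rule from the description: 5:00-12:00 good morning,
# 12:00-18:00 good afternoon, 18:00-5:00 good night.
def greeting(hour):
    if 5 <= hour < 12:
        return "good morning"
    if 12 <= hour < 18:
        return "good afternoon"
    return "good night"  # 18:00-5:00 wraps past midnight
```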
Illustratively, the service information obtained for an intelligent voice device is: time: October 1, 10:40:35; address: Shenzhen; weather: sunny, 19-29 degrees Celsius. The date is a special holiday, for which the system sets a corresponding reply, such as "Happy National Day". Otherwise, on an ordinary date, the greeting for the corresponding period is configured according to the time information, keywords related to living habits are configured according to the weather conditions, and cold or hot is judged from the temperature information.
In embodiments of the present invention, generally only one keyword is selected for each kind of application information; therefore one keyword must be selected from several candidates according to the data of the application information. The basis of the selection may be the probability with which these keywords have been used in the database to generate sentences, or it may be the user's language habits.
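Probability-weighted selection, as suggested above, can be sketched with a weighted random choice; the usage counts here are invented, and a real system would draw them from the database.

```python
import random

# Pick one keyword per kind of application information, weighted by how often
# each has been used to generate sentences (the counts are hypothetical).
def pick_keyword(candidates, usage_counts, rng):
    weights = [usage_counts[c] for c in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]

counts = {"keep warm": 8, "dress warmly": 2}
choice = pick_keyword(list(counts), counts, random.Random(0))
```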
Referring to Fig. 6, the method of the embodiment of the present invention further includes:
S61: recognize the user's identity by voiceprint and match the keyword for addressing the user;
S62: recognize the user's age and gender by voiceprint and obtain the keyword for addressing the user.
In specific implementation, the method of the invention achieves targeted address feedback. For example: for a user already entered in the cloud, the user's identity is recognized by voiceprint and the keyword for addressing the user is matched. For a user not entered in the cloud, the user's age and gender are recognized by voiceprint. When the detected user is a child, an honorific keyword for children is configured; when the detected user is male and not a child, an honorific keyword for men is configured; when the detected user is female and not a child, an honorific keyword for women is configured.
In embodiments of the present invention, the identity information of multiple users is stored on the intelligent voice terminal device. When a user wakes the device, the user information can be matched automatically to obtain the form of address. For a user who cannot be matched, the user's gender and age are judged from the voiceprint features to obtain the corresponding form of address. When the detected user is a child, an honorific keyword for children such as "child" is configured; when the detected user is male and not a child, an honorific keyword for men such as "sir" is configured; when the detected user is female and not a child, an honorific keyword for women such as "Ms" is configured.
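The branching rule for the form of address can be written directly; the voiceprint recognizer itself is assumed, and only the decision described above is shown.

```python
# Choose the form of address from voiceprint results (steps S61-S62); the
# recognizer producing is_child and gender is assumed, not implemented.
def honorific(is_child, gender):
    if is_child:
        return "child"
    return "sir" if gender == "male" else "Ms"
```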
In another embodiment, referring to Fig. 7, the embodiment of the present invention provides an automatic reply method for intelligent voice wake-up, comprising:
S71: obtain and recognize the wake-up speech of the intelligent voice device.
S72: recognize the user's identity and judge whether the user is an entered user; if so, execute step S721, otherwise execute step S722.
S721: obtain the saved keyword for addressing the user.
S722: recognize the user's age by voiceprint and judge whether the user is a child; if so, execute step S723, otherwise execute step S724.
S723: configure the honorific keyword for children.
S724: recognize the user's gender by voiceprint; if female, execute step S725, otherwise execute step S726.
S725: configure the honorific keyword for women.
S726: configure the honorific keyword for men.
S73: obtain the IP address and device ID of the intelligent voice device, and resolve and extract the device's service information via the IP address; the service information includes the application information of the servers serving the device, and the device ID identifies the intelligent voice device.
S74: configure weather-related keywords according to the local weather forecast.
S75: judge whether the date is a special date; if so, execute step S752, otherwise execute step S751.
S751: configure keywords for the greeting related to the current time.
S752: directly generate a specific reply phrase and execute step S77.
S76: edit one or more keywords to generate a reply phrase.
S77: feed the reply phrase back to the intelligent voice terminal.
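The S71-S77 flow can be sketched end to end as follows. The enrolled-user table, the voiceprint results and the special-date list are all hypothetical stand-ins for the cloud's real data.

```python
# Sketch of the S71-S77 flow; ENROLLED and SPECIAL_REPLIES are hypothetical
# stand-ins for the cloud database.
ENROLLED = {"voiceprint-42": "XXX"}                 # S72/S721: saved addresses
SPECIAL_REPLIES = {"10-01": "Happy National Day"}   # S75/S752: fixed replies

def wake_reply(voiceprint_id, is_child, gender, date, other_keywords):
    # S72: enrolled users get their saved form of address.
    if voiceprint_id in ENROLLED:
        address = ENROLLED[voiceprint_id]
    elif is_child:                                  # S722/S723
        address = "child"
    else:                                           # S724-S726
        address = "sir" if gender == "male" else "Ms"
    # S75: special dates short-circuit to a fixed reply (S752 -> S77).
    if date in SPECIAL_REPLIES:
        return f"{address}, {SPECIAL_REPLIES[date]}!"
    # S751/S76: otherwise assemble the reply from the configured keywords.
    return f"{address}, " + ", ".join(other_keywords) + "."

holiday = wake_reply("voiceprint-42", False, "male", "10-01", [])
ordinary = wake_reply("unknown", False, "female", "06-15",
                      ["good morning", "keep warm"])
```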
In some embodiments of the invention, the server end 12 resolves from the IP address of the intelligent voice device the weather information of the city where it is located, configures keywords of the relevant living habits according to the weather, and generates a reply to remind the user, or directly plays the obtained weather information. Special dates with corresponding preset replies can also be set, for quick response to the user, enhancing the intelligence of the voice terminal 11. A special date is either a holiday entered in the system or a date set by the user. When the user 10 wakes the intelligent voice terminal 11 on a special date, the corresponding reply is obtained directly to respond to the user 10. On an ordinary date, keywords are configured according to the obtained service information and a corresponding reply is generated to respond to the user.
Illustratively, a user has set their form of address to "XXX" and wakes the intelligent voice terminal 11 on an ordinary date. The obtained weather information is rain, and one reply the server end 12 may generate is "XXX, good morning. There is light rain today; stay home and keep me company." If the user wakes the intelligent voice terminal 11 on their birthday, the reply may be "XXX, happy birthday." If the smart device does not recognize the user, it judges the user's gender and age from their voice in order to address them. Of course, the probability of this happening is very small.
In embodiments of the present invention, the wake-up speech can also be analyzed to obtain the user's emotional state, so as to prompt the user to regulate their mood, or to output a soothing sentence that eases the user's mood. This can effectively adjust and manage the user's emotions, raise the health index and reduce safety risks.
Another embodiment of the present invention discloses an automatic reply device for intelligent voice wake-up. Refer to Fig. 8, which shows the structure of an automatic reply device for intelligent voice wake-up in another embodiment of the present invention. The automatic reply device for intelligent voice wake-up illustrated in Fig. 8 corresponds to the automatic reply method for intelligent voice wake-up in the above embodiment. As shown in Fig. 8, the automatic reply device 8 for intelligent voice wake-up includes a parsing module 81, a configuration module 82, an editing module 83 and an output module 84. The functions implemented by the parsing module 81, configuration module 82, editing module 83 and output module 84 correspond one-to-one with the steps of the above embodiment; to avoid repetition, this embodiment does not describe them in detail one by one.
The parsing module 81 is configured to obtain the IP address and device ID of the intelligent voice terminal; the IP address is used to parse the service information of the intelligent terminal, the service information is used to match associated keywords, and the device ID is used to identify the intelligent terminal. The parsing module 81 is mainly configured to:
parse the address information of the intelligent voice terminal from the IP address;
parse the weather information for the address information, including the highest temperature, lowest temperature, mean temperature and weather conditions; and
parse a time server to acquire the time information, including the date, day of the week and time of day.
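The three parsing steps above can be sketched in Python. The lookup tables below are hypothetical stand-ins for the geolocation database, weather service and time server that the parsing module would actually query; `parse_service_info` is an illustrative name, not a function from the patent.

```python
import datetime

# Hypothetical stand-ins for an IP-geolocation database and a weather
# service; a real parsing module 81 would query external servers.
IP_TO_ADDRESS = {"203.0.113.5": "Shenzhen"}
WEATHER_BY_ADDRESS = {
    "Shenzhen": {"high_c": 24, "low_c": 18, "mean_c": 21,
                 "condition": "light rain"},
}

def parse_service_info(ip, now=None):
    """Assemble the service information described for parsing module 81:
    address from IP, weather for that address, and time details."""
    now = now or datetime.datetime.now()
    address = IP_TO_ADDRESS.get(ip)
    return {
        "address": address,
        "weather": WEATHER_BY_ADDRESS.get(address),
        "time": {
            "date": now.date().isoformat(),
            "weekday": now.strftime("%A"),   # day of the week
            "moment": now.strftime("%H:%M"), # time of day
        },
    }

info = parse_service_info("203.0.113.5",
                          datetime.datetime(2018, 3, 15, 8, 0))
print(info["weather"]["condition"])
```

Each field of the returned dictionary corresponds to one of the three parsing steps, so the configuration module can treat the whole structure as the "service information" from which keywords are drawn.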
The configuration module 82 is configured to configure associated keywords according to the service information, the service information being configured with at least one keyword.
The editing module 83 is configured to generate a reply phrase from the one or more keywords.
The output module 84 is configured to feed the reply phrase back to the intelligent voice terminal.
Referring to Fig. 9, the editing module 83 includes:
a configuration module 91 for configuring at least one associated keyword according to the service information;
an editing module 92 for generating a corresponding reply phrase according to the keywords; and
an output module 93 for feeding the reply phrase back to the intelligent voice terminal.
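A minimal sketch of the editing step: keywords are slotted into the word-order positions of a sentence selected from a template library, as the editing module describes. The template library and slot names here are illustrative assumptions, not content from the patent.

```python
# Hypothetical sentence template library: each entry maps the set of
# keyword slots it needs to a sentence with word-order positions.
TEMPLATES = {
    ("name", "weather"): ("Good morning, {name}. There is {weather} "
                          "today; stay at home and play with me."),
    ("name",): "Good morning, {name}.",
}

def edit_reply(keywords):
    """Pick the template whose slots are all covered by the available
    keywords (most specific first) and fill each keyword into its
    word-order position."""
    for slots, template in sorted(TEMPLATES.items(),
                                  key=lambda kv: -len(kv[0])):
        if all(slot in keywords for slot in slots):
            return template.format(**{s: keywords[s] for s in slots})
    return "Good morning."  # fallback when no keyword matches

print(edit_reply({"name": "XXX", "weather": "light rain"}))
```

Sorting templates by descending slot count means the richest matching sentence wins, which is one simple way to realize "select the sentence to be formed containing the sentence elements" from a template library.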
In one or more embodiments of the present invention, the automatic reply device for intelligent voice wake-up further includes:
a user matching module 85 for recognizing the user by voiceprint and matching a keyword for addressing the user; or
obtaining a keyword for addressing the user from the age and gender of the user recognized by voiceprint.
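The two branches of the user matching module can be sketched as a simple fallback chain; the enrolled-user table and the address forms below are invented examples, and the voiceprint recognition itself is assumed to have already happened upstream.

```python
# Hypothetical enrollment table produced by voiceprint recognition.
KNOWN_USERS = {"voiceprint-001": "XXX"}

def address_keyword(voiceprint_id=None, age=None, gender=None):
    """Prefer the enrolled user's own form of address; otherwise fall
    back to a generic one derived from estimated age and gender."""
    if voiceprint_id in KNOWN_USERS:
        return KNOWN_USERS[voiceprint_id]
    if age is not None and gender is not None:
        if age < 12:
            return "little friend"
        return "sir" if gender == "male" else "madam"
    return "friend"  # nothing recognized at all

print(address_keyword(voiceprint_id="voiceprint-001"))
```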
In the automatic reply device for intelligent voice wake-up described in the embodiment of the present invention, the parsing module 81 obtains the IP address of the intelligent voice device and the server that parses it, so as to acquire application information such as the time, location and weather of the device; the configuration module 82 configures corresponding keywords according to this application information; the editing module 83 compiles the keywords into a reply phrase; and the output module 84 outputs the reply phrase by voice on the device with the corresponding device ID. The automatic reply device for intelligent voice wake-up of the present invention can respond to the user according to location; the keywords in the reply phrases are rich and varied, the degree of intelligent response is high, and the user experience is enhanced.
One or more embodiments of the present invention disclose an intelligent voice terminal. Figure 10 is a schematic diagram of the intelligent voice terminal in this embodiment. As shown in Figure 10, the intelligent voice terminal 7 includes a processor 70, a memory 71, and a computer program stored in the memory 71 and runnable on the processor 70. When executing the computer program, the processor 70 implements each step of the automatic reply method for intelligent voice wake-up in Embodiment 1, such as steps S20-S23 shown in Fig. 2. Alternatively, when executing the computer program, the processor 70 implements the functions of each module/unit of the automatic reply device for intelligent voice wake-up in the above embodiment, such as the functions of the parsing module 81, configuration module 82, editing module 83 and output module 84 shown in Fig. 8.
Illustratively, the computer program can be divided into one or more modules/units, which are stored in the memory 71 and executed by the processor 70 to carry out the present invention. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program in the intelligent voice terminal 7. For example, the computer program can be divided into a synchronization module, a summarizing module, an acquisition module and a return module (modules in a virtual device).
The intelligent voice terminal 7 may include, but is not limited to, the processor 70 and the memory 71. Those skilled in the art will understand that Figure 10 is only an example of the intelligent voice terminal 7 and does not constitute a limitation on the intelligent voice terminal 7, which may include more or fewer components than illustrated, combine certain components, or use different components. For example, the intelligent voice terminal may also include input/output devices, network access devices, buses, etc.
The processor 70 may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 71 may be an internal storage unit of the intelligent voice terminal 7, such as a hard disk or internal memory of the intelligent voice terminal 7. The memory 71 may also be an external storage device of the intelligent voice terminal 7, such as a plug-in hard disk, smart media card (Smart Media Card, SMC), secure digital (Secure Digital, SD) card or flash card (Flash Card) equipped on the intelligent voice terminal 7. Further, the memory 71 may include both an internal storage unit of the intelligent voice terminal 7 and an external storage device. The memory 71 is used to store the computer program and other programs and data required by the intelligent voice terminal. The memory 71 may also be used to temporarily store data that has been output or is to be output.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division into the above functional units and modules is given as an example. In practical applications, the above functions may be allocated to different functional units and modules as needed, i.e., the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of this application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, the description of each embodiment has its own emphasis. For parts not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. Skilled professionals may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed device/intelligent voice terminal and method may be implemented in other ways. For example, the device/intelligent voice terminal embodiments described above are merely illustrative. For example, the division of modules or units is only a logical function division; in actual implementation there may be other division manners, e.g., multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present invention may also be completed by instructing the relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of each of the above method embodiments can be implemented. The computer program includes computer program code, which may be in source code form, object code form, an executable file, certain intermediate forms, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, a software distribution medium, etc. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
One or more embodiments of the present invention disclose a non-transient computer-readable storage medium. The non-transient computer-readable storage medium stores computer-executable instructions, and the computer-executable instructions are used to control execution of the steps of the automatic reply method for intelligent voice wake-up. Another embodiment of the present invention discloses a non-transient computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being used to control execution of the automatic reply method for intelligent voice wake-up of any one of the above.
When the technical solutions in each of the above embodiments are implemented in software, the computer instructions and/or data implementing each of the above embodiments may be stored in a computer-readable medium, or transmitted as one or more instructions or code on the readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that a computer can store. By way of example and not limitation: computer-readable media may include RAM, ROM, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage media or other magnetic storage devices, or any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. In addition, any connection may appropriately become a computer-readable medium. For example, if the software is transmitted from a website, server or other remote source using coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL) or wireless technologies such as infrared, radio and microwave, then the coaxial cable, fiber-optic cable, twisted pair, DSL or wireless technologies such as infrared, radio and microwave are included in the definition of the medium.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit the patent scope of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art can still modify the technical solutions recorded in each of the foregoing specific embodiments, or make equivalent replacements for some of the technical features. Any equivalent structures made using the contents of the description and drawings of the present invention and directly or indirectly applied in other related technical fields are likewise within the patent protection scope of the present invention.
Claims (10)
1. An automatic reply method for intelligent voice wake-up, characterized by comprising:
obtaining the IP address and device ID of an intelligent voice terminal, and parsing the service information of the intelligent terminal according to the IP address, the service information being used to match associated keywords, and the device ID being used to identify the intelligent voice terminal; configuring at least one associated keyword according to the service information; editing the keywords to generate a corresponding reply phrase; and feeding the reply phrase back to the intelligent voice terminal.
2. The automatic reply method for intelligent voice wake-up according to claim 1, characterized in that the service information includes time information, location information and weather information of the intelligent voice terminal, wherein the location information is the address information obtained by mapping latitude and longitude information onto a map, and the weather information is obtained synchronously according to the address information.
3. The automatic reply method for intelligent voice wake-up according to claim 2, characterized in that generating the corresponding reply phrase according to the keywords comprises:
sorting the keywords for the service information and setting their word-order positions in the reply phrase;
analyzing the part of speech of each keyword and setting its sentence element in the reply phrase; and
selecting a sentence to be formed containing the sentence elements from a sentence template library, and matching the keywords into the corresponding word-order positions of the sentence to be formed.
4. The automatic reply method for intelligent voice wake-up according to claim 1, characterized by further comprising:
recognizing the identity information of the user by voiceprint and matching a keyword for addressing the user; or
obtaining a keyword for addressing the user from the age and gender of the user recognized by voiceprint.
5. An automatic reply device for intelligent voice wake-up, characterized by comprising:
a parsing module for obtaining the IP address and device ID of an intelligent voice terminal and parsing the service information of the intelligent terminal according to the IP address, the service information being used to match associated keywords, and the device ID being used to identify the intelligent voice terminal;
a configuration module for configuring at least one associated keyword according to the service information;
an editing module for generating a corresponding reply phrase according to the keywords; and
an output module for feeding the reply phrase back to the intelligent voice terminal.
6. The automatic reply device for intelligent voice wake-up according to claim 5, characterized in that the service information parsed by the parsing module includes time information, location information and weather information of the intelligent voice terminal, the location information being the address information obtained by mapping latitude and longitude information onto a map, and the weather information being obtained according to the address information.
7. The automatic reply device for intelligent voice wake-up according to claim 5, characterized in that the editing module comprises:
a word-order setting unit for sorting the keywords for the service information and setting the word-order position of each keyword corresponding to the service information in the reply phrase;
a component determination unit for analyzing the part of speech of each keyword and determining its sentence element in the reply phrase; and
a generation unit for selecting a sentence to be formed containing the sentence elements from a sentence template library and fitting the keywords into the sentence to be formed.
8. The automatic reply device for intelligent voice wake-up according to claim 5, characterized by further comprising:
a user matching module for recognizing the identity information of the user by voiceprint and matching a keyword for addressing the user; or
obtaining a keyword for addressing the user from the age and gender of the user recognized by voiceprint.
9. An intelligent voice terminal, comprising a memory, a processor and a computer program stored in the memory and runnable on the processor, characterized in that the processor, when executing the computer program, implements the automatic reply method for intelligent voice wake-up of any one of claims 1 to 4.
10. A non-transient computer-readable storage medium, characterized in that the non-transient computer-readable storage medium stores computer-executable instructions, the computer-executable instructions being used to control execution of the automatic reply method for intelligent voice wake-up of any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810213878.1A CN110275691A (en) | 2018-03-15 | 2018-03-15 | Automatic reply method, device, terminal and the storage medium that intelligent sound wakes up |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110275691A true CN110275691A (en) | 2019-09-24 |
Family
ID=67958425
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810213878.1A Pending CN110275691A (en) | 2018-03-15 | 2018-03-15 | Automatic reply method, device, terminal and the storage medium that intelligent sound wakes up |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110275691A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111177494A (en) * | 2019-12-27 | 2020-05-19 | 北京天译科技有限公司 | Semantic analysis method in voice interaction based on weather |
CN112395435A (en) * | 2020-11-17 | 2021-02-23 | 华北电力大学扬中智能电气研究中心 | Multimedia resource recommendation method, device, equipment and medium |
CN112420038A (en) * | 2020-10-28 | 2021-02-26 | 深圳创维-Rgb电子有限公司 | Intelligent voice broadcasting method and device capable of self-adapting scene judgment |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103167174A (en) * | 2013-02-25 | 2013-06-19 | 广东欧珀移动通信有限公司 | Output method, device and mobile terminal of mobile terminal greetings |
CN105049500A (en) * | 2015-06-27 | 2015-11-11 | 广东天际电器股份有限公司 | Intelligent small household appliance system capable of identifying user geographic position and collecting user health information and application |
CN105407160A (en) * | 2015-11-27 | 2016-03-16 | 小米科技有限责任公司 | Interface display method and device |
CN106297780A (en) * | 2015-06-03 | 2017-01-04 | 深圳市轻生活科技有限公司 | A kind of voice interactive method and system and Intelligent voice broadcasting terminal |
CN107134279A (en) * | 2017-06-30 | 2017-09-05 | 百度在线网络技术(北京)有限公司 | A kind of voice awakening method, device, terminal and storage medium |
CN107146611A (en) * | 2017-04-10 | 2017-09-08 | 北京猎户星空科技有限公司 | A kind of voice response method, device and smart machine |
CN107463684A (en) * | 2017-08-09 | 2017-12-12 | 珠海市魅族科技有限公司 | Voice replying method and device, computer installation and computer-readable recording medium |
CN107564517A (en) * | 2017-07-05 | 2018-01-09 | 百度在线网络技术(北京)有限公司 | Voice awakening method, equipment and system, cloud server and computer-readable recording medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11295221B2 (en) | Learning user preferences in a conversational system | |
US10853582B2 (en) | Conversational agent | |
WO2018036555A1 (en) | Session processing method and apparatus | |
US11475897B2 (en) | Method and apparatus for response using voice matching user category | |
CN108551766B (en) | Natural language processing for session establishment with service provider | |
US9130900B2 (en) | Assistive agent | |
US20190033957A1 (en) | Information processing system, client terminal, information processing method, and recording medium | |
CN107146611B (en) | Voice response method and device and intelligent equipment | |
US11159462B2 (en) | Communication system and communication control method | |
CN110446057A (en) | Providing method, device, equipment and the readable medium of auxiliary data is broadcast live | |
CN110188177A (en) | Talk with generation method and device | |
CN109829039A (en) | Intelligent chat method, device, computer equipment and storage medium | |
US11074916B2 (en) | Information processing system, and information processing method | |
CN108628921A (en) | Unsolicited content is actively incorporated into human-computer dialogue | |
CN109145104A (en) | For talking with interactive method and apparatus | |
CN106847278A (en) | System of selection and its mobile terminal apparatus and information system based on speech recognition | |
CN110275691A (en) | Automatic reply method, device, terminal and the storage medium that intelligent sound wakes up | |
CN107247769A (en) | Method for ordering song by voice, device, terminal and storage medium | |
CN102497391A (en) | Server, mobile terminal and prompt method | |
US10943603B2 (en) | Systems and methods for a neighborhood voice assistant | |
CN108475282A (en) | Communication system and communication control method | |
CN108038243A (en) | Music recommends method, apparatus, storage medium and electronic equipment | |
CN110222256A (en) | A kind of information recommendation method, device and the device for information recommendation | |
CN109671435A (en) | Method and apparatus for waking up smart machine | |
CN108920657A (en) | Method and apparatus for generating information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190924 |